• Reclaiming Control: Digital Sovereignty in 2025

    Sovereignty has mattered since the invention of the nation state—defined by borders, laws, and taxes that apply within and without. While many have tried to define it, the core idea remains: nations or jurisdictions seek to stay in control, usually to the benefit of those within their borders.
    Digital sovereignty is a relatively new concept, also difficult to define but straightforward to understand. Data and applications don’t understand borders unless they are specified in policy terms, as coded into the infrastructure.
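    To make the idea of borders “coded into the infrastructure” concrete, here is a minimal, hypothetical sketch: a residency check that refuses to provision a resource outside an approved set of regions. The region names, resource fields, and function are illustrative assumptions, not drawn from any particular cloud provider’s API.

```python
# Hypothetical data-residency policy check: a "border" expressed as code.
# Region names and resource fields are illustrative assumptions only.

ALLOWED_REGIONS = {"eu-west-1", "eu-central-1"}  # e.g., an EU-only residency policy

def check_residency(resource: dict) -> None:
    """Raise if a resource would be provisioned outside the approved regions."""
    region = resource.get("region")
    if region not in ALLOWED_REGIONS:
        raise ValueError(
            f"Policy violation: {resource.get('name', 'unnamed resource')} targets "
            f"region '{region}', which is outside the approved jurisdictions."
        )

check_residency({"name": "customer-db-backup", "region": "eu-west-1"})   # passes
# check_residency({"name": "analytics-export", "region": "us-east-1"})   # would raise ValueError
```

    Real deployments would express the same intent through provider policy engines or infrastructure-as-code guardrails rather than an ad hoc function, but the principle is the same: the border only exists because it is written down and enforced.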
    The World Wide Web had no such restrictions at its inception. Communitarian groups such as the Electronic Frontier Foundation, service providers and hyperscalers, non-profits and businesses all embraced a model that suggested data would look after itself.
    But data won’t look after itself, for several reasons. First, data is massively out of control. We generate more of it all the time, and for at least two or three decades (according to historical surveys I’ve run), most organizations haven’t fully understood their data assets. This creates inefficiency and risk—not least, widespread vulnerability to cyberattack.
    Risk is probability times impact—and right now, the probabilities have shot up. Invasions, tariffs, political tensions, and more have brought new urgency. This time last year, the idea of switching off another country’s IT systems was not on the radar. Now we’re seeing it happen—including the U.S. government blocking access to services overseas.
    Digital sovereignty isn’t just a European concern, though it is often framed as such. In South America for example, I am told that sovereignty is leading conversations with hyperscalers; in African countries, it is being stipulated in supplier agreements. Many jurisdictions are watching, assessing, and reviewing their stance on digital sovereignty.
    As the adage goes: a crisis is a problem with no time left to solve it. Digital sovereignty was a problem in waiting—but now it’s urgent. It’s gone from being an abstract ‘right to sovereignty’ to becoming a clear and present issue, in government thinking, corporate risk and how we architect and operate our computer systems.
    What does the digital sovereignty landscape look like today?
    Much has changed since this time last year. Unknowns remain, but much of what was unclear is now starting to solidify. Terminology is clearer: we now talk about classification and localisation, for example, rather than generic concepts.
    We’re seeing a shift from theory to practice. Governments and organizations are putting policies in place that simply didn’t exist before. For example, some countries see “in-country” as the primary goal, whereas others (the UK included) are adopting a risk-based approach built on trusted locales.
    We’re also seeing a shift in risk priorities. From a risk standpoint, the classic triad of confidentiality, integrity, and availability is at the heart of the digital sovereignty conversation. Historically, the focus has been much more on confidentiality, driven by concerns about the US CLOUD Act: essentially, can foreign governments see my data?
    This year however, availability is rising in prominence, due to geopolitics and very real concerns about data accessibility in third countries. Integrity is being talked about less from a sovereignty perspective, but is no less important as a cybercrime target—ransomware and fraud being two clear and present risks.
    Thinking more broadly, digital sovereignty is not just about data, or even intellectual property, but also the brain drain. Countries don’t want all their brightest young technologists leaving university only to end up in California or some other, more attractive country. They want to keep talent at home and innovate locally, to the benefit of their own GDP.
    How Are Cloud Providers Responding?
    Hyperscalers are playing catch-up, still looking for ways to satisfy the letter of the law whilst ignoring (in the French sense) its spirit. It’s not enough for Microsoft or AWS to say they will do everything they can to protect a jurisdiction’s data if they are already legally obliged to do the opposite. Legislation, in this case US legislation, calls the shots—and we all know just how fragile this is right now.
    We see hyperscaler progress where they offer technology to be locally managed by a third party rather than by themselves: for example, Google’s partnership with Thales, or Microsoft’s with Orange, both in France (Microsoft has a similar arrangement in Germany). However, these are point solutions, not part of a general standard. Meanwhile, AWS’ recent announcement about creating a local entity doesn’t solve the problem of US over-reach, which remains a core issue.
    Non-hyperscaler providers and software vendors have an increasingly significant role to play: Oracle and HPE, for example, offer solutions that can be deployed and managed locally; Broadcom/VMware and Red Hat provide technologies that locally situated private cloud providers can host. Digital sovereignty is thus a catalyst for a redistribution of “cloud spend” across a broader pool of players.
    What Can Enterprise Organizations Do About It?
    First, see digital sovereignty as a core element of data and application strategy. For a nation, sovereignty means having solid borders, control over IP, GDP, and so on. That’s the goal for corporations as well—control, self-determination, and resilience.
    If sovereignty isn’t seen as an element of strategy, it gets pushed down into the implementation layer, leading to inefficient architectures and duplicated effort. Far better to decide up front which data, applications and processes need to be treated as sovereign, and to define an architecture to support that.
    This sets the scene for making informed provisioning decisions. Your organization may have made some big bets on key vendors or hyperscalers, but multi-platform thinking increasingly dominates: multiple public and private cloud providers, with integrated operations and management. Sovereign cloud becomes one element of a well-structured multi-platform architecture.
    It is not cost-neutral to deliver on sovereignty, but the overall business value should be tangible. A sovereignty initiative should bring clear advantages, not just for itself, but through the benefits that come with better control, visibility, and efficiency.
    Knowing where your data is, understanding which data matters, managing it efficiently so you’re not duplicating or fragmenting it across systems—these are valuable outcomes. In addition, ignoring these questions can lead to non-compliance or be outright illegal. Even if we don’t use terms like ‘sovereignty’, organizations need a handle on their information estate.
    Organizations shouldn’t assume that everything cloud-based needs to be sovereign; they should build strategies and policies based on data classification, prioritization and risk. Build that picture and you can solve for the highest-priority items first—the data with the strongest classification and greatest risk. That process alone takes care of 80–90% of the problem space, rather than turning sovereignty into yet another problem whilst solving nothing.
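    As a rough illustration of that classification-and-risk-led approach (and of the earlier “risk is probability times impact” framing), the sketch below scores a handful of hypothetical data assets and surfaces the ones to treat as sovereign first. The asset names, classification scheme, and scores are invented for illustration; in practice they would come from a data inventory or catalogue.

```python
# Illustrative sketch: rank hypothetical data assets so that the strongest
# classification and greatest risk (probability x impact) are handled first.
from dataclasses import dataclass

CLASSIFICATION_WEIGHT = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

@dataclass
class DataAsset:
    name: str
    classification: str  # assumed four-level scheme
    probability: float   # estimated likelihood of losing control of the data (0..1)
    impact: float        # estimated business impact if that happens (0..10)

    @property
    def risk(self) -> float:
        return self.probability * self.impact

assets = [
    DataAsset("marketing site content", "public", 0.30, 1.0),
    DataAsset("customer PII records", "restricted", 0.20, 9.0),
    DataAsset("internal wiki", "internal", 0.10, 2.0),
    DataAsset("regulated financial ledger", "confidential", 0.15, 8.0),
]

# Strongest classification first, then highest risk: these are the sovereignty priorities.
ranked = sorted(assets, key=lambda a: (CLASSIFICATION_WEIGHT[a.classification], a.risk), reverse=True)
for asset in ranked:
    print(f"{asset.name:30s} {asset.classification:12s} risk={asset.risk:.2f}")
```

    The ordering logic is the point: strongest classification and greatest risk first, so that lower-priority material doesn’t stall the effort.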
    Where to start? Look after your own organization first
    Sovereignty and systems thinking go hand in hand: it’s all about scope. In enterprise architecture or business design, the biggest mistake is boiling the ocean—trying to solve everything at once.
    Instead, focus on your own sovereignty. Worry about your own organization, your own jurisdiction. Know where your own borders are. Understand who your customers are, and what their requirements are. For example, if you’re a manufacturer selling into specific countries—what do those countries require? Solve for that, not for everything else. Don’t try to plan for every possible future scenario.
    Focus on what you have, what you’re responsible for, and what you need to address right now. Classify and prioritise your data assets based on real-world risk. Do that, and you’re already more than halfway toward solving digital sovereignty—with all the efficiency, control, and compliance benefits that come with it.
    Digital sovereignty isn’t just regulatory, but strategic. Organizations that act now can reduce risk, improve operational clarity, and prepare for a future based on trust, compliance, and resilience.
    The post Reclaiming Control: Digital Sovereignty in 2025 appeared first on Gigaom.
  • What happens to DOGE without Elon Musk?

    Elon Musk may be gone from the Trump administration — and his friendship status with President Donald Trump may be at best uncertain — but his whirlwind stint in government certainly left its imprint. The Department of Government Efficiency (DOGE), his pet government-slashing project, remains entrenched in Washington. During his 130-day tenure, Musk led DOGE in eliminating about 260,000 federal employee jobs and gutting agencies supporting scientific research and humanitarian aid. But to date, DOGE claims to have saved the government $180 billion — well short of its ambitious (and frankly never realistic) target of cutting at least $2 trillion from the federal budget. And with Musk’s departure still fresh, there are reports that the federal government is trying to rehire federal workers who quit or were let go. For Elaine Kamarck, senior fellow at the Brookings Institution, DOGE’s tactics will likely end up being disastrous in the long run. “DOGE came in with these huge cuts, which were not attached to a plan,” she told Today, Explained co-host Sean Rameswaram. Kamarck knows all about making government more efficient. In the 1990s, she ran the Clinton administration’s Reinventing Government program. “I was Elon Musk,” she told Today, Explained. With the benefit of that experience, she assesses Musk’s record at DOGE, and what, if anything, the billionaire’s loud efforts at cutting government spending added up to. Below is an excerpt of the conversation, edited for length and clarity. There’s much more in the full podcast, so listen to Today, Explained wherever you get podcasts, including Apple Podcasts, Pandora, and Spotify.
    What do you think Elon Musk’s legacy is?
    Well, he will not have totally, radically reshaped the federal government. Absolutely not. In fact, there’s a high probability that on January 20, 2029, when the next president takes over, the federal government is about the same size as it is now, and is probably doing the same stuff that it’s doing now. What he did manage to do was insert chaos, fear, and loathing into the federal workforce. There was reporting in the Washington Post late last week that these cuts were so ineffective that the White House is actually reaching out to various federal employees who were laid off and asking them to come back, from the FDA to the IRS to even USAID.
    Which cuts are sticking at this point and which ones aren’t?
    First of all, in a lot of cases, people went to court and the courts have reversed those earlier decisions. So the first thing that happened is, courts said, “No, no, no, you can’t do it this way. You have to bring them back.” The second thing that happened is that Cabinet officers started to get confirmed by the Senate. And remember that a lot of the most spectacular DOGE stuff was happening in February. In February, these Cabinet secretaries were preparing for their Senate hearings. They weren’t on the job. Now that their Cabinet secretary’s home, what’s happening is they’re looking at these cuts and they’re saying, “No, no, no! We can’t live with these cuts because we have a mission to do.”
    As the government tries to hire back the people they fired, they’re going to have a tough time, and they’re going to have a tough time for two reasons. First of all, they treated them like dirt, and they’ve said a lot of insulting things. Second, most of the people who work for the federal government are highly skilled. They’re not paper pushers. We have computers to push our paper, right? They’re scientists. They’re engineers. They’re people with high skills, and guess what? They can get jobs outside the government. So there’s going to be real lasting damage to the government from the way they did this. And it’s analogous to the lasting damage that they’re causing at universities, where we now have top scientists who used to invent great cures for cancer and things like that, deciding to go find jobs in Europe because this culture has gotten so bad.
    What happens to this agency now? Who’s in charge of it?
    Well, what they’ve done is DOGE employees have been embedded in each of the organizations in the government, okay? And they basically — and the president himself has said this — they basically report to the Cabinet secretaries. So if you are in the Transportation Department, you have to make sure that Sean Duffy, who’s the secretary of transportation, agrees with you on what you want to do. And Sean Duffy has already had a fight during a Cabinet meeting with Elon Musk. You know that he has not been thrilled with the advice he’s gotten from DOGE. So from now on, DOGE is going to have to work hand in hand with Donald Trump’s appointed leaders.
    And just to bring this around to what we’re here talking about now, they’re in this huge fight over wasteful spending with the so-called big, beautiful bill. Does this just look like the government as usual, ultimately?
    It’s actually worse than normal. Because the deficit impacts are bigger than normal. It’s adding more to the deficit than previous bills have done. And the second reason it’s worse than normal is that everybody is still living in a fantasy world. And the fantasy world says that somehow we can deal with our deficits by cutting waste, fraud, and abuse. That is pure nonsense. Let me say it: pure nonsense.
    Where does most of the government money go? Does it go to some bureaucrats sitting on Pennsylvania Avenue? It goes to us. It goes to your grandmother and her Social Security and her Medicare. It goes to veterans in veterans benefits. It goes to Americans. That’s why it’s so hard to cut it. It’s so hard to cut it because it’s us. And people are living on it. Now, there’s a whole other topic that nobody talks about, and it’s called entitlement reform, right? Could we reform Social Security? Could we make the retirement age go from 67 to 68? That would save a lot of money. Could we change the cost of living? Nobody, nobody, nobody is talking about that. And that’s because we are in this crazy, polarized environment where we can no longer have serious conversations about serious issues.
  • Is NASA Ready for Death in Space?

    June 3, 2025
    Are We Ready for Death in Space?
    NASA has quietly taken steps to prepare for a death in space. We need to ask how nations will deal with this inevitability now, as more people start traveling off the planet.
    By Peter Cummings, edited by Lee Billings
    In 2012 NASA stealthily slipped a morgue into orbit. No press release. No fanfare. Just a sealed, soft-sided pouch tucked in a cargo shipment to the International Space Station (ISS) alongside freeze-dried meals and scientific gear. Officially, it was called the Human Remains Containment Unit (HRCU). To the untrained eye it looked like a shipping bag for frozen cargo. But to NASA it marked something far more sobering: a major advance in preparing for death beyond Earth.
    As a kid, I obsessed over how astronauts went to the bathroom in zero gravity. Now, decades later, as a forensic pathologist and a perennial applicant to NASA’s astronaut corps, I find myself fixated on a darker, more haunting question: What would happen if an astronaut died out there? Would they be brought home, or would they be left behind? If they expired on some other world, would that be their final resting place? If they passed away on a spacecraft or space station, would their remains be cast off into orbit—or sent on an escape-velocity voyage to the interstellar void?
    NASA, it turns out, has begun working out most of these answers. And none too soon. Because the question itself is no longer if someone will die in space—but when.
    A Graying Corps
    No astronaut has ever died of natural causes off-world. In 1971 the three-man crew of the Soviet Soyuz 11 mission asphyxiated in space when their spacecraft depressurized shortly before its automated atmospheric reentry—but their deaths were only discovered once the spacecraft landed on Earth. Similarly, every U.S. spaceflight fatality to date has occurred within Earth’s atmosphere—under gravity, oxygen and a clear national jurisdiction. That matters, because it means every spaceflight mortality has played out in familiar territory.
    But planned missions are getting longer, with destinations beyond low-Earth orbit. And NASA’s astronaut corps is getting older. The average age now hovers around 50—an age bracket where natural death becomes statistically relevant, even for clean-living fitness buffs. Death in space is no longer a thought experiment. It’s a probability curve—and NASA knows it.
    In response, the agency is making subtle but decisive moves. The most recent astronaut selection cycle was extended—not only to boost intake but also to attract younger crew members capable of handling future long-duration missions.
    NASA’s Space Morgue
    If someone were to die aboard the ISS today, their body would be placed in the HRCU, which would then be sealed and secured in a nonpressurized area to await eventual return to Earth.
    The HRCU itself is a modified version of a military-grade body bag designed to store human remains in hazardous environments. It integrates with refrigeration systems already aboard the ISS to slow decomposition and includes odor-control filters and moisture-absorbent linings, as well as reversed zippers for respectful access at the head. There are straps to secure the body in a seat for return, and patches for name tags and national flags.
    Cadaver tests conducted in 2019 at Sam Houston State University have proved the system durable. Some versions held for over 40 days before decomposition breached the barrier. NASA even drop-tested the bag from 19 feet to simulate a hard landing.
    But it’s never been used in space. And since no one yet knows how a body decomposes in true microgravity, no one can really say whether the HRCU would preserve tissue well enough for a forensic autopsy.
    This is a troubling knowledge gap, because in space, a death isn’t just a tragic loss—it’s also a vital data point. Was an astronaut’s demise from a fluke of their physiology, or an unavoidable stroke of cosmic bad luck—or was it instead a consequence of flaws in a space habitat’s myriad systems that might be found and fixed? Future lives may depend on understanding what went wrong, via a proper postmortem investigation.
    But there’s no medical examiner in orbit. So NASA trains its crews in something called the In-Mission Forensic Sample Collection protocol. The space agency’s astronauts may avoid talking about it, but they all have it memorized: Document everything, ideally with real-time guidance from NASA flight surgeons. Photograph the body. Collect blood and vitreous fluid, as well as hair and tissue samples. Only then can the remains be stowed in the HRCU.
    NASA has also prepared for death outside the station—on spacewalks, the moon or deep space missions. If a crew member perishes in vacuum but their remains are retrieved, the body is wrapped in a specially designed space shroud.
    The goal isn’t just a technical matter of preventing contamination. It’s psychological, too, as a way of preserving dignity. Of all the “firsts” any space agency hopes to achieve, the first-ever human corpse drifting into frame on a satellite feed is not among them.
    If a burial must occur—in lunar regolith or by jettisoning into solar orbit—the body will be dutifully tracked and cataloged, treated forevermore as a hallowed artifact of space history.
    Such gestures are also of relevance to NASA’s plans for off-world mourning; grief and memorial protocols are now part of official crew training. If a death occurs, surviving astronauts are tasked with holding a simple ceremony to honor the fallen—then to move on with their mission.
    Uncharted Realms
    So far we’ve only covered the “easy” questions. NASA and others are still grappling with harder ones.
    Consider the issue of authority over a death and mortal remains. On the ISS, it’s simple: the deceased astronaut’s home country retains jurisdiction. But that clarity fades as destinations grow more distant and the voyages more diverse: What really happens on space-agency missions to the moon, or to Mars? How might rules change for commercial or multinational spaceflights—or, for that matter, the private space stations and interplanetary settlements that are envisioned by Elon Musk, Jeff Bezos and other tech multibillionaires?
    NASA and its partners have started drafting frameworks, like the Artemis Accords—agreements signed by more than 50 nations to govern behavior in space. But even those don’t address many intimate details of death. What happens, for instance, if foul play is suspected? The Outer Space Treaty, a legal document drafted in 1967 under the United Nations that is humanity’s foundational set of rules for orbit and beyond, doesn’t say.
    Of course, not everything can be planned for in advance. And NASA has done an extraordinary job of keeping astronauts in orbit alive. But as more people venture into space, and as the frontier stretches to longer voyages and farther destinations, it becomes a statistical certainty that sooner or later someone won’t come home.
    When that happens, it won’t just be a tragedy. It will be a test. A test of our systems, our ethics and our ability to adapt to a new dimension of mortality. To some, NASA’s preparations for astronautical death may seem merely morbid, even silly—but that couldn’t be further from the truth. Space won’t care, of course, whenever it claims more lives. But we will. And rising to that grim occasion with reverence, rigor and grace will define not just policy out in the great beyond—but what it means to be human there, too.
    #nasa #ready #death #space
    Is NASA Ready for Death in Space?
    June 3, 2025 | 5 min read
    Are We Ready for Death in Space?
    NASA has quietly taken steps to prepare for a death in space. We need to ask how nations will deal with this inevitability now, as more people start traveling off the planet.
    By Peter Cummings, edited by Lee Billings

    In 2012 NASA stealthily slipped a morgue into orbit. No press release. No fanfare. Just a sealed, soft-sided pouch tucked in a cargo shipment to the International Space Station (ISS) alongside freeze-dried meals and scientific gear. Officially, it was called the Human Remains Containment Unit (HRCU). To the untrained eye it looked like a shipping bag for frozen cargo. But to NASA it marked something far more sobering: a major advance in preparing for death beyond Earth.

    As a kid, I obsessed over how astronauts went to the bathroom in zero gravity. Now, decades later, as a forensic pathologist and a perennial applicant to NASA’s astronaut corps, I find myself fixated on a darker, more haunting question: What would happen if an astronaut died out there? Would they be brought home, or would they be left behind? If they expired on some other world, would that be their final resting place? If they passed away on a spacecraft or space station, would their remains be cast off into orbit—or sent on an escape-velocity voyage to the interstellar void?

    NASA, it turns out, has begun working out most of these answers. And none too soon. Because the question itself is no longer if someone will die in space—but when.

    A Graying Corps

    No astronaut has ever died of natural causes off-world. In 1971 the three-man crew of the Soviet Soyuz 11 mission asphyxiated in space when their spacecraft depressurized shortly before its automated atmospheric reentry—but their deaths were only discovered once the spacecraft landed on Earth. Similarly, every U.S. spaceflight fatality to date has occurred within Earth’s atmosphere—under gravity, oxygen and a clear national jurisdiction. That matters, because it means every spaceflight mortality has played out in familiar territory.

    But planned missions are getting longer, with destinations beyond low-Earth orbit. And NASA’s astronaut corps is getting older. The average age now hovers around 50—an age bracket where natural death becomes statistically relevant, even for clean-living fitness buffs. Death in space is no longer a thought experiment. It’s a probability curve—and NASA knows it.

    In response, the agency is making subtle but decisive moves. The most recent astronaut selection cycle was extended—not only to boost intake but also to attract younger crew members capable of handling future long-duration missions.

    NASA’s Space Morgue

    If someone were to die aboard the ISS today, their body would be placed in the HRCU, which would then be sealed and secured in a nonpressurized area to await eventual return to Earth.

    The HRCU itself is a modified version of a military-grade body bag designed to store human remains in hazardous environments. It integrates with refrigeration systems already aboard the ISS to slow decomposition and includes odor-control filters and moisture-absorbent linings, as well as reversed zippers for respectful access at the head. There are straps to secure the body in a seat for return, and patches for name tags and national flags.

    Cadaver tests conducted in 2019 at Sam Houston State University proved the system durable. Some versions held for over 40 days before decomposition breached the barrier. NASA even drop-tested the bag from 19 feet to simulate a hard landing.

    But it has never been used in space. And since no one yet knows how a body decomposes in true microgravity, no one can really say whether the HRCU would preserve tissue well enough for a forensic autopsy.

    This is a troubling knowledge gap, because in space a death isn’t just a tragic loss—it’s also a vital data point. Was an astronaut’s demise a fluke of their physiology, an unavoidable stroke of cosmic bad luck—or was it instead a consequence of flaws in a space habitat’s myriad systems that might be found and fixed? Future lives may depend on understanding what went wrong, via a proper postmortem investigation.

    But there’s no medical examiner in orbit. So NASA trains its crews in something called the In-Mission Forensic Sample Collection protocol. The space agency’s astronauts may avoid talking about it, but they all have it memorized: document everything, ideally with real-time guidance from NASA flight surgeons. Photograph the body. Collect blood and vitreous fluid, as well as hair and tissue samples. Only then can the remains be stowed in the HRCU.

    NASA has also prepared for death outside the station—on spacewalks, the moon or deep-space missions. If a crew member perishes in vacuum but their remains are retrieved, the body is wrapped in a specially designed space shroud.

    The goal isn’t just a technical matter of preventing contamination. It’s psychological, too: a way of preserving dignity. Of all the “firsts” any space agency hopes to achieve, the first-ever human corpse drifting into frame on a satellite feed is not among them.

    If a burial must occur—in lunar regolith or by jettisoning into solar orbit—the body will be dutifully tracked and cataloged, treated forevermore as a hallowed artifact of space history.

    Such gestures are also relevant to NASA’s plans for off-world mourning; grief and memorial protocols are now part of official crew training. If a death occurs, surviving astronauts are tasked with holding a simple ceremony to honor the fallen—then moving on with their mission.

    Uncharted Realms

    So far we’ve only covered the “easy” questions. NASA and others are still grappling with harder ones.

    Consider the issue of authority over a death and mortal remains. On the ISS, it’s simple: the deceased astronaut’s home country retains jurisdiction. But that clarity fades as destinations grow more distant and the voyages more diverse. What happens on space-agency missions to the moon, or to Mars? How might the rules change for commercial or multinational spaceflights—or, for that matter, for the private space stations and interplanetary settlements envisioned by Elon Musk, Jeff Bezos and other tech multibillionaires?

    NASA and its partners have started drafting frameworks, like the Artemis Accords—agreements signed by more than 50 nations to govern behavior in space. But even those don’t address many intimate details of death. What happens, for instance, if foul play is suspected? The Outer Space Treaty, a legal document drafted in 1967 under the United Nations that remains humanity’s foundational set of rules for orbit and beyond, doesn’t say.

    Of course, not everything can be planned for in advance. And NASA has done an extraordinary job of keeping astronauts in orbit alive. But as more people venture into space, and as the frontier stretches to longer voyages and farther destinations, it becomes a statistical certainty that sooner or later someone won’t come home.

    When that happens, it won’t just be a tragedy. It will be a test. A test of our systems, our ethics and our ability to adapt to a new dimension of mortality. To some, NASA’s preparations for astronautical death may seem merely morbid, even silly—but that couldn’t be further from the truth.

    Space won’t care, of course, whenever it claims more lives. But we will. And rising to that grim occasion with reverence, rigor and grace will define not just policy out in the great beyond—but what it means to be human there, too.
  • Top Artificial Intelligence AI Books to Read in 2025

    Artificial Intelligence (AI) has been making significant strides over the past few years, with the emergence of Large Language Models (LLMs) marking a major milestone in its growth. With such widespread adoption, feeling left out of this revolution is not uncommon. One way to stay up to date with the latest trends is by reading books on the various facets of AI. The following are the top AI books to read in 2025.
    Deep Learning (Adaptive Computation and Machine Learning series)
    This book covers a wide range of deep learning topics along with their mathematical and conceptual background. It also provides information on the different deep learning techniques used in various industrial applications.
    Python: Advanced Guide to Artificial Intelligence
    This book helps individuals familiarize themselves with the most popular machine learning (ML) algorithms and delves into the details of deep learning, covering topics like CNNs, RNNs, etc. It provides a comprehensive understanding of advanced AI concepts while focusing on their practical implementation using Python.
    Machine Learning (in Python and R) for Dummies
    This book explains the fundamentals of machine learning by providing practical examples using Python and R. It is a beginner-friendly guide and a good starting point for people new to this field.

    Machine Learning for Beginners
    Given the pace with which machine learning systems are growing, this book provides a good base for anyone shifting to this field. The author talks about machine intelligence’s historical background and provides beginners with information on how advanced algorithms work.
    Artificial Intelligence: A Modern Approach
    This is a well-acclaimed book that covers the breadth of AI topics, including problem-solving, knowledge representation, machine learning, and natural language processing. It provides theoretical explanations along with practical examples, making it an excellent starting point for anyone looking to dive into the world of AI.
    Human Compatible: Artificial Intelligence and the Problem of Control
    The book discusses the inevitable conflict between humans and machines, providing important context before we advocate for AI. The author also talks about the possibility of superhuman AI and questions the concepts of human comprehension and machine learning.
    The Alignment Problem: Machine Learning and Human Values
    This book talks about a concept called “the alignment problem,” where the systems we aim to teach don’t perform as expected and various ethical and existential risks emerge.
    Life 3.0: Being Human in the Age of Artificial Intelligence
    The author of this book talks about questions like what the future of AI will look like and the possibility of superhuman intelligence becoming our master. He also talks about how we can ensure these systems perform without malfunctioning.
    The Coming Wave: Technology, Power, and the Twenty-First Century’s Greatest Dilemma
    This book warns about the risks that emerging technologies pose to global order. It covers topics like robotics and large language models and examines the forces that fuel these innovations.
    Artificial Intelligence Engines: A Tutorial Introduction to the Mathematics of Deep Learning
    “Artificial Intelligence Engines” dives into the mathematical foundations of deep learning. It provides a holistic understanding of deep learning, covering both the historical development of neural networks as well as modern techniques and architecture while focusing on the underlying mathematical concepts.
    Neural Networks and Deep Learning
    This book covers the fundamental concepts of neural networks and deep learning. It also covers the mathematical aspects of the same, covering topics like linear algebra, probability theory, and numerical computation.
    Artificial Intelligence for Humans
    This book explains how AI algorithms work using actual numeric calculations. It targets readers without an extensive mathematical background, and each unit is followed by examples in different programming languages.
    AI Superpowers: China, Silicon Valley, and the New World Order
    The author of this book explains the unexpected consequences of AI development. The book sheds light on the competition between the USA and China over AI innovations through actual events.
    Hello World: Being Human in the Age of Algorithms
    The author talks about the powers and limitations of the algorithms that are widely used today. The book prepares its readers for the moral uncertainties of a world run by code.
    The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World
    This book talks about the concept of the “Master algorithm,” which is a single, overarching learning algorithm capable of incorporating different approaches.
    Applied Artificial Intelligence: A Handbook for Business Leaders
    “Applied Artificial Intelligence” provides a guide for businesses on how to leverage AI to drive innovation and growth. It covers various applications of AI and also explores its ethical considerations. Additionally, it sheds light on building AI teams and talent acquisition. 
    Superintelligence: Paths, Dangers, Strategies
    This book asks questions like whether AI agents will save or destroy us and what happens when machines surpass humans in general intelligence. The author talks about the importance of global collaboration in developing safe AI.

    We make a small profit from purchases made via referral/affiliate links attached to each book mentioned in the above list.
    If you want to suggest any book that we missed from this list, then please email us at asif@marktechpost.com
  • Want to lower your dementia risk? Start by stressing less

    The probability of any American having dementia in their lifetime may be far greater than previously thought. For instance, a 2025 study that tracked a large sample of American adults across more than three decades found that their average likelihood of developing dementia between ages 55 and 95 was 42%, and that figure was even higher among women, Black adults and those with genetic risk.

    Now, a great deal of attention is being paid to how to stave off cognitive decline in the aging American population. But what is often missing from this conversation is the role that chronic stress can play in how well people age from a cognitive standpoint, as well as everybody’s risk for dementia.

    We are professors at Penn State in the Center for Healthy Aging, with expertise in health psychology and neuropsychology. We study the pathways by which chronic psychological stress influences the risk of dementia and how it influences the ability to stay healthy as people age.

    Recent research shows that Americans who are currently middle-aged or older report experiencing more frequent stressful events than previous generations. A key driver behind this increase appears to be rising economic and job insecurity, especially in the wake of the 2007-2009 Great Recession and ongoing shifts in the labor market. Many people stay in the workforce longer due to financial necessity, as Americans are living longer and face greater challenges covering basic expenses in later life.

    Therefore, it may be more important than ever to understand the pathways by which stress influences cognitive aging.

    Social isolation and stress

    Although everyone experiences some stress in daily life, some people experience stress that is more intense, persistent or prolonged. It is this relatively chronic stress that is most consistently linked with poorer health.

    In a recent review paper, our team summarized how chronic stress is a hidden but powerful factor underlying cognitive aging, or the speed at which your cognitive performance slows down with age.

    It is hard to overstate the impact of stress on your cognitive health as you age. This is in part because your psychological, behavioral and biological responses to everyday stressful events are closely intertwined, and each can amplify and interact with the other.

    For instance, living alone can be stressful—particularly for older adults—and being isolated makes it more difficult to live a healthy lifestyle, as well as to detect and get help for signs of cognitive decline.

    Moreover, stressful experiences—and your reactions to them—can make it harder to sleep well and to engage in other healthy behaviors, like getting enough exercise and maintaining a healthy diet. In turn, insufficient sleep and a lack of physical activity can make it harder to cope with stressful experiences.

    Stress is often missing from dementia prevention efforts

    A robust body of research highlights the importance of at least 14 different factors that relate to your risk of Alzheimer’s disease, a common and devastating form of dementia, and of other forms of dementia. Although some of these factors may be outside of your control, such as diabetes or depression, many of them involve things that people do, such as physical activity, healthy eating and social engagement.

    What is less well-recognized is that chronic stress is intimately interwoven with all of these factors that relate to dementia risk. Our work and research by others that we reviewed in our recent paper demonstrate that chronic stress can affect brain function and physiology, influence mood and make it harder to maintain healthy habits. Yet, dementia prevention efforts rarely address stress.

    Avoiding stressful events and difficult life circumstances is typically not an option.

    Where and how you live and work plays a major role in how much stress you experience. For example, people with lower incomes, less education or those living in disadvantaged neighborhoods often face more frequent stress and have fewer forms of support—such as nearby clinics, access to healthy food, reliable transportation or safe places to exercise or socialize—to help them manage the challenges of aging. As shown in recent work on brain health in rural and underserved communities, these conditions can shape whether people have the chance to stay healthy as they age.

    Over time, the effects of stress tend to build up, wearing down the body’s systems and shaping long-term emotional and social habits.

    Lifestyle changes to manage stress and lessen dementia risk

    The good news is that there are multiple things that can be done to slow or prevent dementia, and our review suggests that these can be enhanced if the role of stress is better understood.

    Whether you are a young, midlife or an older adult, it is not too early or too late to address the implications of stress on brain health and aging. Here are a few ways you can take direct actions to help manage your level of stress:

    Follow lifestyle behaviors that can improve healthy aging. These include following a healthy diet, engaging in physical activity and getting enough sleep. Even small changes in these domains can make a big difference.

    Prioritize your mental health and well-being to the extent you can. Things as simple as talking about your worries, asking for support from friends and family and going outside regularly can be immensely valuable.

    If your doctor says that you or someone you care about should follow a new health care regimen, or suggests there are signs of cognitive impairment, ask them what support or advice they have for managing related stress.

    If you or a loved one feel socially isolated, consider how small shifts could make a difference. For instance, research suggests that adding just one extra interaction a day—even if it’s a text message or a brief phone call—can be helpful, and that even interactions with people you don’t know well, such as at a coffee shop or doctor’s office, can have meaningful benefits.

    Walkable neighborhoods, lifelong learning

    A 2025 study identified stress as one of 17 overlapping factors that affect the odds of developing any brain disease, including stroke, late-life depression and dementia. This work suggests that addressing stress and overlapping issues such as loneliness may have additional health benefits as well.

    However, not all individuals or families are able to make big changes on their own. Research suggests that community-level and workplace interventions can reduce the risk of dementia. For example, safe and walkable neighborhoods and opportunities for social connection and lifelong learning—such as through community classes and events—have the potential to reduce stress and promote brain health.

    Importantly, researchers have estimated that even a modest delay in the onset of Alzheimer’s disease would save hundreds of thousands of dollars for every American affected. Thus, providing incentives to companies that offer stress management resources could ultimately save money as well as help people age more healthfully.

    In addition, stress related to the stigma around mental health and aging can discourage people from seeking support that would benefit them. Even just thinking about your risk of dementia can be stressful in itself. Things can be done about this, too. For instance, normalizing the use of hearing aids and integrating reports of perceived memory and mental health issues into routine primary care and workplace wellness programs could encourage people to engage with preventive services earlier.

    Although research on potential biomedical treatments is ongoing and important, there is currently no cure for Alzheimer’s disease. However, if interventions aimed at reducing stress were prioritized in guidelines for dementia prevention, the benefits could be far-reaching, resulting in both delayed disease onset and improved quality of life for millions of people.

    Jennifer E. Graham-Engeland is a professor of biobehavioral health at Penn State.

    Martin J. Sliwinski is a professor of human development and family studies at Penn State.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.
  • A Rogue Star Could Hurl Earth Into Deep Space, Study Warns

    Billions of years from now, the Sun will swell into a red giant, swallowing Mercury, Venus, and Earth. But that’s not the only way our planet could meet its demise. A new simulation points to the menacing threat of a passing field star that could cause the planets in the solar system to collide or fling Earth far from the Sun. When attempting to model the evolution of the solar system, astronomers have often treated our host star and its orbiting planets as an isolated system. In reality, however, the Milky Way is teeming with stars that may get too close and threaten the stability of the solar system. A new study, published in the journal Icarus, suggests that stars passing close to the solar system could perturb the orbits of the planets, potentially causing another planet to smack into Earth or sending our home planet flying. In most cases, passing stars are inconsequential, but one could trigger chaos in the solar system—mainly because of a single planet. The closest planet to the Sun, Mercury, is prone to instability, as its orbit can become more elliptical. Astronomers believe that this increasing eccentricity could destabilize Mercury’s orbit, potentially leading it to collide with Venus or the Sun. If a star happens to pass nearby, it would only make things worse.

    The researchers ran 2,000 simulations using NASA’s Horizons System, a tool from the Solar System Dynamics Group that precisely tracks the positions of objects in our solar system. They then inserted scenarios involving passing stars and found that stellar flybys over the next 5 billion years could make the solar system about 50% less stable. With passing stars, Pluto has a 3.9% chance of being ejected from the solar system, while Mercury and Mars are the two planets most often lost after a stellar flyby. Earth’s instability rate is lower, but its orbit has a greater chance of becoming unstable if another destabilized planet ends up crashing into it. “In addition, we find that the nature of stellar-driven instabilities is more violent than internally driven ones,” the researchers wrote in the paper. “The loss of multiple planets in stellar-driven instabilities is common and occurs about 50% of the time, whereas it appears quite rare for internally driven instabilities.” The probability of Earth’s orbit becoming unstable is hundreds of times larger than prior estimates, according to the study. Well, that just gives us one more thing to worry about.
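
    To make those percentages concrete, here is a minimal illustrative sketch, in Python, of how such figures are typically tallied from a batch of long-term simulations: each run either loses a given body or it does not, and the reported number is simply the fraction of runs in which it is lost. The per-run outcome rates below are made-up placeholders for illustration, not data from the Icarus study or output from NASA’s Horizons System.

    import random

    random.seed(42)
    N_RUNS = 2000  # the study ran 2,000 simulations

    # Hypothetical per-run outcomes. In the real study, each run would be a
    # long-term N-body integration that includes a sampled stellar flyby;
    # here we simply draw placeholder outcomes at assumed rates.
    def simulate_run():
        lost = set()
        if random.random() < 0.039:  # assumed rate of Pluto being ejected
            lost.add("Pluto")
        if random.random() < 0.002:  # assumed (lower) rate of Earth instability
            lost.add("Earth")
        return lost

    runs = [simulate_run() for _ in range(N_RUNS)]

    def loss_fraction(body):
        # Fraction of runs in which the given body was lost or destabilized.
        return sum(body in lost for lost in runs) / len(runs)

    print(f"Pluto lost in {loss_fraction('Pluto'):.1%} of runs")
    print(f"Earth destabilized in {loss_fraction('Earth'):.1%} of runs")

    With 2,000 runs, the binomial uncertainty on a roughly 4% rate is only a few tenths of a percentage point, which is one reason studies of this kind rely on large ensembles of simulations.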
  • AI is rotting your brain and making you stupid

    For nearly 10 years I have written about science and technology, and I’ve been an early adopter of new tech for much longer. As a teenager in the mid-1990s I annoyed the hell out of my family by jamming up the phone line for hours with a dial-up modem, connecting to bulletin board communities all over the country.

    When I started writing professionally about technology in 2016 I was all for our seemingly inevitable transhumanist future. When the chip is ready I want it immediately stuck in my head, I remember saying proudly in our busy office. Why not improve ourselves where we can?

    Since then, my general view on technology has dramatically shifted. Watching a growing class of super-billionaires erode the democratizing nature of technology by maintaining corporate controls over what we use and how we use it has fundamentally changed my personal relationship with technology. Seeing deeply disturbing philosophical stances like longtermism, effective altruism, and singularitarianism envelop the minds of those rich, powerful men controlling the world has only further entrenched inequality.

    A recent Black Mirror episode really rammed home the perils we face by having technology so controlled by capitalist interests. A sick woman is given a brain implant connected to a cloud server to keep her alive. The system is managed through a subscription service where the user pays for monthly access to the cognitive abilities managed by the implant. As time passes, that subscription cost gets more and more expensive - and well, it’s Black Mirror, so you can imagine where things end up.

    Titled 'Common People', the episode is from series 7 of Black Mirror (Image: Netflix)

    The enshittification of our digital world has been impossible to ignore. You’re not imagining things, Google Search is getting worse.

    But until the emergence of AI (or, as we’ll discuss later, large language models that pretend to look and sound like an artificial intelligence), I’ve never been truly concerned about a technological innovation, in and of itself.

    A recent article looked at how generative AI tech such as ChatGPT is being used by university students. The piece was authored by a tech admin at New York University and it’s filled with striking insights into how AI is shaking the foundations of educational institutions.

    Not surprisingly, students are using ChatGPT for everything from summarizing complex texts to completely writing essays from scratch. But one of the reflections quoted in the article immediately jumped out at me.

    When a student was asked why they relied on generative AI so much when putting work together, they responded, “You’re asking me to go from point A to point B, why wouldn’t I use a car to get there?”

    My first response was, of course, why wouldn’t you? It made complete sense.

    For a second.

    And then I thought, hang on, what is being lost by speeding from point A to point B in a car?

    What if the quickest way from point A to point B wasn't the best way to get there? (Image: Depositphotos)

    Let’s further the analogy. You need to go to the grocery store. It’s a 10-minute walk away but a three-minute drive. Why wouldn’t you drive?

    Well, the only benefit of driving is saving time. That’s inarguable. You’ll be back home and cooking up your dinner before the person on foot even gets to the grocery store.

    Congratulations. You saved yourself about 20 minutes. In a world where efficiency trumps everything this is the best choice. Use that extra 20 minutes in your day wisely.

    But what are the benefits of not driving, taking the extra time, and walking?

    First, you have environmental benefits: you avoid using a car unnecessarily and spewing emissions into the air, either directly from combustion or indirectly for those with electric cars.

    Secondly, you have health benefits from the little bit of exercise you get by walking. Our sedentary lives are quite literally killing us, so a 20-minute walk a day is likely to be incredibly positive for your health.

    But there are also more abstract benefits to be gained by walking this short trip from A to B.

    Walking connects us to our neighborhood. It slows things down. It helps us better understand the community and environment we are living in. A recent study summarized the benefits of walking around your neighborhood, suggesting the practice leads to greater social connectedness and reduced feelings of isolation.

    So what are we losing when we use a car to get from point A to point B? Potentially a great deal.

    But let’s move out of abstraction and into the real world.

    An article in the Columbia Journalism Review asked nearly 20 news media professionals how they were integrating AI into their personal workflow. The responses were wildly varied. Some journalists refused to use AI for anything more than superficial interview transcription, while others use it broadly: to edit text, answer research questions, summarize large bodies of science text, or search massive troves of data for salient bits of information.

    In general, the line almost all those media professionals shared was that they would never explicitly use AI to write their articles. But for some, almost every other stage of the creative process in developing a story was fair game for AI assistance.

    I found this a little horrifying. Farming out certain creative development processes to AI felt not only ethically wrong but also like key cognitive stages were being lost, skipped over, considered unimportant.

    I’ve never considered myself to be an extraordinarily creative person. I don’t feel like I come up with new or original ideas when I work. Instead, I see myself more as a compiler. I enjoy finding connections between seemingly disparate things. Linking ideas and using those pieces as building blocks to create my own work. As a writer and journalist I see this process as the whole point.

    A good example of this is a story I published in late 2023 investigating the relationship between long Covid and psychedelics. The story began earlier in the year when I read an intriguing study linking long Covid with serotonin abnormalities in the gut. Being interested in the science of psychedelics, and knowing that psychedelics very much influence serotonin receptors, I wondered if there could be some kind of link between these two seemingly disparate topics.

    The idea sat in the back of my mind for several months, until I came across a person who told me they had been actively treating their own long Covid symptoms with a variety of psychedelic remedies. After an expansive and fascinating interview I started diving into different studies looking to understand how certain psychedelics affect the body, and whether there could be any associations with long Covid treatments.

    Eventually I stumbled across a few compelling associations. It took weeks of reading different scientific studies, speaking to various researchers, and thinking about how several discordant threads could be somehow linked.

    Could AI have assisted me in the process of developing this story? No. Because ultimately, the story comprised an assortment of novel associations that I drew between disparate ideas, all encapsulated within the frame of a person’s subjective experience.

    And it is this idea of novelty that is key to understanding why modern AI technology is not actually intelligence but a simulation of intelligence.

    LLMs are sophisticated language imitators, delivering responses that resemble what they think a response would look like (Image: Depositphotos)

    ChatGPT, and all the assorted clones that have emerged over the last couple of years, are a form of technology called LLMs (large language models). At the risk of enraging those who actually work in this mind-bendingly complex field, I’m going to dangerously over-simplify how these things work.

    It’s important to know that when you ask a system like ChatGPT a question, it doesn’t understand what you are asking it. The response these systems generate to any prompt is simply a simulation of what the model computes a response would look like, based on a massive training dataset.

    So if I were to ask the system a random question like, “What color are cats?”, the system would draw on everything it absorbed during training about cats and colors to create a response that mirrors the way most pre-existing text talks about cats and colors. The system builds its response word by word, creating something that reads coherently to us, by establishing a probability for what word should follow each prior word. It’s not thinking, it’s imitating. (A toy sketch of this word-by-word guessing follows the quote below.)

    What these generative AI systems are spitting out are word-salad amalgams of what they compute the response to your prompt should look like, based on training from millions of books and webpages that have been previously published.

    Setting aside for a moment the accuracy of the responses these systems deliver, I am more interested in (or concerned about) the cognitive stages that this technology allows us to skip past.

    For thousands of years we have used technology to improve our ability to manage highly complex tasks. The idea is called cognitive offloading, and it’s as simple as writing something down on a notepad or saving a contact number on your smartphone. There are pros and cons to cognitive offloading, and scientists have been digging into the phenomenon for years.

    As long as we have been doing it, there have been people criticizing the practice. The legendary Greek philosopher Socrates was notorious for his skepticism around the written word. He believed knowledge emerged through a dialectical process, so writing itself was reductive. He even went so far as to suggest (according to his student Plato, who did write things down) that writing makes us dumber.

    “For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.”

    Wrote Plato, quoting Socrates
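    To make the earlier word-by-word description concrete, here is a deliberately tiny sketch of next-word prediction using bigram counts. Real LLMs use neural networks over sub-word tokens, trained on vastly more text, but the generate-one-word-at-a-time loop is the part this toy illustrates; the corpus and function names are invented for the example.

```python
# A deliberately tiny caricature of next-word prediction: count which word follows
# which in a small corpus, then generate text by repeatedly sampling the next word.
import random
from collections import Counter, defaultdict

corpus = (
    "cats are often black . cats are often white . "
    "cats are sometimes orange . dogs are often brown ."
).split()

# Build a table of next-word counts for each word (a bigram model).
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def generate(start: str, length: int = 8) -> str:
    word, output = start, [start]
    for _ in range(length):
        counts = next_words.get(word)
        if not counts:
            break
        candidates, weights = zip(*counts.items())
        word = random.choices(candidates, weights=weights)[0]  # sample the next word
        output.append(word)
    return " ".join(output)

print(generate("cats"))  # e.g. "cats are often white . cats are sometimes orange"
```

    The toy never “knows” anything about cats; it only reproduces the statistics of what followed what in its training text, which is the over-simplified picture being gestured at here.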

    Almost every technological advancement in human history has been accompanied by someone suggesting it will be damaging. Calculators have destroyed our ability to properly do math. GPS has corrupted our spatial memory. Typewriters killed handwriting. Computer word processors killed typewriters. Video killed the radio star.

    And what have we lost? Well, zooming in on writing, for example, a 2020 study claimed brain activity is greater when a note is handwritten as opposed to being typed on a keyboard. And a 2021 study suggested memory retention is better when using a pen and paper versus a stylus and tablet. So there are certainly trade-offs whenever we choose to use a technological tool to offload a cognitive task.

    There’s an oft-told story about gonzo journalist Hunter S. Thompson. It may be apocryphal, but it certainly is meaningful. He once said he sat down and typed out the entirety of The Great Gatsby, word for word. According to Thompson, he wanted to know what it felt like to write a great novel.

    Thompson was infamous for writing everything on typewriters, even when computers emerged in the 1990s (Image: Public Domain)

    I don’t want to get all wishy-washy here, but these are the brass tacks we ultimately come down to. What does it feel like to think? What does it feel like to be creative? What does it feel like to understand something?

    A recent interview with Satya Nadella, CEO of Microsoft, reveals how deeply AI has infiltrated his life and work. Not only does Nadella utilize nearly a dozen different custom-designed AI agents to manage every part of his workflow – from summarizing emails to managing his schedule – but he also uses AI to get through podcasts quickly on his way to work. Instead of actually listening to the podcasts, he has transcripts uploaded to an AI assistant, which he then chats with about the information while commuting.

    Why listen to the podcast when you can get the gist through a summary? Why read a book when you can listen to the audio version at 2x speed? Or better yet, watch the movie? Or just read a Wikipedia entry. Or get AI to summarize the Wikipedia entry.

    I’m not here to judge anyone on the way they choose to use technology. Do what you want with ChatGPT. But for a moment consider what you may be skipping over by racing from point A to point B.

    Sure, you can give ChatGPT a set of increasingly detailed prompts, adding complexity to its summary of a scientific paper or a podcast, but at what point do the prompts get so granular that you may as well read the paper itself? If you get generative AI to skim and summarize something, what is it missing? If something was worth being written, then surely it is worth being read?

    If there is a more succinct way to say something, then maybe we should say it more succinctly.

    In a magnificent article for The New Yorker, Ted Chiang perfectly summed up the deep contradiction at the heart of modern generative AI systems. He argues language, and writing, is fundamentally about communication. If we write an email to someone, we can expect the person at the other end to receive those words and consider them with some kind of thought or attention. But modern AI systems (or these simulations of intelligence) are erasing our ability to think, consider, and write. Where does it all end? For Chiang, it’s a pretty dystopian feedback loop of dialectical slop.

    “We are entering an era where someone might use a large language model to generate a document out of a bulleted list, and send it to a person who will use a large language model to condense that document into a bulleted list. Can anyone seriously argue that this is an improvement?”

    Ted Chiang
  • 3 Ways ‘Game Theory’ Could Benefit You At Work, By A Psychologist

    From office politics to salary negotiations, treating work like a strategy game can give you a real-world edge. But should you? (Image: Getty)
    I recently had a revealing conversation with a friend — a game developer — who admitted, almost sheepishly, that while he was fluent in the mechanics of game theory, he rarely applied it outside of code. That got me thinking.

    For most people, game theory lives in two corners of life: economics classrooms and video games. It’s a phrase that evokes images of Cold War negotiations or player-versus-player showdowns. And to their credit, those associations are well grounded.

    At its core, game theory studies how people make decisions when outcomes hinge not just on their choices, but on others’ choices too. Originally a mathematical model developed to analyze strategic interactions, it’s now applied to everything from dating apps to corporate strategy.

    But in real life, nobody is perfectly rational. We don’t just calculate; we feel, too. That’s where the brain kicks in.

    According to the “Expected Value of Control” framework from cognitive neuroscience, we calibrate our effort by asking two questions:

    How big is the reward?
    How much control do I have in getting it?

    When both answers are high, motivation spikes. When either drops, we disengage. Research shows this pattern in real time — the brain works harder when success feels attainable.
    This mirrors game theory’s central question: not just what the outcomes are, but whether it’s worth trying at all. Using a game theory lens in a professional setting, then, can be messy and sometimes bring unwanted emotional repercussions. The saving grace, however, is that workplace behavior tends to follow intuitive patterns and is, arguably, predictable.
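    As a loose illustration of the reward-and-control calculation described above (a stylized sketch, not the formal Expected Value of Control model from the neuroscience literature; every number below is invented), the effort decision can be written as a small optimization:

```python
# A stylized (not formal) sketch of the Expected Value of Control idea: effort is
# worth allocating when the reward is large and you have enough control for extra
# effort to actually move the odds. All numbers here are invented for illustration.

def expected_value_of_effort(effort: float, reward: float, control: float,
                             effort_cost: float = 1.0) -> float:
    """Expected payoff of exerting `effort` (0..1) given the `reward` size and
    `control` (0..1, how much your effort shifts the probability of success)."""
    p_success = min(1.0, 0.1 + control * effort)  # a little luck plus the controllable part
    return p_success * reward - effort_cost * effort

def best_effort(reward: float, control: float) -> float:
    """Pick the effort level (0, 0.25, ..., 1.0) with the highest expected value."""
    levels = [i / 4 for i in range(5)]
    return max(levels, key=lambda e: expected_value_of_effort(e, reward, control))

# High reward and high control -> effort spikes; negligible control -> we disengage.
print(best_effort(reward=10, control=0.9))   # -> 1.0
print(best_effort(reward=10, control=0.05))  # -> 0.0
```

    When control is high, the highest-effort option wins; when control is near zero, the sketch picks zero effort and effectively disengages, mirroring the pattern described above.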
    So should you actually apply game theory to your professional life? Yes, but not as gospel, and not all the time. Being too focused on identifying, labeling and trying to “win” every interaction can backfire.

    It can make you seem cold and calculating, even when you’re not, and it can open the door to misunderstandings or quiet resentment. Put simply, it’s important to be aware of how your choices affect others and how theirs affect yours, but it’s also dangerously easy for that awareness to tip over into an unproductive state of hyperawareness.
    Game theory is a legitimately powerful lens — but like any lens, it should be used sparingly and with the right intentions. Pick your battles, and if you’re curious how to apply it in your own career, start with clarity, empathy and a telescope and compass. Use these not to dominate the game, but to understand it and play it to the best of your abilities, so everyone wins.
    1. Establish Competence For Yourself And Assume It From Others
    There’s a popular saying in hustle culture: work smarter, not harder. At first glance, it makes sense — but in elite professional environments, it’s a rather reductive and presumptuous approach.
    The phrase can carry the implication that others aren’t working smart or that they aren’t capable of working smart. But in high-performing teams, where stakes are real and decisions have impact, most people are smart. Most are optimizers. And that means “working smart” will only take you so far before everyone’s doing the same. After that, the only edge left is consistent, high-quality production — what we generalize as hard work.
    From a game theory lens, this type of hard work essentially increases your odds. Overdelivering, consistently and visibly, skews the probability curve in your favor. You either become impossible to ignore, or highly valuable. Ideally, aim for both.
    And here’s where the real move comes in: assume the same of others. In most multiplayer games, especially online ones, expecting competence from your opponents forces you to play better. It raises the floor of your expectations, improves collaboration and protects you from the trap of underestimating the consequences of your actions.
    Take chess, for example. In a large study of tournament players, researchers found that serious solo study was the strongest predictor of performance, even more than formal coaching or tournament experience.
    Grandmasters, on average, had put in nearly 5,000 hours of deliberate study in their first decade of serious play, roughly five times what intermediate players had logged. This is why, in a game of chess between two grandmasters, neither player underestimates the other.
    2. Exploit The Parts Of Work That Don’t Feel Like Work To You
    My friend told me he rarely applies game theory outside of code. But the more he talked about his work, the more obvious it became that the man lives it. He’s been into video games since he was a child, and now, as an adult, he gets paid to build what he used to dream about.
    Sure, he has deadlines, targets and a minimum number of hours to log every week — but to him, those are just constraints on paper. What actually drives him is the intuitive thrill of creation. Everything else is background noise that requires calibration, not deference.
    This is where game theory can intersect with psychology in an actionable way. If you can identify aspects of your work that you uniquely enjoy — and that others may see as tedious, difficult or draining — you may have found an edge. Because in competitive environments, advantage is often about doing the same amount with less psychological cost.
    In game theory terms, you’re exploiting an asymmetric payoff structure, where your internal reward is higher than that of your peers for the same action. When others see effort, you feel flow. That makes you highly resilient and harder to outlast.
    It’s also how you avoid falling into the trap of accepting a mediocre Nash equilibrium: a state where each person settles on a strategy that is rational given everyone else’s choices, even if the group as a whole is stuck in mediocrity. No one deviates, because no one has an incentive to, unless something changes the underlying payoff structure.
    For example, imagine a team project where everyone quietly agrees to put in just enough effort to get by, no more, no less. It feels fair, and no one wants to overextend. But if even one person’s payoffs shift so that they stand to gain by going above that baseline (say, because the work itself rewards them), they have an incentive to break the agreement. The moment they do, the old equilibrium collapses, because now others are pressured to step up or risk falling behind.
    In a true equilibrium, each person’s strategy is the best possible response to what everyone else is doing. No one gains by changing course. However, when your internal motivation shifts the reward equation, you may begin to question the basis of the equilibrium itself.
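    Here is a minimal sketch of that idea, assuming a toy two-person “effort game” with invented payoffs: a strategy profile is a Nash equilibrium when neither player can do better by unilaterally switching, and raising one player’s internal reward for pushing is enough to dissolve the comfortable coast/coast outcome.

```python
# A minimal two-player "effort game" to make the equilibrium idea concrete.
# Payoff numbers are invented purely for illustration.
from itertools import product

STRATEGIES = ["coast", "push"]

def nash_equilibria(payoffs):
    """Return the strategy profiles where neither player gains by unilaterally deviating."""
    equilibria = []
    for a, b in product(STRATEGIES, repeat=2):
        a_ok = all(payoffs[(a, b)][0] >= payoffs[(alt, b)][0] for alt in STRATEGIES)
        b_ok = all(payoffs[(a, b)][1] >= payoffs[(a, alt)][1] for alt in STRATEGIES)
        if a_ok and b_ok:
            equilibria.append((a, b))
    return equilibria

# Baseline: pushing costs effort and barely pays, so (coast, coast) is an equilibrium.
baseline = {
    ("coast", "coast"): (2, 2),
    ("coast", "push"):  (2, 1),
    ("push",  "coast"): (1, 2),
    ("push",  "push"):  (1, 1),
}
print(nash_equilibria(baseline))  # [('coast', 'coast')] -- comfortable mediocrity

# Now player A genuinely enjoys the work: internal reward raises A's payoff for pushing.
shifted = dict(baseline)
shifted[("push", "coast")] = (3, 2)
shifted[("push", "push")]  = (3, 3)   # B also does better once A pushes
print(nash_equilibria(shifted))   # [('push', 'push')] -- the old equilibrium no longer holds
```

    The numbers are arbitrary; the point is that nothing about anyone’s effort changes until the payoff structure itself does.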
    Be aware, in any case, that this is a tricky situation to navigate, especially if we contextualize this from the point of view of the stereotypical kid in class who reminds their teacher about homework. Even if the child acts in earnest, they may unintentionally invite isolation both from their peers and, sometimes, from the teachers themselves.
    This is why the advice to “follow your passion” often misfires. Unless there’s a clear definition of what constitutes passion, the advice lands as too vague. A more precise version is this: find and hone a valuable skill that energizes you, but might drain most others.
    3. Follow The Money Only Far Enough To Find The Game
    There’s a certain kind of professional who doesn’t chase money for money’s sake. Maybe he writes code for a game studio as a day job, writes blogs on the side and even mentors high school kids on their computer science projects. But this isn’t so much about padding his lifestyle or building a mountain of cash.
    What he’s really doing is looking for games: intellectually engaging challenges, satisfying loops and rewarding feedback. In a sense, he’s always gaming, not because he’s avoiding work, but because he’s designed his life around what feels like play. This mindset flips the usual money narrative on its head.
    And ironically, that’s often what leads to sustainable financial success: finding personal fulfillment that makes consistent effort easier for you and everyone around you.
    In game theory, this is a self-reinforcing loop: the more the game rewards you internally, the less you need external motivation to keep showing up.
    So instead of asking, “What’s the highest-paying path?” — ask, “Which games would I play even if I didn’t have to?” Then, work backward to find ways to monetize them. This does two incredibly valuable things in tandem: It respects the system you’re in, and it respects the goals you personally hold dear.
    While game theory maps workplace social behavior reasonably well, constantly remaining in a heightened state of awareness can backfire. Take the Self-Awareness Outcomes Questionnaire to better understand if yours is a blessing or a curse.
    #ways #game #theory #could #benefit
    3 Ways ‘Game Theory’ Could Benefit You At Work, By A Psychologist
    From office politics to salary negotiations, treating work like a strategy game can give you a ... More real-world edge. But should you?getty I recently had a revealing conversation with a friend — a game developer — who admitted, almost sheepishly, that while he was fluent in the mechanics of game theory, he rarely applied it outside of code. That got me thinking. For most people, game theory lives in two corners of life: economics classrooms and video games. It’s a phrase that evokes images of Cold War negotiations or player-versus-player showdowns. And to their credit, that’s grounded. At its core, game theory studies how people make decisions when outcomes hinge not just on their choices, but on others’ choices too. Originally a mathematical model developed to analyze strategic interactions, it’s now applied to everything from dating apps to corporate strategy. But in real life, nobody is perfectly rational. We don’t just calculate; we feel, too. That’s where the brain kicks in. According to the “Expected Value of Control” framework from cognitive neuroscience, we calibrate our effort by asking two questions: How big is the reward? How much control do I have in getting it? When both answers are high, motivation spikes. When either drops, we disengage. Research shows this pattern in real time — the brain works harder when success feels attainable. Play Puzzles & Games on Forbes This mirrors game theory’s central question: not just what the outcomes are, but whether it’s worth trying at all. Using a game theory lens in a professional setting, then, can be messy and sometimes bring unwanted emotional repercussions. The saving grace, however, is that it’s somewhat intuitively patterned and, arguably, predictable. So should you actually apply game theory to your professional life? Yes, but not as gospel, and not all the time. Being too focused on identifying, labeling and trying to “win” every interaction can backfire. It can make you seem cold and calculating, even when you’re not, and it can open the door to misunderstandings or quiet resentment. Put simply, it’s important to be aware of how your choices affect others and how theirs affect yours, but it’s also dangerously easy for that awareness to tip over into an unproductive state of hyperawareness. Game theory is a legitimately powerful lens — but like any lens, it should be used sparingly and with the right intentions. Pick your battles, and if you’re curious how to apply it in your own career, start with clarity, empathy and a telescope and compass. Use these not to dominate the game, but to understand it and play it to the best of your abilities, so everyone wins. 1. Establish Competence For Yourself And Assume It From Others There’s a popular saying in hustle culture: work smarter, not harder. At first glance, it makes sense — but in elite professional environments, it’s a rather reductive and presumptuous approach. The phrase can carry the implication that others aren’t working smart or that they aren’t capable of working smart. But in high-performing teams, where stakes are real and decisions have impact, most people are smart. Most are optimizers. And that means “working smart” will only take you so far before everyone’s doing the same. After that, the only edge left is consistent, high-quality production — what we generalize as hard work. From a game theory lens, this type of hard work essentially increases your odds. Overdelivering, consistently and visibly, skews the probability curve in your favor. 
    Using a game theory lens in a professional setting, then, can be messy and sometimes bring unwanted emotional repercussions. The saving grace, however, is that workplace behavior is somewhat intuitively patterned and, arguably, predictable. So should you actually apply game theory to your professional life? Yes, but not as gospel, and not all the time. Being too focused on identifying, labeling and trying to “win” every interaction can backfire. It can make you seem cold and calculating, even when you’re not, and it can open the door to misunderstandings or quiet resentment. Put simply, it’s important to be aware of how your choices affect others and how theirs affect yours, but it’s also dangerously easy for that awareness to tip over into an unproductive state of hyperawareness.
    Game theory is a legitimately powerful lens — but like any lens, it should be used sparingly and with the right intentions. Pick your battles, and if you’re curious how to apply it in your own career, start with clarity and empathy, and bring a telescope and a compass. Use these not to dominate the game, but to understand it and play it to the best of your abilities, so everyone wins.
    1. Establish Competence For Yourself And Assume It From Others
    There’s a popular saying in hustle culture: work smarter, not harder. At first glance, it makes sense — but in elite professional environments, it’s a rather reductive and presumptuous approach. The phrase can carry the implication that others aren’t working smart, or that they aren’t capable of it. But in high-performing teams, where stakes are real and decisions have impact, most people are smart. Most are optimizers. And that means “working smart” will only take you so far before everyone’s doing the same. After that, the only edge left is consistent, high-quality production — what we generalize as hard work.
    From a game theory lens, this type of hard work essentially increases your odds. Overdelivering, consistently and visibly, skews the probability curve in your favor. You either become impossible to ignore or highly valuable; ideally, aim for both. And here’s where the real move comes in: assume the same of others. In most multiplayer games, especially online ones, expecting competence from your opponents forces you to play better. It raises the floor of your expectations, improves collaboration and protects you from the trap of underestimating the consequences of your actions.
    Take chess, for example. In a large study of tournament players, researchers found that serious solo study was the strongest predictor of performance, even more than formal coaching or tournament experience. Grandmasters, on average, had put in nearly 5,000 hours of deliberate study in their first decade of serious play, roughly five times more than intermediate players. This is why, in a game between two grandmasters, neither player underestimates the other.
    2. Exploit The Parts Of Work That Don’t Feel Like Work To You
    My friend told me he rarely applies game theory outside of code. But the more he talked about his work, the more obvious it became that he lives it. He’s been into video games since he was a child, and now, as an adult, he gets paid to build what he used to dream about. Sure, he has deadlines, targets and a minimum number of hours to log every week — but to him, those are just constraints on paper. What actually drives him is the intuitive thrill of creation. Everything else is background noise that requires calibration, not deference.
    This is where game theory can intersect with psychology in an actionable way. If you can identify aspects of your work that you uniquely enjoy — and that others may see as tedious, difficult or draining — you may have found an edge. In competitive environments, advantage is often about doing the same amount of work at a lower psychological cost. In game theory terms, you’re exploiting an asymmetric payoff structure: your internal reward is higher than your peers’ for the same action. Where others see effort, you feel flow. That makes you highly resilient and harder to outlast.
    It’s also how you avoid falling into the trap of accepting a Nash equilibrium — a state where each person settles on a strategy that feels rational given everyone else’s, even if the group as a whole is stuck in mediocrity. In a true equilibrium, each person’s strategy is the best possible response to what everyone else is doing; no one gains by changing course, so no one deviates unless something changes the underlying payoff structure. For example, imagine a team project where everyone quietly agrees to put in just enough effort to get by, no more, no less. It feels fair, and no one wants to overextend. But if even one person realizes they stand to gain by going above that baseline, they have an incentive to break the agreement. The moment they do, the equilibrium collapses, because now others are pressured to step up or risk falling behind. When your internal motivation shifts the reward equation, you may begin to question the basis of the equilibrium itself.
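    As a rough illustration of that shift, here is a hedged sketch of a two-player “team project” game in Python. The strategy names, payoff numbers and the intrinsic-reward bonus are hypothetical, chosen only to show how a low-effort pact can sit at a Nash equilibrium until one player’s internal payoff for pushing changes.

        # Hypothetical two-player effort game; payoff numbers are illustrative only.
        # Strategies: "coast" (just enough effort) or "push" (go above the baseline).
        import itertools

        def payoffs(intrinsic_bonus=0.0):
            """Map (player1_strategy, player2_strategy) -> (player1_payoff, player2_payoff).
            intrinsic_bonus models extra internal reward player 1 gets from pushing."""
            base = {
                ("coast", "coast"): (2, 2),  # the low-effort pact: feels fair, stays mediocre
                ("push",  "coast"): (1, 0),  # pushing alone is draining; the coaster falls behind
                ("coast", "push"):  (0, 1),
                ("push",  "push"):  (3, 3),  # shared extra effort pays off for both
            }
            return {strategies: (p1 + (intrinsic_bonus if strategies[0] == "push" else 0), p2)
                    for strategies, (p1, p2) in base.items()}

        def is_nash(table, profile):
            """A profile is a Nash equilibrium if neither player gains by deviating alone."""
            s1, s2 = profile
            p1, p2 = table[profile]
            return (all(table[(alt, s2)][0] <= p1 for alt in ("coast", "push")) and
                    all(table[(s1, alt)][1] <= p2 for alt in ("coast", "push")))

        for bonus in (0.0, 2.0):
            table = payoffs(intrinsic_bonus=bonus)
            equilibria = [p for p in itertools.product(("coast", "push"), repeat=2)
                          if is_nash(table, p)]
            print(f"intrinsic bonus {bonus}: equilibria {equilibria}")
        # bonus 0.0: both ("coast", "coast") and ("push", "push") are stable, so the
        # low-effort pact can persist. bonus 2.0: once pushing rewards player 1 internally,
        # ("coast", "coast") stops being an equilibrium and only ("push", "push") remains.

    The exact numbers matter less than the structure: the low-effort profile stays stable only as long as no single player gains from breaking it on their own.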
    Be aware, in any case, that this is a tricky situation to navigate, especially from the point of view of the stereotypical kid in class who reminds the teacher about homework. Even if the child acts in earnest, they may unintentionally invite isolation from their peers and, sometimes, from the teachers themselves.
    This is why the advice to “follow your passion” often misfires. Without a clear definition of what constitutes passion, the advice lands as too vague. A more precise version is this: find and hone a valuable skill that energizes you but might drain most others.
    3. Follow The Money Only Far Enough To Find The Game
    There’s a certain kind of professional who doesn’t chase money for money’s sake. Maybe he writes code for a game studio as a day job, writes blogs on the side and even mentors high school kids on their computer science projects. But this isn’t so much about padding his lifestyle or building a mountain of cash. What he’s really doing is looking for games: intellectually engaging challenges, satisfying loops and rewarding feedback. In a sense, he’s always gaming, not because he’s avoiding work, but because he’s designed his life around what feels like play.
    This mindset flips the usual money narrative on its head. And ironically, that’s often what leads to sustainable financial success: finding personal fulfillment that makes consistent effort easier for you and everyone around you. In game theory, this is a self-reinforcing loop: the more the game rewards you internally, the less you need external motivation to keep showing up. So instead of asking, “What’s the highest-paying path?” — ask, “Which games would I play even if I didn’t have to?” Then work backward to find ways to monetize them. This does two valuable things in tandem: it respects the system you’re in, and it respects the goals you personally hold dear.
    While game theory maps workplace social behavior reasonably well, constantly remaining in a heightened state of awareness can backfire. Take the Self-Awareness Outcomes Questionnaire to better understand whether your self-awareness is a blessing or a curse.