• We’re secretly winning the war on cancer

    On November 4, 2003, a doctor gave Jon Gluck some of the worst news imaginable: He had cancer — one that later tests would reveal as multiple myeloma, a severe blood and bone marrow cancer. Jon was told he might have as little as 18 months to live. He was 38, a thriving magazine editor in New York with a 7-month-old daughter whose third birthday, he suddenly realized, he might never see.

    “The moment after I was told I had cancer, I just said ‘no, no, no,’” Jon told me in an interview just last week. “This cannot be true.”

    Living in remission

    The fact that Jon is still here, talking to me in 2025, tells you that things didn’t go the way the medical data would have predicted on that November morning. He has lived with his cancer, through waves of remission and recurrence, for more than 20 years, an experience he chronicles with grace and wit in his new book An Exercise in Uncertainty. That 7-month-old daughter is now in college.

    You could say Jon has beaten the odds, and he’s well aware that chance played some role in his survival. (“Did you know that ‘Glück’ is German for ‘luck’?” he writes in the book, noting his good fortune that a random spill on the ice is what sent him to the doctor in the first place, enabling them to catch his cancer early.)

    Cancer is still a terrible health threat, one that is responsible for 1 in 6 deaths around the world, killing nearly 10 million people a year globally and over 600,000 people a year in the US. But Jon’s story and his survival demonstrate something that is too often missed: We’ve turned the tide in the war against cancer. The age-adjusted death rate for cancer in the US has declined by about a third since 1991, meaning people of a given age have about a third lower risk of dying from cancer than people of the same age more than three decades ago. That adds up to over 4 million fewer cancer deaths over that time period.
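    The “age-adjusted” part of that statistic is worth unpacking: each age group’s death rate is weighted by a fixed standard population, so the decline cannot be an artifact of the population simply getting older or younger. Here is a toy illustration in Python; the rates and weights are invented purely to mirror the roughly one-third decline, and are not the real US figures:

```python
# Hypothetical illustration of age adjustment (direct standardization).
# All numbers are invented for clarity; they are not the real US figures.

# Age-specific cancer death rates (deaths per 100,000) in two years,
# and a fixed "standard population" share for each age band.
standard_weights = {"0-49": 0.65, "50-69": 0.25, "70+": 0.10}

rates_1991 = {"0-49": 40.0, "50-69": 500.0, "70+": 1400.0}
rates_2021 = {"0-49": 26.0, "50-69": 330.0, "70+": 930.0}

def age_adjusted_rate(rates, weights):
    """Weight each age band's death rate by the standard population share."""
    return sum(rates[band] * weights[band] for band in weights)

adj_1991 = age_adjusted_rate(rates_1991, standard_weights)  # 291 per 100,000
adj_2021 = age_adjusted_rate(rates_2021, standard_weights)  # 192.4 per 100,000

decline = 1 - adj_2021 / adj_1991
print(f"Adjusted rate 1991: {adj_1991:.0f} per 100,000")
print(f"Adjusted rate 2021: {adj_2021:.0f} per 100,000")
print(f"Decline: {decline:.0%}")
```

    Because both years are weighted by the same standard population, the comparison isolates the change in risk at a given age, which is exactly the claim the statistic makes.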
    Thanks to breakthroughs in treatments like autologous stem-cell harvesting and CAR-T therapy — breakthroughs Jon himself benefited from, often just in time — cancer isn’t the death sentence it once was.

    (Chart: Our World in Data)

    Getting better all the time

    There’s no doubt that just as the rise of smoking in the 20th century led to a major increase in cancer deaths, the equally sharp decline of tobacco use eventually led to a delayed decrease. Smoking is one of the most potent carcinogens in the world, and at the peak in the early 1960s, around 12 cigarettes were being sold per adult per day in the US. Take away the cigarettes and — after a delay of a couple of decades — lung cancer deaths drop in turn, along with other non-cancer smoking-related deaths.

    But as Saloni Dattani wrote in a great piece earlier this year, even before the decline of smoking, death rates from non-lung cancers in the stomach and colon had begun to fall. Just as notably, death rates for childhood cancers — which for obvious reasons are not connected to smoking and tend to be caused by genetic mutations — have fallen significantly as well, declining sixfold since 1950. In the 1960s, for example, only around 10 percent of children diagnosed with acute lymphoblastic leukemia survived more than five years. Today it’s more than 90 percent. And the five-year survival rate for all cancers has risen from 49 percent in the mid-1970s to 69 percent in 2019.

    We’ve made strides against the toughest of cancers, like Jon’s multiple myeloma. Around when Jon was diagnosed, the five-year survival rate was just 34 percent. Today it’s as high as 62 percent, and more and more people like Jon are living for decades. “There has been a revolution in cancer survival,” Jon told me.
    “Some illnesses now have far more successful therapies than others, but the gains are real.”

    Three cancer revolutions

    The dramatic bend in the curve of cancer deaths didn’t happen by accident — it’s the compound interest of three revolutions.

    While anti-smoking policy has been the single biggest lifesaver, other interventions have helped reduce people’s cancer risk. One of the biggest successes is the HPV vaccine. A study last year found that death rates from cervical cancer — which can be caused by HPV infections — in US women ages 20–39 had dropped 62 percent from 2012 to 2021, thanks largely to the spread of the vaccine. Other cancers have been linked to infections as well, and there is strong research indicating that vaccination can help reduce cancer incidence.

    The next revolution is better and earlier screening. It’s generally true that the earlier cancer is caught, the better the chances of survival, as Jon’s own story shows. According to one study, incidence of late-stage colorectal cancer in Americans over 50 declined by a third between 2000 and 2010, in large part because rates of colonoscopies almost tripled in that same time period. And newer screening methods, often employing AI or using blood-based tests, could make preliminary screening simpler, less invasive, and therefore more readily available. If 20th-century screening was about finding physical evidence of something wrong — the lump in the breast — 21st-century screening aims to find cancer before symptoms even arise.

    Most exciting of all are frontier developments in treating cancer, many of which can be tracked through Jon’s own experience. From drugs like lenalidomide and bortezomib in the 2000s, which helped double median myeloma survival, to the spread of monoclonal antibodies, real breakthroughs in treatments have meaningfully extended people’s lives — not just by months, but by years.

    Perhaps the most promising development is CAR-T therapy, a form of immunotherapy.
    Rather than attempting to kill the cancer directly, immunotherapies turn a patient’s own T-cells into guided missiles. In a recent study of 97 patients with multiple myeloma, many of whom were facing hospice care, a third of those who received CAR-T therapy had no detectable cancer five years later. It was the kind of result that doctors rarely see. “CAR-T is mind-blowing — very science-fiction futuristic,” Jon told me. He underwent his own course of treatment with it in mid-2023 and writes that the experience, which put his cancer into a remission he’s still in, left him feeling “physically and metaphysically new.”

    A welcome uncertainty

    While there are still more battles to be won in the war on cancer, and there are certain areas — like the rising rates of gastrointestinal cancers among younger people — where the story isn’t getting better, the future of cancer treatment is improving. For cancer patients like Jon, that can mean a new challenge: enduring the essential uncertainty that comes with living with a disease that’s controllable but could always come back. But it sure beats the alternative.

    “I’ve come to trust so completely in my doctors and in these new developments,” he said. “I try to remain cautiously optimistic that my future will be much like the last 20 years.” And that’s more than he or anyone else could have hoped for nearly 22 years ago.

    A version of this story originally appeared in the Good News newsletter.
    (Source: WWW.VOX.COM)
  • Double-Whammy When AGI Embeds With Humanoid Robots And Occupies Both White-Collar And Blue-Collar Jobs

    (Image caption: AGI will be embedded into humanoid robots, which makes white-collar and blue-collar jobs a target for walking/talking automation. Getty)
    In today’s column, I examine the highly worrisome concern that the advent of artificial general intelligence (AGI) is likely to usurp white-collar jobs. The stated worry is that since AGI will be on par with human intellect, any job that relies principally on intellectual pursuits, such as typical white-collar work, will be taken over via the use of AGI. Employers will realize that rather than dealing with human white-collar workers, they can more readily get the job done via AGI. This, in turn, has led to a rising call that people should aim toward blue-collar jobs, because those forms of employment supposedly will not be undercut by AGI.

    Sorry to say, that misses the bigger picture, namely that AGI, when combined with humanoid robots, is coming not only for white-collar jobs but for blue-collar jobs too. It is a proverbial double-whammy when it comes to the attainment of AGI.

    Let’s talk about it.

    This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities.

    Heading Toward AGI And ASI
    First, some fundamentals are required to set the stage for this weighty discussion.
    There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI).
    AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.
    We have not yet attained AGI.
    In fact, it is unknown whether we will ever reach AGI; it might be achieved in decades, or perhaps centuries, from now. The AGI attainment dates floating around vary wildly and are unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.
    AGI Problem Only Half Seen
    Before launching into the primary matter at hand in this discussion, let’s contemplate a famous quote attributed to Charles Kettering, a legendary inventor, who said, “A problem well-stated is a problem half-solved.”

    I bring this up because those loudly clamoring right now about the assumption that AGI will replace white-collar workers are seeing only half of the problem. The problem as they see it is this: Since AGI is intellectually on par with humans, and since white-collar workers mainly use intellect in their work, AGI will be used in place of humans for white-collar work.
    I will in a moment explain why that’s only half of the problem and why there is a demonstrable need to more carefully and fully articulate the nature of the problem.
    Will AGI Axiomatically Take White-Collar Jobs?
    On a related facet, the belief that AGI will axiomatically replace white-collar labor makes a number of other related key assumptions. I shall briefly explore those and then come back to why the problem itself is only half-baked.
    The cost of using AGI for doing white-collar work will need to be presumably a better ROI choice over human workers. If not, then an employer would be wiser to stick with humans rather than employing AGI. There seems to often be an unstated belief that AGI is necessarily going to be a less costly route than employing humans.
    We don’t know yet what the cost of using AGI will be.
    It could be highly expensive. Indeed, some are worried that the world will divide into the AGI haves and AGI have-nots, partially due to the exorbitant cost that AGI might involve. If AGI is free to use, well, that would seem to be the nail in the coffin related to using human workers for the same capacity. Another angle is that AGI is relatively inexpensive in comparison to human labor. In that case, the use of AGI is likely to win over human labor usage.
    But if the cost of AGI is nearer to the cost of human labor, or more so, then employers would rationally need to weigh the use of one versus the other.
    Note that when referring to the cost of human labor, there is more to that calculation than simply the dollar-per-hour labor rate. There are lots of other less apparent costs, such as the cost of managing human labor, the cost of dealing with HR-related issues, and many other factors that enter into the weighing. Thus, an AGI-versus-human-labor ROI comparison will be more complex than it might seem at first glance. In addition, keep in mind that AGI could seemingly be switched on and off at will, and would have other capacities that human labor does not equally allow.
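    As a back-of-the-envelope sketch, the employer’s weighing described above might look like the following. Every number and the AGI pricing model here are made-up assumptions for illustration; nobody yet knows what AGI will actually cost:

```python
# A minimal sketch of the AGI-vs-human ROI comparison described above.
# All figures and the pricing model are invented assumptions.

def annual_human_cost(wage_per_hour, hours_per_year=2000, overhead_rate=0.35):
    """Wages plus the less-apparent costs (management, HR, benefits),
    modeled crudely as a flat overhead percentage on top of wages."""
    return wage_per_hour * hours_per_year * (1 + overhead_rate)

def annual_agi_cost(subscription_per_year, usage_hours, per_hour_fee):
    """A hypothetical AGI pricing model: flat subscription plus metered use."""
    return subscription_per_year + usage_hours * per_hour_fee

human = annual_human_cost(wage_per_hour=40)  # $108,000/yr under these assumptions
agi = annual_agi_cost(subscription_per_year=30_000,
                      usage_hours=2000, per_hour_fee=20)  # $70,000/yr

print(f"Human: ${human:,.0f}  AGI: ${agi:,.0f}")
print("AGI is cheaper" if agi < human else "Human labor is cheaper")
```

    Under these invented numbers AGI wins, but raise the subscription or the metered fee and the balance flips, which is exactly why the employer’s decision is a genuine weighing rather than a foregone conclusion.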
    The Other Half Is Coming Too
    Assume that by and large the advent of AGI will decimate the need for white-collar human labor. The refrain right now is that people should begin tilting toward blue-collar jobs as an alternative to white-collar jobs. This is a logical form of thinking in the sense that AGI as an intellectual mechanism would be unable to compete in jobs that involve hands-on work.
    A plumber needs to come to your house and do hands-on work to fix your plumbing. This is a physicality that entails arriving at your physical home, physically bringing and using tools, and physically repairing your faulty home plumbing. A truck driver likewise needs to sit in the cab of a truck and drive the vehicle. These are physically based tasks.
    There is no getting around the fact that these are hands-on activities.
    Aha, yes, those are physical tasks, but that doesn’t necessarily mean that only human hands can perform them. The gradual emergence of humanoid robots will provide an alternative to human hands. A humanoid robot is a type of robot that is built to resemble a human in form and function. You’ve undoubtedly seen those types of robots in the many online video recordings showing them walking, jumping, grasping at objects, and so on.
    A tremendous amount of active research and development is taking place to devise humanoid robots. They look comical right now. You watch those videos and laugh when the robot trips over a mere stick lying on the ground, something that a human would seldom trip over. You scoff when a robot tries to grasp a coffee cup and inadvertently spills most of the coffee. It all seems humorous and a silly pursuit.
    Keep in mind that we are all observing the development process while it is still taking place. At some point, those guffaws of the humanoid robots will lessen. Humanoid robots will be as smooth and graceful as humans. This will continue to be honed. Eventually, humanoid robots will be less prone to physical errors that humans make. In a sense, the physicality of a humanoid robot will be on par with humans, if not better, due to its mechanical properties.
    Do not discount the coming era of quite physically capable humanoid robots.
    AGI And Humanoid Robots Pair Up
    You might remember that in The Wonderful Wizard of Oz, the Scarecrow famously lacked a brain.
    Without seeming to anthropomorphize humanoid robots, the current situation is that those robots typically use a form of AI that is below the sophistication level of modern generative AI. That’s fine for now due to the need to first ensure that the physical movements of the robots get refined.
    I have discussed that a said-to-be realm of Physical AI is going to be a huge breakthrough with incredible ramifications, see my analysis at the link here. The idea underlying Physical AI is that the AI of today is being uplifted by doing data training on the physical world. This also tends to include the use of World Models, consisting of broad constructions about how the physical world works, such as that we are bound to operate under conditions of gravity, and other physical laws of nature, see the link here.
    The bottom line here is that there will be a close pairing of robust AI with humanoid robots.
    Imagine what a humanoid robot can accomplish if it is paired with AGI.
    Double-Whammy When AGI Embeds With Humanoid Robots And Occupies Both White-Collar And Blue-Collar Jobs
    AGI will be embedded into humanoid robots, making white-collar and blue-collar jobs a target for walking/talking automation.
    In today's column, I examine the highly worrisome qualms being expressed that the advent of artificial general intelligence (AGI) is likely to usurp white-collar jobs. The stated concern is that since AGI will be on par with human intellect, any job that relies principally on intellectual pursuits, such as typical white-collar work, will be taken over via the use of AGI. Employers will realize that rather than dealing with human white-collar workers, they can more readily get the job done via AGI. This, in turn, has led to a rising call that people should aim toward blue-collar jobs, doing so because (presumably) those forms of employment will not be undercut by AGI.
    Sorry to say, that misses the bigger picture, namely that AGI, when combined with humanoid robots, is coming not only for white-collar jobs but for blue-collar jobs too. It is a proverbial double whammy when it comes to the attainment of AGI. Let's talk about it.
    This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
    Heading Toward AGI And ASI
    First, some fundamentals are required to set the stage for this weighty discussion. A great deal of research is underway to further advance AI. The general goal is to reach artificial general intelligence (AGI), or maybe even the outstretched possibility of achieving artificial superintelligence (ASI).
    AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that goes beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.
    We have not yet attained AGI. In fact, it is unknown whether we will ever reach AGI, or whether AGI might be achieved decades or perhaps centuries from now. The AGI attainment dates floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale relative to where we are currently with conventional AI.
    AGI Problem Only Half Seen
    Before launching into the primary matter at hand, let's contemplate a famous quote attributed to Charles Kettering, the legendary inventor: "A problem well-stated is a problem half-solved."
    I bring this up because the loud clamor right now about AGI replacing white-collar workers sees only half of the problem. The problem, as they see it, is that since AGI is intellectually on par with humans, and since white-collar workers mainly use intellect in their work, AGI is going to be used in place of humans for white-collar work. I will explain in a moment why that's only half of the problem and why there is a demonstrable need to articulate the problem more carefully and fully.
    Will AGI Axiomatically Take White-Collar Jobs
    On a related facet, the belief that AGI will axiomatically replace white-collar labor rests on a number of other key assumptions. I shall briefly explore those and then come back to why the problem statement itself is only half-baked.
    The cost of using AGI for white-collar work will presumably need to be a better ROI choice than human workers. If not, an employer would be wiser to stick with humans rather than employing AGI. There often seems to be an unstated belief that AGI is necessarily going to be a less costly route than employing humans. We don't yet know what the cost of using AGI will be. It could be highly expensive.
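    The ROI framing just described can be sketched as a toy calculation. Every number and parameter name below is invented purely for illustration; nothing here comes from actual pricing of AI services or real labor statistics.

```python
# Toy comparison of an AGI "seat" versus a human worker, using the
# all-in cost framing: hourly wages are only part of the true cost
# of human labor (management, HR, benefits overhead also count).
# All figures are hypothetical.

def all_in_human_cost(hourly_wage: float, hours_per_year: float,
                      overhead_rate: float) -> float:
    """Annual cost of one human worker: wages plus overhead expressed
    as a fraction of wages (management, HR, benefits, and so on)."""
    wages = hourly_wage * hours_per_year
    return wages * (1 + overhead_rate)

def agi_cost(annual_subscription: float, utilization: float) -> float:
    """Annual cost of an AGI seat; utilization below 1.0 models the
    fact that AGI could be switched off when idle."""
    return annual_subscription * utilization

human = all_in_human_cost(hourly_wage=50, hours_per_year=2000, overhead_rate=0.4)
agi = agi_cost(annual_subscription=90_000, utilization=1.0)

print(f"human (all in): ${human:,.0f}")   # $140,000
print(f"AGI seat:       ${agi:,.0f}")     # $90,000
print("cheaper option:", "AGI" if agi < human else "human")
```

    The point of the sketch is the column's own caveat: the comparison flips entirely depending on what the AGI subscription actually costs, which nobody yet knows.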
    Indeed, some are worried that the world will divide into the AGI haves and have-nots, partially due to the exorbitant cost that AGI might involve. If AGI is free to use, well, that would seem to be the nail in the coffin for using human workers in the same capacity. Another possibility is that AGI is relatively inexpensive in comparison to human labor; in that case, the use of AGI is likely to win out. But if the cost of AGI is near the cost of human labor (all in), or higher, then employers would rationally need to weigh the use of one versus the other.
    Note that the cost of human labor involves more than simply the hourly wage per se. There are lots of less apparent costs, such as the cost of managing human labor, the cost of dealing with HR-related issues, and many other factors. Thus, an AGI-versus-human-labor ROI will be more complex than it might seem at first glance. In addition, keep in mind that AGI could seemingly be switched on and off at will, and would have other capacities that human labor does not equally allow.
    The Other Half Is Coming Too
    Assume that, by and large, the advent of AGI will decimate the need for white-collar human labor. The refrain right now is that people should begin tilting toward blue-collar jobs as an alternative. This is a logical form of thinking in the sense that AGI, as a purely intellectual mechanism, would be unable to compete for jobs that involve hands-on work.
    A plumber needs to come to your house and do hands-on work to fix your plumbing. This entails physically arriving at your home, physically bringing and using tools, and physically repairing your faulty plumbing. A truck driver likewise needs to sit in the cab of a truck and drive the vehicle. These are physically based tasks. There is no getting around the fact that these are hands-on activities.
    Aha, yes, those are physical tasks, but that doesn't necessarily mean that only human hands can perform them. The gradual emergence of humanoid robots will provide an alternative to human hands. A humanoid robot is a robot built to resemble a human in form and function. You've undoubtedly seen these robots in the many online videos showing them walking, jumping, grasping objects, and so on. A tremendous amount of active research and development is taking place to devise humanoid robots.
    They look comical right now. You watch those videos and laugh when a robot trips over a mere stick lying on the ground, something a human would seldom trip over. You scoff when a robot tries to grasp a coffee cup and inadvertently spills most of the coffee. It all seems humorous and a silly pursuit. Keep in mind, though, that we are observing the development process while it is still underway. At some point, the guffaws at humanoid robots will lessen. Humanoid robots will become as smooth and graceful as humans, and the refinement will continue. Eventually, humanoid robots will be less prone to the physical errors that humans make. In a sense, the physicality of a humanoid robot will be on par with humans, if not better, due to its mechanical properties. Do not discount the coming era of quite physically capable humanoid robots.
    AGI And Humanoid Robots Pair Up
    You might remember that in The Wonderful Wizard of Oz, the Scarecrow lacked a brain. Without unduly anthropomorphizing humanoid robots, the current situation is that those robots typically use a form of AI that is below the sophistication of modern generative AI. That's fine for now, given the need to first refine the physical movements of the robots. I have discussed that the emerging realm of Physical AI is going to be a huge breakthrough with incredible ramifications; see my analysis at the link here.
    The idea underlying Physical AI is that today's AI is being uplifted by data training on the physical world. This also tends to include the use of World Models, broad constructions of how the physical world works, such as that we are bound to operate under gravity and other physical laws of nature; see the link here. The bottom line is that there will be a close pairing of robust AI with humanoid robots.
    Imagine what a humanoid robot could accomplish if it were paired with AGI. I'll break the suspense and point out that AGI paired with humanoid robots means that those robots readily enter the blue-collar realm. Suppose your plumbing needs fixing. No worries, a humanoid robot that encompasses AGI will be sent to your home. The AGI is astute enough to carry on a conversation with you, and it also fully operates the robot to undertake the plumbing tasks.
    How did the AGI-paired humanoid robot get to your home? Easy-peasy, it drove a car or truck to get there. I've previously predicted that all the work on devising autonomous vehicles and self-driving cars will get shaken up once we have suitable humanoid robots. There won't be a need for a vehicle to contain self-driving capabilities; a humanoid robot will simply sit in the driver's seat and drive. This is a much more open-ended solution than having to craft components that go into and onto a vehicle to enable self-driving. See my coverage at the link here.
    Timing Is Notable
    One of the reasons that many do not give much thought to the pairing of AGI with humanoid robots is that today's humanoid robots seem extraordinarily rudimentary, incapable of performing physical dexterity tasks on par with human capabilities. Meanwhile, there is brazen talk that AGI is just around the corner. AGI is said to be within our grasp. Let's give the timing considerations a bit of scrutiny.
    There are three primary timing angles:
    Option 1: AGI first, then humanoid robots. AGI is attained before humanoid robots are sufficiently devised.
    Option 2: Humanoid robots first, then AGI. Humanoid robots become physically adept before AGI is attained.
    Option 3: AGI and humanoid robots arrive at about the same time. AGI is attained just as humanoid robots become physically adept, mainly by coincidence rather than any cross-mixing.
    A skeptic would insist on a fourth possibility: that we never achieve AGI and/or never achieve sufficiently capable humanoid robots. I am going to set that possibility aside. Perhaps I am overly optimistic, but it seems to me that we will eventually attain AGI and eventually attain physically capable humanoid robots. I shall consider each of the three genuinely plausible scenarios in turn.
    Option 1: AGI First, Then Humanoid Robots
    What if we manage to attain AGI before we achieve physically fluent humanoid robots? That's just fine. We would put AGI to work as a partner with humans in figuring out how to push along the budding humanoid robot development process. It seems nearly obvious that with AGI's capable assistance, we would overcome any bottlenecks and soon enough arrive at top-notch, physically adept humanoid robots. At that juncture, we would then place AGI into the humanoid robots and have ourselves quite an amazing combination.
    Option 2: Humanoid Robots First, Then AGI
    Suppose we devise very physically adept humanoid robots but have not yet arrived at AGI. Are we in a pickle? Nope. We could use conventional advanced AI inside those humanoid robots. The combination would certainly be good enough for a wide variety of tasks, though the odds are that we would need to be cautious about where such robots are utilized.
    Nonetheless, we would have walking, talking, and productive humanoid robots. If AGI never happens, oh well, we end up with pretty good humanoid robots. On the other hand, once we arrive at AGI, those humanoid robots will be stellar. It's just a matter of time.
    Option 3: AGI And Humanoid Robots At The Same Time
    Let's consider the possibility that AGI and humanoid robots are attained around the same time. Assume that this timing isn't due to outright cross-mixing; they just so happen to advance on a similar timeline. I tend to believe that's the most likely of the three scenarios. Here's why.
    First, despite all the hubris about AGI being within reach, perhaps in the next year or two, a popular pronouncement among AI luminaries, I tend to side with recent surveys of AI developers that put the date around the year 2040 (see my coverage at the link here). Some AI luminaries sneakily play with the definition of AGI in hopes of making their predictions come true sooner, akin to moving the goalposts to score points easily. For my coverage of Sam Altman's efforts at moving the cheese regarding AGI attainment, see the link here.
    Second, if you are willing to entertain 2040 as a potential date for achieving AGI, that's about 15 years from now. In my estimation, advances in humanoid robots will readily progress such that by 2040 they will be very physically adept. It will probably be sooner, but let's go with 2040 for ease of contemplation. In my view, we will likely have humanoid robots doing well enough that they will be put into use prior to arriving at AGI. The pinnacle of robust humanoid robots and the attainment of AGI will roughly coincide. Two peas in a pod.
    Impact Of Enormous Consequences
    In an upcoming column posting, I will examine the enormous consequences of having AGI paired with fully physically capable humanoid robots.
    As noted above, this will have a humongous impact on both white-collar and blue-collar work. There will be gargantuan economic, societal, and cultural impacts, and so on. Some final thoughts for now.
    The single whammy is already being hotly debated. The debates currently tend to be preoccupied with the loss of white-collar jobs due to the attainment of AGI. A saving grace seems to be that at least blue-collar jobs will be around and thriving even once AGI is attained. The world doesn't seem overly gloomy if you can cling to the upbeat posture that blue-collar tasks remain intact.
    The double whammy is a lot more to take in. But the double whammy is the truth, and the truth needs to be faced. If you are having doubts about the future, just remember the famous words of Vince Lombardi: "Winners never quit, and quitters never win." Humankind can handle the double whammy. Stay tuned for my upcoming coverage of what this entails.
    Double-Whammy When AGI Embeds With Humanoid Robots And Occupies Both White-Collar And Blue-Collar Jobs
    AGI will be embedded into humanoid robots, which makes white-collar and blue-collar jobs a target ... More for walking/talking automation.getty In today’s column, I examine the highly worrisome qualms expressed that the advent of artificial general intelligence (AGI) is likely to usurp white-collar jobs. The stated concern is that since AGI will be on par with human intellect, any job that relies principally on intellectual pursuits such as typical white-collar work will be taken over via the use of AGI. Employers will realize that rather than dealing with human white-collar workers, they can more readily get the job done via AGI. This, in turn, has led to a rising call that people should aim toward blue-collar jobs, doing so because (presumably) those forms of employment will not be undercut via AGI. Sorry to say, that misses the bigger picture, namely that AGI when combined with humanoid robots is coming not only for white-collar jobs but also blue-collar jobs too. It is a proverbial double-whammy when it comes to the attainment of AGI. Let’s talk about it. This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). Heading Toward AGI And ASI First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. 
For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here. We have not yet attained AGI. In fact, it is unknown as to whether we will reach AGI, or that maybe AGI will be achievable in decades or perhaps centuries from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI. AGI Problem Only Half Seen Before launching into the primary matter at hand in this discussion, let’s contemplate a famous quote attributed to Charles Kettering, a legendary inventor, who said, “A problem well-stated is a problem half-solved.” I bring this up because those loud clamors right now about the assumption that AGI will replace white-collar workers are only seeing half of the problem. The problem as they see it is that since AGI is intellectually on par with humans, and since white-collar workers mainly use intellect in their work endeavors, AGI is going to be used in place of humans for white-collar work. I will in a moment explain why that’s only half of the problem and there is a demonstrative need to more carefully and fully articulate the nature of the problem. Will AGI Axiomatically Take White-Collar Jobs On a related facet, the belief that AGI will axiomatically replace white-collar labor makes a number of other related key assumptions. I shall briefly explore those and then come back to why the problem itself is only half-baked. The cost of using AGI for doing white-collar work will need to be presumably a better ROI choice over human workers. If not, then an employer would be wiser to stick with humans rather than employing AGI. There seems to often be an unstated belief that AGI is necessarily going to be a less costly route than employing humans. We don’t know yet what the cost of using AGI will be. It could be highly expensive. 
Indeed, some are worried that the world will divide into the AGI haves and AGI have-nots, partially due to the exorbitant cost that AGI might involve. If AGI is free to use, well, that would seem to be the nail in the coffin related to using human workers for the same capacity. Another angle is that AGI is relatively inexpensive in comparison to human labor. In that case, the use of AGI is likely to win over human labor usage. But if the cost of AGI is nearer to the cost of human labor (all in), or more so, then employers would rationally need to weigh the use of one versus the other. Note that when referring to the cost of human labor, there is more to that calculation than simply the dollar-hour labor rate per se. There are lots of other less apparent costs, such as the cost to manage human labor, the cost of dealing with HR-related issues, and many other factors that come into the weighty matter. Thus, an AGI versus human labor ROI will be more complex than it might seem at an initial glance. In addition, keep in mind that AGI would seemingly be readily switched on and off, and have other capacities that human labor would not equally tend to allow. The Other Half Is Coming Too Assume that by and large the advent of AGI will decimate the need for white-collar human labor. The refrain right now is that people should begin tilting toward blue-collar jobs as an alternative to white-collar jobs. This is a logical form of thinking in the sense that AGI as an intellectual mechanism would be unable to compete in jobs that involve hands-on work. A plumber needs to come to your house and do hands-on work to fix your plumbing. This is a physicality that entails arriving at your physical home, physically bringing and using tools, and physically repairing your faulty home plumbing. A truck driver likewise needs to sit in the cab of a truck and drive the vehicle. These are physically based tasks. There is no getting around the fact that these are hands-on activities. 
Aha, yes, those are physical tasks, but that doesn’t necessarily mean that only human hands can perform them. The gradual emergence of humanoid robots will provide an alternative to human hands. A humanoid robot is a type of robot that is built to resemble a human in form and function. You’ve undoubtedly seen those types of robots in the many online video recordings showing them walking, jumping, grasping at objects, and so on. A tremendous amount of active research and development is taking place to devise humanoid robots. They look comical right now. You watch those videos and laugh when the robot trips over a mere stick lying on the ground, something that a human would seldom trip over. You scoff when a robot tries to grasp a coffee cup and inadvertently spills most of the coffee. It all seems humorous and a silly pursuit. Keep in mind that we are all observing the development process while it is still taking place. At some point, those guffaws of the humanoid robots will lessen. Humanoid robots will be as smooth and graceful as humans. This will continue to be honed. Eventually, humanoid robots will be less prone to physical errors that humans make. In a sense, the physicality of a humanoid robot will be on par with humans, if not better, due to its mechanical properties. Do not discount the coming era of quite physically capable humanoid robots. AGI And Humanoid Robots Pair Up You might remember that in The Wonderful Wizard of Oz, the fictional character known as The Strawman lacked a brain. Without seeming to anthropomorphize humanoid robots, the current situation is that those robots typically use a form of AI that is below the sophistication level of modern generative AI. That’s fine for now due to the need to first ensure that the physical movements of the robots get refined. I have discussed that a said-to-be realm of Physical AI is going to be a huge breakthrough with incredible ramifications, see my analysis at the link here. 
The idea underlying Physical AI is that the AI of today is being uplifted by doing data training on the physical world. This also tends to include the use of World Models, consisting of broad constructions about how the physical world works, such as that we are bound to operate under conditions of gravity, and other physical laws of nature, see the link here. The bottom line here is that there will be a close pairing of robust AI with humanoid robots. Imagine what a humanoid robot can accomplish if it is paired with AGI. I’ll break the suspense and point out that AGI paired with humanoid robots means that those robots readily enter the blue-collar worker realm. Suppose your plumbing needs fixing. No worries, a humanoid robot that encompasses AGI will be sent to your home. The AGI is astute enough to carry on conversations with you, and the AGI also fully operates the robot to undertake the plumbing tasks. How did the AGI-paired humanoid robot get to your home? Easy-peasy, it drove a car or truck to get there. I’ve previously predicted that all the work on devising autonomous vehicles and self-driving cars will get shaken up once we have suitable humanoid robots devised. There won’t be a need for a vehicle to contain self-driving capabilities. A humanoid robot will simply sit in the driver’s seat and drive the vehicle. This is a much more open-ended solution than having to craft components that go into and onto a vehicle to enable self-driving. See my coverage at the link here. Timing Is Notable One of the reasons that many do not give much thought to the pairing of AGI with humanoid robots is that today’s humanoid robots seem extraordinarily rudimentary and incapable of performing physical dexterity tasks on par with human capabilities. Meanwhile, there is brazen talk that AGI is just around the corner. AGI is said to be within our grasp. Let’s give the timing considerations a bit of scrutiny. 
There are three primary timing angles: Option 1: AGI first, then humanoid robots. AGI is attained before humanoid robots are sufficiently devised. Option 2: Humanoid robots first, then AGI. Humanoid robots are physically fluently adept before AGI is attained. Option 3: AGI and humanoid robots arrive about at the same time. AGI is attained and at the same time, it turns out that humanoid robots are fluently adept too, mainly by coincidence and not due to any cross-mixing. A skeptic would insist that there is a fourth possibility, consisting of the possibility that we never achieve AGI and/or we fail to achieve sufficiently physically capable humanoid robots. I am going to reject that possibility. Perhaps I am overly optimistic, but it seems to me that we will eventually attain AGI, and we will eventually attain physically capable humanoid robots. I shall next respectively consider each of the three genuinely reasonable possibilities. Option 1: AGI First, Then Humanoid Robots What if we manage to attain AGI before we manage to achieve physically fluent humanoid robots? That’s just fine. We would indubitably put AGI to work as a partner with humans in figuring out how we can push along the budding humanoid robot development process. It seems nearly obvious that with AGI’s capable assistance, we would overcome any bottlenecks and soon enough arrive at top-notch physically adept humanoid robots. At that juncture, we would then toss AGI into the humanoid robots and have ourselves quite an amazing combination. Option 2: Humanoid Robots First, Then AGI Suppose that we devise very physically adept humanoid robots but have not yet arrived at AGI. Are we in a pickle? Nope. We could use conventional advanced AI inside those humanoid robots. The combination would certainly be good enough for a wide variety of tasks. The odds are that we would need to be cautious about where such robots are utilized. 
Nonetheless, we would have essentially walking, talking, and productive humanoid robots. If AGI never happens, oh well, we end up with pretty good humanoid robots. On the other hand, once we arrive at AGI, those humanoid robots will be stellar. It’s just a matter of time. Option 3: AGI And Humanoid Robots At The Same Time Let’s consider the potential of AGI and humanoid robots perchance being attained around the same time. Assume that this timing isn’t due to an outright cross-mixing with each other. They just so happen to advance on a similar timeline. I tend to believe that’s the most likely of the three scenarios. Here’s why. First, despite all the hubris about AGI being within earshot, perhaps in the next year or two, which is a popular pronouncement by many AI luminaries, I tend to side with recent surveys of AI developers that put the date around the year 2040 (see my coverage at the link here). Some AI luminaires sneakily play with the definition of AGI in hopes of making their predictions come true sooner, akin to moving the goalposts to easily score points. For my coverage on Sam Altman’s efforts of moving the cheese regarding AGI attainment, see the link here. Second, if you are willing to entertain the year 2040 as a potential date for achieving AGI, that’s about 15 years from now. In my estimation, the advancements being made in humanoid robots will readily progress such that by 2040 they will be very physically adept. Probably be sooner, but let’s go with the year 2040 for ease of contemplation. In my view, we will likely have humanoid robots doing well enough that they will be put into use prior to arriving at AGI. The pinnacle of robust humanoid robots and the attainment of AGI will roughly coincide with each other. Two peas in a pod.Impact Of Enormous Consequences In an upcoming column posting, I will examine the enormous consequences of having AGI paired with fully physically capable humanoid robots. 
As noted above, this will have a humongous impact on white-collar and blue-collar work. There will be gargantuan economic, societal, and cultural impacts, and so on.

Some final thoughts for now. The single whammy is already being hotly debated. Those debates tend to be preoccupied with the loss of white-collar jobs due to the attainment of AGI. A saving grace seems to be that at least blue-collar jobs will still be around and thriving, even once AGI is attained. The world doesn't seem overly gloomy if you can cling to the upbeat posture that blue-collar tasks remain intact.

The double whammy is a lot more to take in. But the double whammy is the truth, and the truth needs to be faced. If you are having doubts as a human about the future, just remember the famous words of Vince Lombardi: "Winners never quit, and quitters never win." Humankind can handle the double whammy. Stay tuned for my upcoming coverage of what this entails.
  • The hidden time bomb in the tax code that's fueling mass tech layoffs: A decades-old tax rule helped build America's tech economy. A quiet change under Trump helped dismantle it

For the past two years, it’s been a ghost in the machine of American tech. Between 2022 and today, a little-noticed tweak to the U.S. tax code has quietly rewired the financial logic of how American companies invest in research and development. Outside of CFO and accounting circles, almost no one knew it existed. “I work on these tax write-offs and still hadn’t heard about this,” a chief operating officer at a private-equity-backed tech company told Quartz. “It’s just been so weirdly silent.”

Still, the delayed change to a decades-old tax provision — buried deep in the 2017 tax law — has contributed to the loss of hundreds of thousands of high-paying, white-collar jobs. That’s the picture that emerges from a review of corporate filings, public financial data, analysis of timelines, and interviews with industry insiders. One accountant, working in-house at a tech company, described it as a “niche issue with broad impact,” echoing sentiments from venture capital investors also interviewed for this article. Some spoke on condition of anonymity to discuss sensitive political matters.

Since the start of 2023, more than half a million tech workers have been laid off, according to industry tallies. Headlines have blamed over-hiring during the pandemic and, more recently, AI. But beneath the surface was a hidden accelerant: a change to what’s known as Section 174 that helped gut in-house software and product development teams everywhere from tech giants such as Microsoft (MSFT) and Meta (META) to much smaller, private, direct-to-consumer and other internet-first companies.

Now, as a bipartisan effort to repeal the Section 174 change moves through Congress, bigger questions are surfacing: How did a single line in the tax code help trigger a tsunami of mass layoffs? And why did no one see it coming?

For almost 70 years, American companies could deduct 100% of qualified research and development spending in the year they incurred the costs.
Salaries, software, contractor payments — if it contributed to creating or improving a product, it came off the top of a firm’s taxable income. The deduction was guaranteed by Section 174 of the IRS Code of 1954, and under the provision, R&D flourished in the U.S.

Microsoft was founded in 1975. Apple (AAPL) launched its first computer in 1976. Google (GOOGL) incorporated in 1998. Facebook opened to the general public in 2006. All these companies, now among the most valuable in the world, developed their earliest products — programming tools, hardware, search engines — under a tax system that rewarded building now, not later.

The subsequent rise of smartphones, cloud computing, and mobile apps also happened in an America where companies could immediately write off their investments in engineering, infrastructure, and experimentation. It was a baseline assumption — innovation and risk-taking subsidized by the tax code — that shaped how founders operated and how investors made decisions.

In turn, tech companies largely built their products in the U.S. Microsoft’s operating systems were coded in Washington state. Apple’s early hardware and software teams were in California. Google’s search engine was born at Stanford and scaled from Mountain View. Facebook’s entire social architecture was developed in Menlo Park. The deduction directly incentivized keeping R&D close to home, rewarding companies for investing in American workers, engineers, and infrastructure.

That’s what makes the politics of Section 174 so revealing.
For all the rhetoric about bringing jobs back and making things in America, the first Trump administration’s major tax bill arguably helped accomplish the opposite.

When Congress passed the Tax Cuts and Jobs Act (TCJA), the signature legislative achievement of President Donald Trump’s first term, it slashed the corporate tax rate from 35% to 21% — a massive revenue loss on paper for the federal government. To make the 2017 bill comply with Senate budget rules, lawmakers needed to offset the cost. So they added future tax hikes that wouldn’t kick in right away, wouldn’t provoke immediate backlash from businesses, and could, in theory, be quietly repealed later.

The delayed change to Section 174 — from immediate expensing of R&D to mandatory amortization, meaning that companies must spread the deduction out in smaller chunks over five- or even 15-year periods — was that kind of provision. It didn’t start affecting the budget until 2022, but it helped the TCJA appear “deficit neutral” over the 10-year window used for legislative scoring.

The delay wasn’t a technical necessity. It was a political tactic. Such moves are common in tax legislation. Phase-ins and delayed provisions let lawmakers game how the Congressional Budget Office (CBO) — Congress’ nonpartisan analyst of how bills impact budgets and deficits — scores legislation, pushing costs or revenue losses outside official forecasting windows.

And so, on schedule in 2022, the change to Section 174 went into effect. Companies filed their 2022 tax returns under the new rules in early 2023. And suddenly, R&D wasn’t a full, immediate write-off anymore. The tax benefits of salaries for engineers, product and project managers, data scientists, and even some user experience and marketing staff — all of which had previously reduced taxable income in year one — now had to be spread out over five- or 15-year periods.
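In arithmetic terms, the shift is easy to sketch. The figures below are hypothetical, and straight-line five-year amortization is a simplification of the actual rules, but it shows how year-one taxable income balloons from identical spending:

```python
# Hypothetical sketch of the Section 174 shift (not tax advice).
# A firm earns $10M and spends $8M on qualified R&D. Old rules: deduct
# all of it in year one. New rules: spread the deduction over 5 years
# (modeled here as straight-line, a simplification).

def year_one_taxable_income(revenue, rd_spend, amortization_years=1):
    """Taxable income after the first-year slice of the R&D deduction."""
    first_year_deduction = rd_spend / amortization_years
    return revenue - first_year_deduction

revenue, rd = 10_000_000, 8_000_000

old = year_one_taxable_income(revenue, rd, amortization_years=1)  # full write-off
new = year_one_taxable_income(revenue, rd, amortization_years=5)  # 20% per year

print(f"Old rule: ${old:,.0f} taxable")  # $2M of income faces tax
print(f"New rule: ${new:,.0f} taxable")  # $8.4M does, on the same cash spend
```

Same cash out the door, more than four times the taxable income in year one; the remaining deductions only trickle in over later years.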
To understand the impact, imagine if a personal tax rule that let you deduct 100% of your single biggest expense suddenly became a 20%-per-year deduction. For cash-strapped companies, especially those not yet profitable, the result was a painful tax bill just as venture funding dried up and interest rates soared.

Salesforce office buildings in San Francisco. Photo: Jason Henry/Bloomberg

It’s no coincidence that Meta announced its “Year of Efficiency” immediately after the Section 174 change took effect. Ditto Microsoft laying off 10,000 employees in January 2023 despite strong earnings, or Google parent Alphabet cutting 12,000 jobs around the same time. Amazon (AMZN) also laid off almost 30,000 people, with cuts focused not just on logistics but on Alexa and internal cloud tools — precisely the kinds of projects that would have once qualified as immediately deductible R&D. Salesforce (CRM) eliminated 10% of its staff, or 8,000 people, including entire product teams.

In public, companies blamed bloat and AI. But inside boardrooms, spreadsheets were telling a quieter story. And MD&A notes — management’s discussion and analysis of the numbers — buried deep in 10-K filings recorded the change, too. R&D had become more expensive to carry. Headcount, the leading R&D expense across the tech industry, was the easiest thing to cut.

In its 2023 annual report, Meta described salaries as its single biggest R&D expense. Between the first and second years that the Section 174 change affected tax returns, Meta cut its total workforce by almost 25%. Over the same period, Microsoft reduced its global headcount by about 7%, with cuts concentrated in product-facing, engineering-heavy roles.

Smaller companies without the fortress-like balance sheets of Big Tech have arguably been hit even harder. Twilio (TWLO) slashed 22% of its workforce in 2023 alone. Shopify (SHOP) cut almost 30% of staff in 2022 and 2023.
Coinbase (COIN) reduced headcount by 36% across a pair of brutal restructuring waves.

Since going into effect, the provision has hit at the very heart of America’s economic growth engine: the tech sector. By market cap, tech giants dominate the S&P 500, with the “Magnificent 7” alone accounting for more than a third of the index’s total value. Workforce numbers tell a similar story, with tech employing millions of Americans directly and supporting the employment of tens of millions more. As measured by GDP, capital-T tech contributes about 10% of national output.

It’s not just that tech layoffs were large; it’s that they were massively disproportionate. Across the broader U.S. economy, job cuts hovered in the low single digits across most sectors. But in tech, entire divisions vanished, with a whopping 60% jump in layoffs between 2022 and 2023. Some cuts reflected real inefficiencies — a response to over-hiring during the zero-interest-rate boom. At the same time, many of the roles eliminated were in R&D, product, and engineering, precisely the kinds of functions that had once benefited from generous tax treatment under Section 174.

Throughout the 2010s, a broad swath of startups, direct-to-consumer brands, and internet-first firms — basically every company you recognize from Instagram or Facebook ads — built their growth models around a kind of engineered break-even. The tax code allowed them to spend aggressively on product and engineering, then write it all off as R&D, keeping their taxable income close to zero by design. It worked because taxable income and actual cash flow were often not the same thing under GAAP accounting practices. Basically, as long as spending counted as R&D, companies could report losses to investors while owing almost nothing to the IRS.

But the Section 174 change broke that model. Once those same expenses had to be spread out, or amortized, over multiple years, the tax shield vanished.
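To see how the engineered break-even breaks, consider a purely hypothetical startup. All numbers below are invented; the 21% figure is the post-TCJA federal corporate rate, and straight-line five-year amortization is again a simplification:

```python
# Invented illustration: a startup spends more on R&D than it earns,
# so its cash flow is negative either way. Under amortization, it
# nonetheless shows taxable "profit" and owes real tax.

CORP_TAX_RATE = 0.21  # post-TCJA federal corporate rate

def year_one(revenue, rd_spend, amortize=False):
    """Return (taxable income, tax owed, after-tax cash flow) for year one."""
    deduction = rd_spend / 5 if amortize else rd_spend  # straight-line simplification
    taxable = max(revenue - deduction, 0)
    tax = taxable * CORP_TAX_RATE
    cash_flow = revenue - rd_spend - tax
    return taxable, tax, cash_flow

# Old rules: the loss-making startup owes nothing.
print(year_one(5_000_000, 6_000_000, amortize=False))
# New rules: the same startup shows $3.8M of taxable income and owes
# roughly $800k in tax, deepening an already negative cash position.
print(year_one(5_000_000, 6_000_000, amortize=True))
```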
Companies that were still burning cash suddenly looked profitable on paper, triggering real tax bills on imaginary gains. The logic that once fueled a generation of digital-first growth collapsed overnight.

So it wasn’t just tech that felt the effects. From 1954 until 2022, the U.S. tax code had encouraged businesses of all stripes to behave like tech companies. From retail to logistics, healthcare to media, if firms built internal tools, customized a software stack, or invested in business intelligence and data-driven product development, they could expense those costs. The write-off incentivized in-house builds and fast growth well outside the capital-T tech sector. This lines up with OECD research showing that immediate deductions foster innovation more than spread-out ones.

And American companies ran with that logic. According to government data, U.S. businesses reported about $500 billion in R&D expenditures in 2019 alone, and almost half of that came from industries outside traditional tech. The Bureau of Economic Analysis estimates that this sector, the broader digital economy, accounts for another 10% of GDP. Add that to core tech’s contribution, and the Section 174 shift has likely touched at least 20% of the U.S. economy.

The result? A tax policy aimed at raising short-term revenue effectively hid a time bomb inside the growth engines of thousands of companies. And when it detonated, it kneecapped the incentive for hiring American engineers or investing in American-made tech and digital products. It made building tech companies in America look irrational on a spreadsheet.

A bipartisan group of lawmakers is pushing to repeal the Section 174 change, with business groups, CFOs, crypto executives, and venture capitalists lobbying hard for retroactive relief. But the politics are messy. Fixing 174 would mean handing a tax break to the same companies many voters in both parties see as symbols of corporate excess.
Any repeal would also come too late for the hundreds of thousands of workers already laid off.

And of course, the losses don’t stop at Meta’s or Google’s campus gates. They ripple out. When high-paid tech workers disappear, so do the lunch orders. The house tours. The contract gigs. The spending habits that sustain entire urban economies and thousands of other jobs. Sandwich artists. Rideshare drivers. Realtors. Personal trainers. House cleaners. In tech-heavy cities, the fallout runs deep — and it’s still unfolding.

Washington is now poised to pass a second Trump tax bill — one packed with more obscure provisions, more delayed impacts, more quiet redistribution. And it comes as analysts are only just beginning to understand the real-world effects of the last round. The Section 174 change “significantly increased the tax burden on companies investing in innovation, potentially stifling economic growth and reducing the United States’ competitiveness on the global stage,” according to the tax consulting firm KBKG. Whether the U.S. will reverse course — or simply adapt to a new normal — remains to be seen.
    #hidden #time #bomb #tax #code
    The hidden time bomb in the tax code that's fueling mass tech layoffs: A decades-old tax rule helped build America's tech economy. A quiet change under Trump helped dismantle it
    For the past two years, it’s been a ghost in the machine of American tech. Between 2022 and today, a little-noticed tweak to the U.S. tax code has quietly rewired the financial logic of how American companies invest in research and development. Outside of CFO and accounting circles, almost no one knew it existed. “I work on these tax write-offs and still hadn’t heard about this,” a chief operating officer at a private-equity-backed tech company told Quartz. “It’s just been so weirdly silent.”AdvertisementStill, the delayed change to a decades-old tax provision — buried deep in the 2017 tax law — has contributed to the loss of hundreds of thousands of high-paying, white-collar jobs. That’s the picture that emerges from a review of corporate filings, public financial data, analysis of timelines, and interviews with industry insiders. One accountant, working in-house at a tech company, described it as a “niche issue with broad impact,” echoing sentiments from venture capital investors also interviewed for this article. Some spoke on condition of anonymity to discuss sensitive political matters.Since the start of 2023, more than half-a-million tech workers have been laid off, according to industry tallies. Headlines have blamed over-hiring during the pandemic and, more recently, AI. But beneath the surface was a hidden accelerant: a change to what’s known as Section 174 that helped gut in-house software and product development teams everywhere from tech giants such as Microsoftand Metato much smaller, private, direct-to-consumer and other internet-first companies.Now, as a bipartisan effort to repeal the Section 174 change moves through Congress, bigger questions are surfacing: How did a single line in the tax code help trigger a tsunami of mass layoffs? And why did no one see it coming? For almost 70 years, American companies could deduct 100% of qualified research and development spending in the year they incurred the costs. 
Salaries, software, contractor payments — if it contributed to creating or improving a product, it came off the top of a firm’s taxable income.AdvertisementThe deduction was guaranteed by Section 174 of the IRS Code of 1954, and under the provision, R&D flourished in the U.S.Microsoft was founded in 1975. Applelaunched its first computer in 1976. Googleincorporated in 1998. Facebook opened to the general public in 2006. All these companies, now among the most valuable in the world, developed their earliest products — programming tools, hardware, search engines — under a tax system that rewarded building now, not later.The subsequent rise of smartphones, cloud computing, and mobile apps also happened in an America where companies could immediately write off their investments in engineering, infrastructure, and experimentation. It was a baseline assumption — innovation and risk-taking subsidized by the tax code — that shaped how founders operated and how investors made decisions.In turn, tech companies largely built their products in the U.S. AdvertisementMicrosoft’s operating systems were coded in Washington state. Apple’s early hardware and software teams were in California. Google’s search engine was born at Stanford and scaled from Mountain View. Facebook’s entire social architecture was developed in Menlo Park. The deduction directly incentivized keeping R&D close to home, rewarding companies for investing in American workers, engineers, and infrastructure.That’s what makes the politics of Section 174 so revealing. 
For all the rhetoric about bringing jobs back and making things in America, the first Trump administration’s major tax bill arguably helped accomplish the opposite.When Congress passed the Tax Cuts and Jobs Act, the signature legislative achievement of President Donald Trump’s first term, it slashed the corporate tax rate from 35% to 21% — a massive revenue loss on paper for the federal government.To make the 2017 bill comply with Senate budget rules, lawmakers needed to offset the cost. So they added future tax hikes that wouldn’t kick in right away, wouldn’t provoke immediate backlash from businesses, and could, in theory, be quietly repealed later.AdvertisementThe delayed change to Section 174 — from immediate expensing of R&D to mandatory amortization, meaning that companies must spread the deduction out in smaller chunks over five or even 15-year periods — was that kind of provision. It didn’t start affecting the budget until 2022, but it helped the TCJA appear “deficit neutral” over the 10-year window used for legislative scoring.The delay wasn’t a technical necessity. It was a political tactic. Such moves are common in tax legislation. Phase-ins and delayed provisions let lawmakers game how the Congressional Budget Office— Congress’ nonpartisan analyst of how bills impact budgets and deficits — scores legislation, pushing costs or revenue losses outside official forecasting windows.And so, on schedule in 2022, the change to Section 174 went into effect. Companies filed their 2022 tax returns under the new rules in early 2023. And suddenly, R&D wasn’t a full, immediate write-off anymore. The tax benefits of salaries for engineers, product and project managers, data scientists, and even some user experience and marketing staff — all of which had previously reduced taxable income in year one — now had to be spread out over five- or 15-year periods. 
To understand the impact, imagine a personal tax code change that allowed you to deduct 100% of your biggest source of expenses, and that becoming a 20% deduction. For cash-strapped companies, especially those not yet profitable, the result was a painful tax bill just as venture funding dried up and interest rates soared.AdvertisementSalesforce office buildings in San Francisco.Photo: Jason Henry/BloombergIt’s no coincidence that Meta announced its “Year of Efficiency” immediately after the Section 174 change took effect. Ditto Microsoft laying off 10,000 employees in January 2023 despite strong earnings, or Google parent Alphabet cutting 12,000 jobs around the same time.Amazonalso laid off almost 30,000 people, with cuts focused not just on logistics but on Alexa and internal cloud tools — precisely the kinds of projects that would have once qualified as immediately deductible R&D. Salesforceeliminated 10% of its staff, or 8,000 people, including entire product teams.In public, companies blamed bloat and AI. But inside boardrooms, spreadsheets were telling a quieter story. And MD&A notes — management’s notes on the numbers — buried deep in 10-K filings recorded the change, too. R&D had become more expensive to carry. Headcount, the leading R&D expense across the tech industry, was the easiest thing to cut.AdvertisementIn its 2023 annual report, Meta described salaries as its single biggest R&D expense. Between the first and second years that the Section 174 change began affecting tax returns, Meta cut its total workforce by almost 25%. Over the same period, Microsoft reduced its global headcount by about 7%, with cuts concentrated in product-facing, engineering-heavy roles.Smaller companies without the fortress-like balance sheets of Big Tech have arguably been hit even harder. Twilioslashed 22% of its workforce in 2023 alone. Shopifycut almost 30% of staff in 2022 and 2023. 
Coinbasereduced headcount by 36% across a pair of brutal restructuring waves.Since going into effect, the provision has hit at the very heart of America’s economic growth engine: the tech sector.By market cap, tech giants dominate the S&P 500, with the “Magnificent 7” alone accounting for more than a third of the index’s total value. Workforce numbers tell a similar story, with tech employing millions of Americans directly and supporting the employment of tens of millions more. As measured by GDP, capital-T tech contributes about 10% of national output.AdvertisementIt’s not just that tech layoffs were large, it’s that they were massively disproportionate. Across the broader U.S. economy, job cuts hovered around in low single digits across most sectors. But in tech, entire divisions vanished, with a whopping 60% jump in layoffs between 2022 and 2023. Some cuts reflected real inefficiencies — a response to over-hiring during the zero-interest rate boom. At the same time, many of the roles eliminated were in R&D, product, and engineering, precisely the kind of functions that had once benefitted from generous tax treatment under Section 174.Throughout the 2010s, a broad swath of startups, direct-to-consumer brands, and internet-first firms — basically every company you recognize from Instagram or Facebook ads — built their growth models around a kind of engineered break-even.The tax code allowed them to spend aggressively on product and engineering, then write it all off as R&D, keeping their taxable income close to zero by design. It worked because taxable income and actual cash flow were often notGAAP accounting practices. Basically, as long as spending counted as R&D, companies could report losses to investors while owing almost nothing to the IRS.But the Section 174 change broke that model. Once those same expenses had to be spread out, or amortized, over multiple years, the tax shield vanished. 
Companies that were still burning cash suddenly looked profitable on paper, triggering real tax bills on imaginary gains.AdvertisementThe logic that once fueled a generation of digital-first growth collapsed overnight.So it wasn’t just tech experiencing effects. From 1954 until 2022, the U.S. tax code had encouraged businesses of all stripes to behave like tech companies. From retail to logistics, healthcare to media, if firms built internal tools, customized a software stack, or invested in business intelligence and data-driven product development, they could expense those costs. The write-off incentivized in-house builds and fast growth well outside the capital-T tech sector. This lines up with OECD research showing that immediate deductions foster innovation more than spread-out ones.And American companies ran with that logic. According to government data, U.S. businesses reported about billion in R&D expenditures in 2019 alone, and almost half of that came from industries outside traditional tech. The Bureau of Economic Analysis estimates that this sector, the broader digital economy, accounts for another 10% of GDP.Add that to core tech’s contribution, and the Section 174 shift has likely touched at least 20% of the U.S. economy.AdvertisementThe result? A tax policy aimed at raising short-term revenue effectively hid a time bomb inside the growth engines of thousands of companies. And when it detonated, it kneecapped the incentive for hiring American engineers or investing in American-made tech and digital products.It made building tech companies in America look irrational on a spreadsheet.A bipartisan group of lawmakers is pushing to repeal the Section 174 change, with business groups, CFOs, crypto executives, and venture capitalists lobbying hard for retroactive relief. But the politics are messy. Fixing 174 would mean handing a tax break to the same companies many voters in both parties see as symbols of corporate excess. 
Any repeal would also come too late for the hundreds of thousands of workers already laid off.And of course, the losses don’t stop at Meta’s or Google’s campus gates. They ripple out. When high-paid tech workers disappear, so do the lunch orders. The house tours. The contract gigs. The spending habits that sustain entire urban economies and thousands of other jobs. Sandwich artists. Rideshare drivers. Realtors. Personal trainers. House cleaners. In tech-heavy cities, the fallout runs deep — and it’s still unfolding.AdvertisementWashington is now poised to pass a second Trump tax bill — one packed with more obscure provisions, more delayed impacts, more quiet redistribution. And it comes as analysts are only just beginning to understand the real-world effects of the last round.The Section 174 change “significantly increased the tax burden on companies investing in innovation, potentially stifling economic growth and reducing the United States’ competitiveness on the global stage,” according to the tax consulting firm KBKG. Whether the U.S. will reverse course — or simply adapt to a new normal — remains to be seen. #hidden #time #bomb #tax #code
    QZ.COM
    The hidden time bomb in the tax code that's fueling mass tech layoffs: A decades-old tax rule helped build America's tech economy. A quiet change under Trump helped dismantle it
    For the past two years, it’s been a ghost in the machine of American tech. Between 2022 and today, a little-noticed tweak to the U.S. tax code has quietly rewired the financial logic of how American companies invest in research and development. Outside of CFO and accounting circles, almost no one knew it existed. “I work on these tax write-offs and still hadn’t heard about this,” a chief operating officer at a private-equity-backed tech company told Quartz. “It’s just been so weirdly silent.”AdvertisementStill, the delayed change to a decades-old tax provision — buried deep in the 2017 tax law — has contributed to the loss of hundreds of thousands of high-paying, white-collar jobs. That’s the picture that emerges from a review of corporate filings, public financial data, analysis of timelines, and interviews with industry insiders. One accountant, working in-house at a tech company, described it as a “niche issue with broad impact,” echoing sentiments from venture capital investors also interviewed for this article. Some spoke on condition of anonymity to discuss sensitive political matters.Since the start of 2023, more than half-a-million tech workers have been laid off, according to industry tallies. Headlines have blamed over-hiring during the pandemic and, more recently, AI. But beneath the surface was a hidden accelerant: a change to what’s known as Section 174 that helped gut in-house software and product development teams everywhere from tech giants such as Microsoft (MSFT) and Meta (META) to much smaller, private, direct-to-consumer and other internet-first companies.Now, as a bipartisan effort to repeal the Section 174 change moves through Congress, bigger questions are surfacing: How did a single line in the tax code help trigger a tsunami of mass layoffs? And why did no one see it coming? For almost 70 years, American companies could deduct 100% of qualified research and development spending in the year they incurred the costs. 
Salaries, software, contractor payments — if it contributed to creating or improving a product, it came off the top of a firm’s taxable income.AdvertisementThe deduction was guaranteed by Section 174 of the IRS Code of 1954, and under the provision, R&D flourished in the U.S.Microsoft was founded in 1975. Apple (AAPL) launched its first computer in 1976. Google (GOOGL) incorporated in 1998. Facebook opened to the general public in 2006. All these companies, now among the most valuable in the world, developed their earliest products — programming tools, hardware, search engines — under a tax system that rewarded building now, not later.The subsequent rise of smartphones, cloud computing, and mobile apps also happened in an America where companies could immediately write off their investments in engineering, infrastructure, and experimentation. It was a baseline assumption — innovation and risk-taking subsidized by the tax code — that shaped how founders operated and how investors made decisions.In turn, tech companies largely built their products in the U.S. AdvertisementMicrosoft’s operating systems were coded in Washington state. Apple’s early hardware and software teams were in California. Google’s search engine was born at Stanford and scaled from Mountain View. Facebook’s entire social architecture was developed in Menlo Park. The deduction directly incentivized keeping R&D close to home, rewarding companies for investing in American workers, engineers, and infrastructure.That’s what makes the politics of Section 174 so revealing. 
For all the rhetoric about bringing jobs back and making things in America, the first Trump administration’s major tax bill arguably helped accomplish the opposite.When Congress passed the Tax Cuts and Jobs Act (TCJA), the signature legislative achievement of President Donald Trump’s first term, it slashed the corporate tax rate from 35% to 21% — a massive revenue loss on paper for the federal government.To make the 2017 bill comply with Senate budget rules, lawmakers needed to offset the cost. So they added future tax hikes that wouldn’t kick in right away, wouldn’t provoke immediate backlash from businesses, and could, in theory, be quietly repealed later.AdvertisementThe delayed change to Section 174 — from immediate expensing of R&D to mandatory amortization, meaning that companies must spread the deduction out in smaller chunks over five or even 15-year periods — was that kind of provision. It didn’t start affecting the budget until 2022, but it helped the TCJA appear “deficit neutral” over the 10-year window used for legislative scoring.The delay wasn’t a technical necessity. It was a political tactic. Such moves are common in tax legislation. Phase-ins and delayed provisions let lawmakers game how the Congressional Budget Office (CBO) — Congress’ nonpartisan analyst of how bills impact budgets and deficits — scores legislation, pushing costs or revenue losses outside official forecasting windows.And so, on schedule in 2022, the change to Section 174 went into effect. Companies filed their 2022 tax returns under the new rules in early 2023. And suddenly, R&D wasn’t a full, immediate write-off anymore. The tax benefits of salaries for engineers, product and project managers, data scientists, and even some user experience and marketing staff — all of which had previously reduced taxable income in year one — now had to be spread out over five- or 15-year periods. 
To understand the impact, imagine a personal tax code change that allowed you to deduct 100% of your biggest source of expenses, and that becoming a 20% deduction. For cash-strapped companies, especially those not yet profitable, the result was a painful tax bill just as venture funding dried up and interest rates soared.AdvertisementSalesforce office buildings in San Francisco.Photo: Jason Henry/Bloomberg (Getty Images)It’s no coincidence that Meta announced its “Year of Efficiency” immediately after the Section 174 change took effect. Ditto Microsoft laying off 10,000 employees in January 2023 despite strong earnings, or Google parent Alphabet cutting 12,000 jobs around the same time.Amazon (AMZN) also laid off almost 30,000 people, with cuts focused not just on logistics but on Alexa and internal cloud tools — precisely the kinds of projects that would have once qualified as immediately deductible R&D. Salesforce (CRM) eliminated 10% of its staff, or 8,000 people, including entire product teams.In public, companies blamed bloat and AI. But inside boardrooms, spreadsheets were telling a quieter story. And MD&A notes — management’s notes on the numbers — buried deep in 10-K filings recorded the change, too. R&D had become more expensive to carry. Headcount, the leading R&D expense across the tech industry, was the easiest thing to cut.AdvertisementIn its 2023 annual report, Meta described salaries as its single biggest R&D expense. Between the first and second years that the Section 174 change began affecting tax returns, Meta cut its total workforce by almost 25%. Over the same period, Microsoft reduced its global headcount by about 7%, with cuts concentrated in product-facing, engineering-heavy roles.Smaller companies without the fortress-like balance sheets of Big Tech have arguably been hit even harder. Twilio (TWLO) slashed 22% of its workforce in 2023 alone. Shopify (SHOP) (headquartered in Canada but with much of its R&D teams in the U.S.) 
cut almost 30% of staff in 2022 and 2023. Coinbase (COIN) reduced headcount by 36% across a pair of brutal restructuring waves.

Since going into effect, the provision has hit at the very heart of America’s economic growth engine: the tech sector. By market cap, tech giants dominate the S&P 500, with the “Magnificent 7” alone accounting for more than a third of the index’s total value. Workforce numbers tell a similar story, with tech employing millions of Americans directly and supporting the employment of tens of millions more. As measured by GDP, capital-T tech contributes about 10% of national output.

It’s not just that tech layoffs were large, it’s that they were massively disproportionate. Across the broader U.S. economy, job cuts hovered in the low single digits across most sectors. But in tech, entire divisions vanished, with a whopping 60% jump in layoffs between 2022 and 2023. Some cuts reflected real inefficiencies — a response to over-hiring during the zero-interest-rate boom. At the same time, many of the roles eliminated were in R&D, product, and engineering, precisely the kinds of functions that had once benefited from generous tax treatment under Section 174.

Throughout the 2010s, a broad swath of startups, direct-to-consumer brands, and internet-first firms — basically every company you recognize from Instagram or Facebook ads — built their growth models around a kind of engineered break-even. The tax code allowed them to spend aggressively on product and engineering, then write it all off as R&D, keeping their taxable income close to zero by design. It worked because taxable income and actual cash flow are often not the same thing under GAAP accounting practices. Basically, as long as spending counted as R&D, companies could report losses to investors while owing almost nothing to the IRS.

But the Section 174 change broke that model. Once those same expenses had to be spread out, or amortized, over multiple years, the tax shield vanished.
Companies that were still burning cash suddenly looked profitable on paper, triggering real tax bills on imaginary gains. The logic that once fueled a generation of digital-first growth collapsed overnight.

And it wasn’t just tech that felt the effects. From 1954 until 2022, the U.S. tax code had encouraged businesses of all stripes to behave like tech companies. From retail to logistics, healthcare to media, if firms built internal tools, customized a software stack, or invested in business intelligence and data-driven product development, they could expense those costs. The write-off incentivized in-house builds and fast growth well outside the capital-T tech sector. This lines up with OECD research showing that immediate deductions foster innovation more than spread-out ones.

And American companies ran with that logic. According to government data, U.S. businesses reported about $500 billion in R&D expenditures in 2019 alone, and almost half of that came from industries outside traditional tech. The Bureau of Economic Analysis estimates that this sector, the broader digital economy, accounts for another 10% of GDP. Add that to core tech’s contribution, and the Section 174 shift has likely touched at least 20% of the U.S. economy.

The result? A tax policy aimed at raising short-term revenue effectively hid a time bomb inside the growth engines of thousands of companies. And when it detonated, it kneecapped the incentive to hire American engineers or invest in American-made tech and digital products. It made building tech companies in America look irrational on a spreadsheet.

A bipartisan group of lawmakers is pushing to repeal the Section 174 change, with business groups, CFOs, crypto executives, and venture capitalists lobbying hard for retroactive relief. But the politics are messy. Fixing 174 would mean handing a tax break to the same companies many voters in both parties see as symbols of corporate excess.
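The “tax bills on imaginary gains” dynamic above can be sketched with hypothetical numbers. Assume a company with $10M in revenue and $12M in R&D payroll, so it is burning $2M in cash, taxed at the 21% corporate rate, with only 10% of R&D deductible in year one under the amortized rules. This is a deliberately simplified illustration that ignores loss carryforwards, R&D credits, and everything else a real return would include.

```python
CORPORATE_RATE = 0.21  # post-TCJA rate

def year_one_tax(revenue: float, rd_spend: float, deductible_fraction: float) -> float:
    """Year-one federal tax under a given fraction of R&D deductible up front.
    Toy model: R&D is the only expense, and taxable income floors at zero."""
    taxable = max(revenue - rd_spend * deductible_fraction, 0.0)
    return taxable * CORPORATE_RATE

# Before the change: the full $12M is deductible, taxable income is zero.
before = year_one_tax(10e6, 12e6, 1.00)   # 0.0
# After: only 10% ($1.2M) is deductible in year one.
after = year_one_tax(10e6, 12e6, 0.10)    # (10e6 - 1.2e6) * 0.21 = 1,848,000
print(before, after)
```

A business losing $2M in cash goes from owing nothing to owing roughly $1.85M, which is the mechanism behind the sudden austerity described above.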
Any repeal would also come too late for the hundreds of thousands of workers already laid off.

And of course, the losses don’t stop at Meta’s or Google’s campus gates. They ripple out. When high-paid tech workers disappear, so do the lunch orders. The house tours. The contract gigs. The spending habits that sustain entire urban economies and thousands of other jobs. Sandwich artists. Rideshare drivers. Realtors. Personal trainers. House cleaners. In tech-heavy cities, the fallout runs deep — and it’s still unfolding.

Washington is now poised to pass a second Trump tax bill — one packed with more obscure provisions, more delayed impacts, more quiet redistribution. And it comes as analysts are only just beginning to understand the real-world effects of the last round.

The Section 174 change “significantly increased the tax burden on companies investing in innovation, potentially stifling economic growth and reducing the United States’ competitiveness on the global stage,” according to the tax consulting firm KBKG. Whether the U.S. will reverse course — or simply adapt to a new normal — remains to be seen.
  • Transparent Design: How See-Through Materials Are Revolutionizing Architecture & Product Design

    Transparent design is the intentional use of see-through or translucent materials and visual strategies to evoke openness, honesty, and fluidity in both spatial and product design. It enhances light flow, visibility, and interaction, blurring boundaries between spaces or revealing inner layers of products.
    In interiors, this manifests through glass walls, acrylic dividers, and open layouts that invite natural light and visual connection. Transparency in product design often exposes internal mechanisms, fostering trust and curiosity by making functions visible. It focuses on simplicity, clarity, and minimalist form, creating seamless connections between objects and their environments. Let’s now explore how transparency shapes the function, experience, and emotional impact of spatial and product design.
    Transparent Spatial Design
    Transparency in spatial design serves as a powerful architectural language that transcends mere material choice, creating profound connections between spaces and their inhabitants. By employing translucent or clear elements, designers can dissolve traditional boundaries, allowing light to penetrate deeply into interiors while establishing visual relationships between previously separated areas. This permeability creates a dynamic spatial experience where environments flow into one another, expanding perceived dimensions and fostering a sense of openness. The strategic use of transparent elements – whether through glass partitions, open floor plans, or permeable screens – transforms rigid spatial hierarchies into fluid, interconnected zones that respond to contemporary needs for flexibility and connection with both surrounding spaces and natural environments.
    Beyond its physical manifestations, transparency embodies deeper philosophical principles in design, representing honesty, clarity, and accessibility. It democratizes space by removing visual barriers that traditionally signaled exclusion or privacy, instead promoting inclusivity and shared experience. In public buildings, transparent features invite engagement and participation, while in residential contexts, they nurture connection to nature and enhance wellbeing through abundant natural light. This approach challenges designers to thoughtfully balance openness with necessary privacy, creating nuanced spatial sequences that can reveal or conceal as needed. When skillfully implemented, transparency becomes more than an aesthetic choice; it becomes a fundamental design strategy that shapes how we experience, navigate, and emotionally respond to our built environment.
    1. Expands Perception of Space
    Transparency in spatial design enhances how people perceive space by blurring the boundaries between rooms and creating a seamless connection between the indoors and the outdoors. Materials like glass and acrylic create visual continuity, making interiors feel larger, more open, and seamlessly integrated.
    This approach encourages a fluid transition between spaces, eliminates confinement, and promotes spatial freedom. As a result, transparent design contributes to an inviting atmosphere while maximizing natural views and light penetration throughout the environment.

    Nestled in St. Donat near Montreal, the Apple Tree House by ACDF Architecture is a striking example of transparent design rooted in emotional memory. Wrapped around a central courtyard with a symbolic apple tree, the low-slung home features expansive glass walls that create continuous visual access to nature. The transparent layout not only blurs the boundaries between indoors and outdoors but also transforms the apple tree into a living focal point, visible from multiple angles and spaces within the house.

    This thoughtful transparency allows natural light to flood the interiors while connecting the home’s occupants with the changing seasons outside. The home’s square-shaped plan includes three black-clad volumes that house bedrooms, a lounge, and service areas. Despite the openness, privacy is preserved through deliberate wall placements. Wooden ceilings and concrete floors add warmth and texture, but it’s the full-height glazing that defines the home, framing nature as a permanent, ever-evolving artwork at its heart.
    2. Enhances the Feeling of Openness
    One of the core benefits of transparent design is its ability to harness natural light, transforming enclosed areas into luminous, uplifting environments. By using translucent or clear materials, designers reduce the need for artificial lighting and minimize visual barriers.
    This not only improves energy efficiency but also fosters emotional well-being by connecting occupants to daylight and exterior views. Ultimately, transparency promotes a feeling of openness and calm, aligning with minimalist and modern architectural principles.

    The Living O’Pod by UN10 Design Studio is a transparent, two-story pod designed as a minimalist retreat that fully immerses its occupants in nature. Built with a steel frame and glass panels all around, this glass bubble offers uninterrupted panoramic views of the Finnish wilderness. Its remote location provides the privacy needed to embrace transparency, allowing residents to enjoy stunning sunrises, sunsets, and starry nights from within. The open design blurs the line between indoors and outdoors, creating a unique connection with the environment.

    Located in Repovesi, Finland, the pod’s interiors feature warm plywood floors and walls that complement the natural setting. A standout feature is its 360° rotation, which allows the entire structure to turn and capture optimal light and views throughout the day. Equipped with thermal insulation and heating, the Living O’Pod ensures comfort year-round and builds a harmonious relationship between people and nature.
    3. Encourages Interaction
    Transparent design reimagines interiors as active participants in the user experience, rather than passive backgrounds. Open sightlines and clear partitions encourage movement, visibility, and spontaneous interaction among occupants. This layout strategy fosters social connectivity, enhances spatial navigation, and aligns with contemporary needs for collaboration and flexibility.
    Whether in residential, commercial, or public spaces, transparency supports an intuitive spatial flow that strengthens the emotional and functional relationship between people and their environment.

    The Beach Cabin on the Baltic Sea, designed by Peter Kuczia, is a striking architectural piece located near Gdansk in northern Poland. This small gastronomy facility combines simplicity with bold design, harmoniously fitting into the beach environment while standing out through its innovative form. The structure is composed of two distinct parts: an enclosed space and an expansive open living and dining area that maximizes natural light and offers shelter. This dual arrangement creates a balanced yet dynamic architectural composition that respects the surrounding landscape.

    A defining feature of the cabin is its open dining area, which is divided into two sections—one traditional cabin-style and the other constructed entirely of glass. The transparent glass facade provides uninterrupted panoramic views of the Baltic Sea, the shoreline, and the sky, enhancing the connection between interior and nature. Elevated on stilts, the building appears to float above the sand, minimizing environmental impact and contributing to its ethereal, dreamlike quality.
    Transparent Product Design
    In product design, transparency serves as both a functional strategy and a powerful communicative tool that transforms the relationship between users and objects. By revealing internal components and operational mechanisms through clear or translucent materials, designers create an immediate visual understanding of how products function, demystifying technology and inviting engagement. This design approach establishes an honest dialogue with consumers, building trust through visibility rather than concealment. Beyond mere aesthetics, transparent design celebrates the beauty of engineering, turning circuit boards, gears, and mechanical elements into intentional visual features that tell the product’s story. From the nostalgic appeal of see-through gaming consoles to modern tech accessories, this approach satisfies our innate curiosity about how things work while creating a more informed user experience.
    The psychological impact of transparency in products extends beyond functional clarity to create deeper emotional connections. When users can observe a product’s inner workings, they develop increased confidence in its quality and craftsmanship, fostering a sense of reliability that opaque designs often struggle to convey. This visibility also democratizes understanding, making complex technologies more accessible and less intimidating to diverse users. Transparent design elements can evoke powerful nostalgic associations while simultaneously appearing futuristic and innovative, creating a timeless appeal that transcends trends. By embracing transparency, designers reject the notion that complexity should be hidden, instead celebrating the intricate engineering that powers our everyday objects. This philosophy aligns perfectly with contemporary values of authenticity and mindful consumption, where users increasingly seek products that communicate honesty in both form and function.
    1. Reveals Functionality
    Transparent product design exposes internal components like wiring, gears, or circuits, turning functional parts into visual features. This approach demystifies the object, inviting users to understand how it works rather than hiding its complexity. It fosters appreciation for craftsmanship and engineering while encouraging educational curiosity. By showcasing what lies beneath the surface, designers build an honest relationship with consumers that is based on clarity, trust, and visible function.

    Packing a backpack often means tossing everything in and hoping for the best—until you need something fast. This transparent modular backpack concept reimagines that daily hassle with a clear, compartmentalized design that lets you see all your gear at a glance. No more digging through a dark abyss—every item has its visible place. The bag features four detachable, differently sized boxes that snap together with straps, letting you customize what you carry. Grab just the tech module or gym gear block and go—simple, efficient, and streamlined. Unlike traditional organizers that hide contents in pouches, the transparent material keeps everything in plain sight, saving time and frustration.

    While it raises valid concerns around privacy and security, the clarity and convenience it offers make it ideal for fast-paced, on-the-go lifestyles. With form meeting function, this concept shows how transparent design can transform not just how a bag looks, but how it works.
    2. Enhances User Engagement
    When users can see how a product operates, they feel more confident using it. Transparent casings invite interaction by reducing uncertainty about internal processes. This visible clarity reassures users about the product’s integrity and quality, creating a psychological sense of openness and reliability.
    Especially in tech and appliances, this strategy deepens user trust and adds emotional value by allowing a more intimate connection with the design’s purpose and construction.

    The transparent Sony Glass Blue WF-C710N earbuds represent something more meaningful than a mere aesthetic choice, embodying a refreshing philosophy of technological honesty. While most devices conceal their inner workings behind opaque shells, Sony’s decision to reveal the intricate circuitry and precision components celebrates the engineering artistry that makes these tiny audio marvels possible.

    As you catch glimpses of copper coils and circuit boards through the crystal-clear housing, there’s a renewed appreciation for the invisible complexity that delivers your favorite music, serving as a visual reminder that sometimes the most beautiful designs are those that have nothing to hide.
    3. Celebrates Aesthetic Engineering
    Transparency turns utilitarian details into design features, allowing users to visually experience the beauty of inner mechanisms. This trend, seen in everything from vintage electronics to modern gadgets and watches, values technical artistry as much as outer form.
    Transparent design redefines aesthetics by focusing on the raw, mechanical truth of a product. It appeals to minimalism and industrial design lovers, offering visual depth and storytelling through exposed structure rather than decorative surface embellishment.

    DAB Motors’ 1α Transparent Edition brings retro tech flair into modern mobility with its striking transparent bodywork. Inspired by the see-through gadgets of the ’90s — like the Game Boy Color and clear Nintendo controllers — this electric motorcycle reveals its inner mechanics with style. The semi-translucent panels offer a rare peek at the bike’s intricate engineering, blending nostalgia with innovation. Carbon fiber elements, sourced from repurposed Airbus materials, complement the lightweight transparency, creating a visual experience that’s both futuristic and rooted in classic design aesthetics.

    The see-through design isn’t just for looks—it enhances the connection between rider and machine. Exposed components like the integrated LCD dashboard, lenticular headlight, and visible frame structure emphasize function and precision. This openness aligns with a broader transparent design philosophy, where clarity and honesty in construction are celebrated. The DAB 1α turns heads not by hiding complexity, but by proudly displaying it, making every ride a statement in motion.
    Beyond just materials, transparent design also reflects a deeper design philosophy that values clarity in purpose, function, and sustainability. It supports minimalist thinking by focusing on what’s essential, reducing visual clutter, and making spaces or products easier to understand and engage with. Whether in interiors or objects, transparency helps create a more honest, functional, and connected user experience.

    The post Transparent Design: How See-Through Materials Are Revolutionizing Architecture & Product Design first appeared on Yanko Design.
    #transparent #design #how #seethrough #materials
    Transparent Design: How See-Through Materials Are Revolutionizing Architecture & Product Design
    Transparent design is the intentional use of see-through or translucent materials and visual strategies to evoke openness, honesty, and fluidity in both spatial and product design. It enhances light flow, visibility, and interaction, blurring boundaries between spaces or revealing inner layers of products. In interiors, this manifests through glass walls, acrylic dividers, and open layouts that invite natural light and visual connection. Transparency in product design often exposes internal mechanisms in products, fostering trust and curiosity by making functions visible. It focuses on simplicity, clarity, and minimalist form, creating seamless connections between objects and their environments. Let’s now explore how transparency shapes the function, experience, and emotional impact of spatial and product design. Transparent Spatial Design Transparency in spatial design serves as a powerful architectural language that transcends mere material choice, creating profound connections between spaces and their inhabitants. By employing translucent or clear elements, designers can dissolve traditional boundaries, allowing light to penetrate deeply into interiors while establishing visual relationships between previously separated areas. This permeability creates a dynamic spatial experience where environments flow into one another, expanding perceived dimensions and fostering a sense of openness. The strategic use of transparent elements – whether through glass partitions, open floor plans, or permeable screens – transforms rigid spatial hierarchies into fluid, interconnected zones that respond to contemporary needs for flexibility and connection with both surrounding spaces and natural environments. Beyond its physical manifestations, transparency embodies deeper philosophical principles in design, representing honesty, clarity, and accessibility. 
It democratizes space by removing visual barriers that traditionally signaled exclusion or privacy, instead promoting inclusivity and shared experience. In public buildings, transparent features invite engagement and participation, while in residential contexts, they nurture connection to nature and enhance wellbeing through abundant natural light. This approach challenges designers to thoughtfully balance openness with necessary privacy, creating nuanced spatial sequences that can reveal or conceal as needed. When skillfully implemented, transparency becomes more than an aesthetic choice, it becomes a fundamental design strategy that shapes how we experience, navigate, and emotionally respond to our built environment. 1. Expands Perception of Space Transparency in spatial design enhances how people perceive space by blurring the boundaries between rooms and creating a seamless connection between the indoors and the outdoors. Materials like glass and acrylic create visual continuity, making interiors feel larger, more open, and seamlessly integrated. This approach encourages a fluid transition between spaces, eliminates confinement, and promotes spatial freedom. As a result, transparent design contributes to an inviting atmosphere while maximising natural views and light penetration throughout the environment. Nestled in St. Donat near Montreal, the Apple Tree House by ACDF Architecture is a striking example of transparent design rooted in emotional memory. Wrapped around a central courtyard with a symbolic apple tree, the low-slung home features expansive glass walls that create continuous visual access to nature. The transparent layout not only blurs the boundaries between indoors and outdoors but also transforms the apple tree into a living focal point and is visible from multiple angles and spaces within the house. This thoughtful transparency allows natural light to flood the interiors while connecting the home’s occupants with the changing seasons outside. 
The home’s square-shaped plan includes three black-clad volumes that house bedrooms, a lounge, and service areas. Despite the openness, privacy is preserved through deliberate wall placements. Wooden ceilings and concrete floors add warmth and texture, but it’s the full-height glazing that defines the home that frames nature as a permanent, ever-evolving artwork at its heart. 2. Enhances the Feeling of Openness One of the core benefits of transparent design is its ability to harness natural light, transforming enclosed areas into luminous, uplifting environments. By using translucent or clear materials, designers reduce the need for artificial lighting and minimize visual barriers. This not only improves energy efficiency but also fosters emotional well-being by connecting occupants to daylight and exterior views. Ultimately, transparency promotes a feeling of openness and calm, aligning with minimalist and modern architectural principles. The Living O’Pod by UN10 Design Studio is a transparent, two-story pod designed as a minimalist retreat that fully immerses its occupants in nature. Built with a steel frame and glass panels all around, this glass bubble offers uninterrupted panoramic views of the Finnish wilderness. Its remote location provides the privacy needed to embrace transparency, allowing residents to enjoy stunning sunrises, sunsets, and starry nights from within. The open design blurs the line between indoors and outdoors, creating a unique connection with the environment. Located in Repovesi, Finland, the pod’s interiors feature warm plywood floors and walls that complement the natural setting. A standout feature is its 360° rotation, which allows the entire structure to turn and capture optimal light and views throughout the day. Equipped with thermal insulation and heating, the Living O’Pod ensures comfort year-round and builds a harmonious relationship between people and nature. 3. 
Encourages Interaction Transparent design reimagines interiors as active participants in the user experience, rather than passive backgrounds. Open sightlines and clear partitions encourage movement, visibility, and spontaneous interaction among occupants. This layout strategy fosters social connectivity, enhances spatial navigation, and aligns with contemporary needs for collaboration and flexibility. Whether in residential, commercial, or public spaces, transparency supports an intuitive spatial flow that strengthens the emotional and functional relationship between people and their environment. The Beach Cabin on the Baltic Sea, designed by Peter Kuczia, is a striking architectural piece located near Gdansk in northern Poland. This small gastronomy facility combines simplicity with bold design, harmoniously fitting into the beach environment while standing out through its innovative form. The structure is composed of two distinct parts: an enclosed space and an expansive open living and dining area that maximizes natural light and offers shelter. This dual arrangement creates a balanced yet dynamic architectural composition that respects the surrounding landscape. A defining feature of the cabin is its open dining area, which is divided into two sections—one traditional cabin-style and the other constructed entirely of glass. The transparent glass facade provides uninterrupted panoramic views of the Baltic Sea, the shoreline, and the sky, enhancing the connection between interior and nature. Elevated on stilts, the building appears to float above the sand, minimizing environmental impact and contributing to its ethereal, dreamlike quality. Transparent Product Design In product design, transparency serves as both a functional strategy and a powerful communicative tool that transforms the relationship between users and objects. 
By revealing internal components and operational mechanisms through clear or translucent materials, designers create an immediate visual understanding of how products function, demystifying technology and inviting engagement. This design approach establishes an honest dialogue with consumers, building trust through visibility rather than concealment. Beyond mere aesthetics, transparent design celebrates the beauty of engineering, turning circuit boards, gears, and mechanical elements into intentional visual features that tell the product’s story. From the nostalgic appeal of see-through gaming consoles to modern tech accessories, this approach satisfies our innate curiosity about how things work while creating a more informed user experience. The psychological impact of transparency in products extends beyond functional clarity to create deeper emotional connections. When users can observe a product’s inner workings, they develop increased confidence in its quality and craftsmanship, fostering a sense of reliability that opaque designs often struggle to convey. This visibility also democratizes understanding, making complex technologies more accessible and less intimidating to diverse users. Transparent design elements can evoke powerful nostalgic associations while simultaneously appearing futuristic and innovative, creating a timeless appeal that transcends trends. By embracing transparency, designers reject the notion that complexity should be hidden, instead celebrating the intricate engineering that powers our everyday objects. This philosophy aligns perfectly with contemporary values of authenticity and mindful consumption, where users increasingly seek products that communicate honesty in both form and function. 1. Reveals Functionality Transparent product design exposes internal components like wiring, gears, or circuits, turning functional parts into visual features. 
    WWW.YANKODESIGN.COM
    Transparent Design: How See-Through Materials Are Revolutionizing Architecture & Product Design
    Transparent design is the intentional use of see-through or translucent materials and visual strategies to evoke openness, honesty, and fluidity in both spatial and product design. It enhances light flow, visibility, and interaction, blurring boundaries between spaces or revealing the inner layers of products. In interiors, this manifests through glass walls, acrylic dividers, and open layouts that invite natural light and visual connection. In product design, transparency often exposes internal mechanisms, fostering trust and curiosity by making functions visible. It favors simplicity, clarity, and minimalist form, creating seamless connections between objects and their environments. Let’s now explore how transparency shapes the function, experience, and emotional impact of spatial and product design.

Transparent Spatial Design

Transparency in spatial design serves as a powerful architectural language that transcends mere material choice, creating profound connections between spaces and their inhabitants. By employing translucent or clear elements, designers can dissolve traditional boundaries, allowing light to penetrate deep into interiors while establishing visual relationships between previously separated areas. This permeability creates a dynamic spatial experience in which environments flow into one another, expanding perceived dimensions and fostering a sense of openness. The strategic use of transparent elements – whether glass partitions, open floor plans, or permeable screens – transforms rigid spatial hierarchies into fluid, interconnected zones that respond to contemporary needs for flexibility and for connection with both surrounding spaces and natural environments. Beyond its physical manifestations, transparency embodies deeper philosophical principles in design, representing honesty, clarity, and accessibility.
It democratizes space by removing visual barriers that traditionally signaled exclusion or privacy, instead promoting inclusivity and shared experience. In public buildings, transparent features invite engagement and participation, while in residential contexts, they nurture a connection to nature and enhance wellbeing through abundant natural light. This approach challenges designers to thoughtfully balance openness with necessary privacy, creating nuanced spatial sequences that can reveal or conceal as needed. When skillfully implemented, transparency becomes more than an aesthetic choice; it becomes a fundamental design strategy that shapes how we experience, navigate, and emotionally respond to our built environment.

1. Expands Perception of Space

Transparency in spatial design enhances how people perceive space by blurring the boundaries between rooms and creating a seamless connection between indoors and outdoors. Materials like glass and acrylic create visual continuity, making interiors feel larger, more open, and seamlessly integrated. This approach encourages fluid transitions between spaces, eliminates confinement, and promotes spatial freedom. As a result, transparent design contributes to an inviting atmosphere while maximising natural views and light penetration throughout the environment. Nestled in St. Donat near Montreal, the Apple Tree House by ACDF Architecture is a striking example of transparent design rooted in emotional memory. Wrapped around a central courtyard with a symbolic apple tree, the low-slung home features expansive glass walls that create continuous visual access to nature. The transparent layout not only blurs the boundaries between indoors and outdoors but also transforms the apple tree into a living focal point, visible from multiple angles and spaces within the house. This thoughtful transparency allows natural light to flood the interiors while connecting the home’s occupants with the changing seasons outside.
The home’s square plan includes three black-clad volumes that house bedrooms, a lounge, and service areas. Despite the openness, privacy is preserved through deliberate wall placements. Wooden ceilings and concrete floors add warmth and texture, but it’s the full-height glazing that defines the home, framing nature as a permanent, ever-evolving artwork at its heart.

2. Enhances the Feeling of Openness

One of the core benefits of transparent design is its ability to harness natural light, transforming enclosed areas into luminous, uplifting environments. By using translucent or clear materials, designers reduce the need for artificial lighting and minimize visual barriers. This not only improves energy efficiency but also fosters emotional well-being by connecting occupants to daylight and exterior views. Ultimately, transparency promotes a feeling of openness and calm, aligning with minimalist and modern architectural principles. The Living O’Pod by UN10 Design Studio is a transparent, two-story pod designed as a minimalist retreat that fully immerses its occupants in nature. Built with a steel frame and glass panels all around, this glass bubble offers uninterrupted panoramic views of the Finnish wilderness. Its remote location provides the privacy needed to embrace transparency, allowing residents to enjoy stunning sunrises, sunsets, and starry nights from within. The open design blurs the line between indoors and outdoors, creating a unique connection with the environment. Located in Repovesi, Finland, the pod’s interiors feature warm plywood floors and walls that complement the natural setting. A standout feature is its 360° rotation, which allows the entire structure to turn to capture optimal light and views throughout the day. Equipped with thermal insulation and heating, the Living O’Pod ensures year-round comfort and builds a harmonious relationship between people and nature.

3. Encourages Interaction

Transparent design reimagines interiors as active participants in the user experience rather than passive backgrounds. Open sightlines and clear partitions encourage movement, visibility, and spontaneous interaction among occupants. This layout strategy fosters social connectivity, enhances spatial navigation, and aligns with contemporary needs for collaboration and flexibility. Whether in residential, commercial, or public spaces, transparency supports an intuitive spatial flow that strengthens the emotional and functional relationship between people and their environment. The Beach Cabin on the Baltic Sea, designed by Peter Kuczia, is a striking architectural piece located near Gdansk in northern Poland. This small gastronomy facility combines simplicity with bold design, fitting harmoniously into the beach environment while standing out through its innovative form. The structure is composed of two distinct parts: an enclosed space and an expansive open living and dining area that maximizes natural light and offers shelter. This dual arrangement creates a balanced yet dynamic architectural composition that respects the surrounding landscape. A defining feature of the cabin is its open dining area, which is divided into two sections – one traditional cabin-style and the other constructed entirely of glass. The transparent glass facade provides uninterrupted panoramic views of the Baltic Sea, the shoreline, and the sky, enhancing the connection between interior and nature. Elevated on stilts, the building appears to float above the sand, minimizing environmental impact and contributing to its ethereal, dreamlike quality.

Transparent Product Design

In product design, transparency serves as both a functional strategy and a powerful communicative tool that transforms the relationship between users and objects.
By revealing internal components and operational mechanisms through clear or translucent materials, designers create an immediate visual understanding of how products function, demystifying technology and inviting engagement. This approach establishes an honest dialogue with consumers, building trust through visibility rather than concealment. Beyond mere aesthetics, transparent design celebrates the beauty of engineering, turning circuit boards, gears, and mechanical elements into intentional visual features that tell the product’s story. From the nostalgic appeal of see-through gaming consoles to modern tech accessories, this approach satisfies our innate curiosity about how things work while creating a more informed user experience. The psychological impact of transparency in products extends beyond functional clarity to create deeper emotional connections. When users can observe a product’s inner workings, they develop increased confidence in its quality and craftsmanship, fostering a sense of reliability that opaque designs often struggle to convey. This visibility also democratizes understanding, making complex technologies more accessible and less intimidating to diverse users. Transparent design elements can evoke powerful nostalgic associations while simultaneously appearing futuristic and innovative, creating a timeless appeal that transcends trends. By embracing transparency, designers reject the notion that complexity should be hidden, instead celebrating the intricate engineering that powers our everyday objects. This philosophy aligns with contemporary values of authenticity and mindful consumption, where users increasingly seek products that communicate honesty in both form and function.

1. Reveals Functionality

Transparent product design exposes internal components like wiring, gears, or circuits, turning functional parts into visual features.
This approach demystifies the object, inviting users to understand how it works rather than hiding its complexity. It fosters appreciation for craftsmanship and engineering while encouraging educational curiosity. By showcasing what lies beneath the surface, designers build an honest relationship with consumers based on clarity, trust, and visible function. Packing a backpack often means tossing everything in and hoping for the best—until you need something fast. This transparent modular backpack concept reimagines that daily hassle with a clear, compartmentalized design that lets you see all your gear at a glance. No more digging through a dark abyss—every item has its visible place. The bag features four detachable, differently sized boxes that snap together with straps, letting you customize what you carry. Grab just the tech module or the gym-gear block and go—simple, efficient, and streamlined. Unlike traditional organizers that hide contents in pouches, the transparent material keeps everything in plain sight, saving time and frustration. While it raises valid concerns around privacy and security, the clarity and convenience it offers make it ideal for fast-paced, on-the-go lifestyles. With form meeting function, this concept shows how transparent design can transform not just how a bag looks, but how it works.

2. Enhances User Engagement

When users can see how a product operates, they feel more confident using it. Transparent casings invite interaction by reducing uncertainty about internal processes. This visible clarity reassures users about the product’s integrity and quality, creating a psychological sense of openness and reliability. Especially in tech and appliances, this strategy deepens user trust and adds emotional value by allowing a more intimate connection with the design’s purpose and construction.
The transparent Sony Glass Blue WF-C710N earbuds represent something more meaningful than a mere aesthetic choice, embodying a refreshing philosophy of technological honesty. While most devices conceal their inner workings behind opaque shells, Sony’s decision to reveal the intricate circuitry and precision components celebrates the engineering artistry that makes these tiny audio marvels possible. As you catch glimpses of copper coils and circuit boards through the crystal-clear housing, there’s a renewed appreciation for the invisible complexity that delivers your favorite music, serving as a visual reminder that sometimes the most beautiful designs are those that have nothing to hide.

3. Celebrates Aesthetic Engineering

Transparency turns utilitarian details into design features, allowing users to visually experience the beauty of inner mechanisms. This trend, seen in everything from vintage electronics to modern gadgets and watches, values technical artistry as much as outer form. Transparent design redefines aesthetics by focusing on the raw, mechanical truth of a product. It appeals to lovers of minimalism and industrial design, offering visual depth and storytelling through exposed structure rather than decorative surface embellishment. DAB Motors’ 1α Transparent Edition brings retro tech flair into modern mobility with its striking transparent bodywork. Inspired by the see-through gadgets of the ’90s—like the Game Boy Color and clear Nintendo controllers—this electric motorcycle reveals its inner mechanics with style. The semi-translucent panels offer a rare peek at the bike’s intricate engineering, blending nostalgia with innovation. Carbon fiber elements, sourced from repurposed Airbus materials, complement the lightweight transparency, creating a visual experience that’s both futuristic and rooted in classic design aesthetics. The see-through design isn’t just for looks—it enhances the connection between rider and machine.
Exposed components like the integrated LCD dashboard, lenticular headlight, and visible frame structure emphasize function and precision. This openness aligns with a broader transparent design philosophy, where clarity and honesty in construction are celebrated. The DAB 1α turns heads not by hiding complexity but by proudly displaying it, making every ride a statement in motion. Beyond materials, transparent design also reflects a deeper design philosophy that values clarity in purpose, function, and sustainability. It supports minimalist thinking by focusing on what’s essential, reducing visual clutter, and making spaces or products easier to understand and engage with. Whether in interiors or objects, transparency helps create a more honest, functional, and connected user experience.

The post Transparent Design: How See-Through Materials Are Revolutionizing Architecture & Product Design first appeared on Yanko Design.
  • ‘Tate’s Bake Shop Cookbook’ Is a Pleasant Throwback to a Simpler Age

    We may earn a commission from links on this page. Welcome to “Cookbook of the Week.” This is a series where I highlight cookbooks that are unique, easy to use, or just special to me. While finding a particular recipe online serves a quick purpose, flipping through a truly excellent cookbook has a magic all its own. My cookbook of the week is often a hot new release, unless I decide to spotlight one that has been out for a few years. But I haven’t done a real throwback cookbook in a while. My very first cookbook of the week was Hershey’s Best-Loved Recipes, and while not quite as old, this week’s selection has been my trusted companion for quite some time. This week I chose to highlight Tate’s Bake Shop Cookbook not only because it is packed with recipes for fabulous sweet treats, but because it always offers a nice break from the annoyances of modern internet baking.

A bit about the book

Tate’s Bake Shop is an actual bakery in the Hamptons on Long Island. It’s a small shop with creaky wooden floors and a warm atmosphere—at least that’s how I remember it from when I worked in Bridgehampton for a summer. I would occasionally pop in and grab some cookies, but this was long before I realized they were the Tate’s cookies—before their green bags started popping up in every grocery store cookie aisle. You may have tried the crispy, flat cookies Tate’s is now famous for, but did you know the bakery makes more than cookies? This cookbook is from the founder of Tate’s Bake Shop, Kathleen King. It turns out she makes a heck of a cookie...and a heck of a pie, and scone, and blueberry buckle. I love Tate's Bake Shop Cookbook because it’s filled with reliable, classic bakes. The entire Tate's brand is built on homemade, cozy, old-fashioned vibes, and that’s what you'll find in the pages of this cookbook. There’s nothing flashy about it. It’s not striving to be part of your coffee table decor.
The recipes are mostly one-pagers with short headnotes and simple text, and you’ll only find pictures in the center section. This is a cookbook that’s meant to be dog-eared, annotated, used by your kids, and accidentally splattered with flour—a cookbook made to be loved.

A great cookbook for a spoon and a bowl

While I’ve owned this cookbook for nearly 15 years, I haven’t cracked it open in a while. I meandered through the recipes and marked some titles that caught my eye, or that I remembered being tasty. As I read through the short directions, I noticed some trends: Most of the recipes are mixed by hand, several are from family or friends, and King uses salted butter without a care in the world for anyone else's opinion. Seeing a cookbook, especially a baking cookbook, filled with short, easy-to-follow recipes is a breath of fresh air. Recipes that don’t require an electric mixer are almost too good to be true. But here they are, each recipe enticing in its simplicity: Sour Cream Pound Cake, Chocolate Jumbles, Sticky Toffee Date Pudding, and the famous chocolate chip cookie you know from the store. Reading these recipes feels almost soothing. Dramatic, I know. But I often feel like social media recipes and newer cookbooks are throwing everything at me at once to catch my attention. This cookbook seems less like an attempt to impress readers with trends or shock us with new flavor combinations, and more like a collection of personal favorite recipes from your hometown baker. Baking from this cookbook feels like pastry meditation. No need to plug in an appliance or pause a YouTube video. Grab a bowl and a wooden spoon and take a moment to make something delicious. It’s great for a beginner baker, anyone who enjoys baking in theory but hates dirtying too many bowls, or anyone who checks out when recipes get complicated.

The dish I baked this week

    Credit: Allie Chanthorn Reinmann

    I love a cookie, but we already know how good the Tate’s cookie is, so I wanted to showcase something else. Luckily, blueberry season is here, and that made my decision for me. I settled on the Blueberry Buckle. Without taking a picture of the actual recipe, I want to illustrate the simplicity of this buckle: The instructions for the whole cake, crumb topping included, run just 12 lines. The headnote includes a three-sentence story about how it won a bake-off in Maine, and how King’s niece improved the crumb texture. If you’ve ever just wanted a recipe to cut to the chase, this is it. A buckle is a cake-like treat with a crumb topping and fresh fruit mixed into it. The cake batter is easy to stir together by hand. Using salted butter eliminates worrying about measuring yet another ingredient, and all of the other ingredients were readily available in my pantry. In roughly 15 minutes, I was ready to throw an entire cake into the oven. I don’t know that I’ve had a buckle before, but I definitely would have voted for this to win that bake-off. The cake component is utterly tender, and I don’t really know why or how—there’s no sour cream or buttermilk involved. It must just be a perfect balance of tenderizing fat and strengthening gluten. The ratio of blueberries to cake is also perfect. I know folks are always begging for more berries, but if you add too many, the berries sink or make the cake too wet. The crumb topping is exactly as it should be—sweet, buttery, and lightly spiced. It’s good enough to eat on its own. I could see myself making this buckle for a picnic, or a friend’s summer birthday brunch. June is just around the corner, so I'll keep my copy of Tate’s Bake Shop Cookbook handy for other berry-centric bakes this summer.

How to buy it

Despite being an older book, it’s still available in hardcover. However, if you’re keen to save a buck, check out your local used bookstores.
Older books like this are almost always available used for a fraction of the original retail price. If you’re more of a digital baker, you can also spend less and download the ebook. 

    Tate's Bake Shop Cookbook: The Best Recipes from Southampton's Favorite Bakery for Homestyle Cookies, Cakes, Pies, Muffins, and Breads

    LIFEHACKER.COM
    ‘Tate’s Bake Shop Cookbook’ Is a Pleasant Throwback to a Simpler Age
    We may earn a commission from links on this page.Welcome to “Cookbook of the Week.” This is a series where I highlight cookbooks that are unique, easy to use, or just special to me. While finding a particular recipe online serves a quick purpose, flipping through a truly excellent cookbook has a magic all its own. My cookbook of the week is often a hot new release, unless I decide to spotlight one that has been out for a few years. But I haven’t done a real throwback cookbook in a while. My very first cookbook of the week was Hershey’s Best-Loved Recipes, and while not quite as old, this week’s selection has been my trusted companion for quite some time. This week I chose to highlight Tate’s Bake Shop Cookbook not only because it is packed withrecipes for fabulous sweet treats, but because it always offers a nice break from the annoyances of modern internet baking.A bit about the bookTate’s Bake Shop is an actual bakery in the Hamptons on Long Island. It’s a small shop with creaky wooden floors and a warm atmosphere—at least that’s how I remember it from when I worked in Bridgehampton for a summer. I would occasionally pop in and grab some cookies, but this was long before I realized they were the Tate’s Cookies—before their green bags started popping up in every grocery store cookie aisle. You may have tried the crispy, flat cookies Tate’s is now famous for, but did you know that they make more than cookies?This cookbook is from the founder of Tate’s Bake Shop, Kathleen King. It turns out she makes a heck of a cookie...and a heck of a pie, and scone, and blueberry buckle. I love Tate's Bake Shop Cookbook because it’s filled with reliable, classic bakes. The entire Tate's brand is built on homemade, cozy, old-fashioned vibes, and that’s what you'll find in the pages of this cookbook. There’s nothing flashy about it. It’s not striving to be a part of your coffee table decor. 
The recipes are mostly one-pagers with short head notes and simple text, and you’ll only find pictures in the center section. This is a cookbook that’s meant to be dog-eared, annotated, used by your kids, and accidentally splattered with flour—a cookbook made to be loved.A great cookbook for a spoon and a bowlWhile I’ve owned this cookbook for nearly 15 years, I haven’t cracked it open in a while. I meandered through the recipes and marked some titles that caught my eye, or that I remembered being tasty. As I read through the short directions, I noticed some trends: most of the recipes are mixed by hand, several recipes are from family or friends, and King uses salted butter without a care in the world for anyone else's opinion. Seeing a cookbook, especially a baking cookbook, filled with short, easy to follow recipes is a breath of fresh air. Recipes that don’t require the use of an electric mixer are almost too good to be true. But here it is, each recipe is enticing in its simplicity: Sour Cream Pound Cake, Chocolate Jumbles, Sticky Toffee Date Pudding, and the recipe for the famous chocolate chip cookie that you know from the store. Reading these recipes feels almost soothing. Dramatic, I know. But I often feel like social media recipes and newer cookbooks are throwing everything at me at once to catch my attention. This cookbook seems less an attempt at impressing readers with being on trend or shocking us with new flavor combinations, and more like a collection of personal favorite recipes from your hometown baker. Baking from this cookbook feels like pastry meditation. No need to plug in an appliance or pause a YouTube video. Grab a bowl and a wooden spoon and take a moment to make something delicious. 
It’s great for a beginner baker, or anyone who enjoys baking in theory but hates dirtying too many bowls, or when recipes get complicated.The dish I baked this week Credit: Allie Chanthorn Reinmann I love a cookie, but we already know how good the Tate’s cookie is, so I wanted to showcase something else. Luckily, blueberry season is here, and that made my decision for me. I settled on the Blueberry Buckle. Without taking a picture of the actual recipe (which isn’t cool to do), I want to illustrate the simplicity of this buckle: The instructions for the whole cake, with a crumb topping, are completed in 12 lines. The headnote includes a three-sentence story about how it won a bake-off in Maine, and how King’s niece improved the crumb texture. If you’ve ever just wanted a recipe to cut to the chase, this is it.A buckle is a cake-like treat with a crumb topping and fresh fruit mixed into it. (Between buckles, betties, cobblers, and crisps, it’s easy to get confused.) The cake batter is easy to stir together by hand. Employing salted butter eliminates worrying about measuring yet another ingredient, and all of the other ingredients were readily available in my pantry. In roughly 15 minutes, I was ready to throw an entire cake into the oven. I don’t know that I’ve had a buckle before, but I definitely would have voted for this to win that bake-off. The cake component is utterly tender, and I don’t really know why or how—there’s no sour cream or buttermilk involved. It must just be a perfect balance of tenderizing fat and strengthening gluten. The ratio of blueberries to cake is also perfect. I know folks are always begging for more berries, but if you have too many then the berries sink or they make the cake too wet. The crumb topping is exactly as it should be—sweet, buttery, and lightly spiced. It’s good enough to eat on its own. I could see myself making this buckle for a picnic, or a friend’s summer birthday brunch. 
June is just around the corner, so I'll keep my copy of Tate’s Bake Shop Cookbook handy for other berry-centric bakes this summer.

How to buy it

Despite being an older book, it’s still available in hardcover. However, if you’re keen to save a buck, check out your local used bookstores. Older books like this are almost always available used for a fraction of the original retail price. If you’re more of a digital baker, you can also spend less and download the ebook.

Tate's Bake Shop Cookbook: The Best Recipes from Southampton's Favorite Bakery for Homestyle Cookies, Cakes, Pies, Muffins, and Breads, $31.94 at Amazon
  • These stories could change how you feel about AI

Here’s a selection of recent headlines about artificial intelligence, picked more or less at random:

For some recent graduates, the AI job apocalypse may already be here
Artificial intelligence threatens to raid the water reserves of Europe’s driest regions
Top AI CEO foresees white-collar bloodbath

Okay, not exactly at random — I did look for more doomy-sounding headlines. But they weren’t hard to find. That’s because numerous studies indicate that negative or fear-framed coverage of AI in mainstream media tends to outnumber positive framings. And to be clear, there are good reasons for that! From disinformation to cyberwarfare to autonomous weapons to massive job loss to the actual, flat-out end of the world, there are a lot of things that could go very, very wrong with AI. But as in so many other areas, the emphasis on the negative in artificial intelligence risks overshadowing what could go right — both in the future as this technology continues to develop and right now. As a corrective, here’s a roundup of one way AI is already making a positive difference in each of three important fields.

Science

Whenever anyone asks me about an unquestionably good use of AI, I point to one thing: AlphaFold. After all, how many other AI models have won their creators an actual Nobel Prize? AlphaFold, which was developed by the Google-owned AI company DeepMind, is an AI model that predicts the 3D structure of a protein based solely on its amino acid sequence. That’s important because scientists need to predict the shape of a protein to better understand how it might function and how it might be used in products like drugs. That’s known as the “protein-folding problem” — and it was a problem because while human researchers could eventually figure out the structure of a protein, it would often take them years of laborious work in the lab to do so.
AlphaFold, through machine-learning methods I couldn’t explain to you if I tried, can make predictions in as little as five seconds, with accuracy that is almost as good as gold-standard experimental methods. By speeding up a basic part of biomedical research, AlphaFold has already meaningfully accelerated drug development in everything from Huntington’s disease to antibiotic resistance. And Google DeepMind’s decision last year to open-source AlphaFold 3, its most advanced model, for non-commercial academic use has greatly expanded the number of researchers who can take advantage of it.

Medicine

You wouldn’t know it from watching medical dramas like The Pitt, but doctors spend a lot of time doing paperwork — two hours of it for every one hour they actually spend with a patient, by one count. Finding a way to cut down that time could free up doctors to practice actual medicine and help stem the problem of burnout. That’s where AI is already making a difference. As the Wall Street Journal reported this week, health care systems across the country are employing “AI scribes” — systems that automatically capture doctor-patient discussions, update medical records, and generally automate as much as possible around the documentation of a medical interaction. In one pilot study employing AI scribes from Microsoft and a startup called Abridge, doctors cut daily documentation time from 90 minutes to under 30 minutes. Not only do ambient-listening AI products free doctors from much of the need to take manual notes, but they can eventually connect new data from a doctor-patient interaction with existing medical records and ensure connections and insights on care don’t fall through the cracks. “I see it being able to provide insights about the patient that the human mind just can’t do in a reasonable time,” Dr.
Lance Owens, regional chief medical information officer at University of Michigan Health, told the Journal.

Climate

A timely warning about a natural disaster can mean the difference between life and death, especially in already vulnerable poor countries. That is why Google Flood Hub is so important. An open-access, AI-driven river-flood early warning system, Flood Hub provides seven-day flood forecasts for 700 million people in 100 countries. It works by marrying a global hydrology model that can forecast river levels even in basins that lack physical flood gauges with an inundation model that converts those predicted levels into high-resolution flood maps. This allows villagers to see exactly which roads or fields might end up underwater. Flood Hub, to my mind, is one of the clearest examples of how AI can be used for good for those who need it most. Though many rich countries like the US are included in Flood Hub, they mostly already have infrastructure in place to forecast the effects of extreme weather. But many poor countries lack those capabilities. AI’s ability to drastically reduce the labor and cost of such forecasts has made it possible to extend those lifesaving capabilities to those who need them most.

One more cool thing: The NGO GiveDirectly — which provides direct cash payments to the global poor — has experimented with using Flood Hub warnings to send families hundreds of dollars in cash aid days before an expected flood, to help them prepare for the worst. As the threat of extreme weather grows, thanks to climate change and population movement, this is the kind of cutting-edge philanthropy we need more of.

AI for good

Even what seem to be the best applications of AI can come with drawbacks. The same kind of AI technology that allows AlphaFold to help speed drug development could conceivably be used one day to more rapidly design bioweapons. AI scribes in medicine raise questions about patient confidentiality and the risk of hacking.
And while it’s hard to find fault with an AI system that can help warn poor people about natural disasters, the lack of internet access in the poorest countries can limit the value of those warnings — and there’s not much AI can do to change that. But with the headlines around AI leaning so apocalyptic, it’s easy to overlook the tangible benefits AI already delivers. Ultimately, AI is a tool. A powerful tool, but a tool nonetheless. And like any tool, what it does — bad and good — will be determined by how we use it.

A version of this story originally appeared in the Good News newsletter. Sign up here!
  • New EDDIESTEALER Malware Bypasses Chrome's App-Bound Encryption to Steal Browser Data

    May 30, 2025Ravie LakshmananBrowser Security / Malware

    A new malware campaign is distributing a novel Rust-based information stealer dubbed EDDIESTEALER using the popular ClickFix social engineering tactic initiated via fake CAPTCHA verification pages.
    "This campaign leverages deceptive CAPTCHA verification pages that trick users into executing a malicious PowerShell script, which ultimately deploys the infostealer, harvesting sensitive data such as credentials, browser information, and cryptocurrency wallet details," Elastic Security Labs researcher Jia Yu Chan said in an analysis.
The attack chains begin with threat actors compromising legitimate websites with malicious JavaScript payloads that serve bogus CAPTCHA check pages, which prompt site visitors to "prove you are not [a] robot" by following a three-step process, a prevalent tactic called ClickFix.
This involves instructing the potential victim to open the Windows Run dialog, paste an already copied command into the "verification window" (i.e., the Run dialog), and press Enter. This causes the obfuscated PowerShell command to be executed, retrieving a next-stage payload from an external server ("llll[.]fit").
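The ClickFix lure hinges on getting a victim to paste an obfuscated PowerShell one-liner into the Run dialog. As a purely illustrative defensive sketch (the patterns below are my own assumptions, not signatures from the Elastic report), a log-analysis heuristic might flag command lines carrying traits common to such lures: hidden console windows, base64-encoded commands, and download-and-execute pipelines.

```python
import re

# Hypothetical heuristics for ClickFix-style PowerShell command lines.
# These are illustrative examples, not Elastic's detection logic.
SUSPICIOUS_PATTERNS = [
    r"(?i)-w(indowstyle)?\s+hidden",                      # hidden console window
    r"(?i)-e(nc|ncodedcommand)?\s+[A-Za-z0-9+/=]{20,}",   # long base64 payload
    r"(?i)\b(iwr|invoke-webrequest|curl)\b.*\|\s*iex\b",  # download | execute
]

def looks_like_clickfix(cmdline: str) -> bool:
    """Return True if a command line matches any ClickFix-style pattern."""
    return any(re.search(p, cmdline) for p in SUSPICIOUS_PATTERNS)
```

A defender might run such checks over Run-dialog history or process-creation logs; real detection engineering would of course need far more robust rules.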
The JavaScript payload ("gverify.js") is subsequently saved to the victim's Downloads folder and executed using cscript in a hidden window. The main goal of the intermediate script is to fetch the EDDIESTEALER binary from the same remote server and store it in the Downloads folder with a pseudorandom 12-character file name.
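The "pseudorandom 12-character file name" trick is a common dropper habit for evading name-based blocklists. A minimal sketch of the idea (the actual alphabet and RNG EDDIESTEALER uses are not given in the report; letters and digits are assumed here for illustration):

```python
import random
import string

def pseudorandom_name(length: int = 12) -> str:
    """Generate a pseudorandom file name of the kind droppers use to
    avoid static, name-based detections. Alphabet is an assumption."""
    alphabet = string.ascii_letters + string.digits
    return "".join(random.choices(alphabet, k=length))
```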
Written in Rust, EDDIESTEALER is a commodity stealer malware that can gather system metadata, receive tasks from a command-and-control (C2) server, and siphon data of interest from the infected host. The exfiltration targets include cryptocurrency wallets, web browsers, password managers, FTP clients, and messaging apps.
    "These targets are subject to change as they are configurable by the C2 operator," Elastic explained. "EDDIESTEALER then reads the targeted files using standard kernel32.dll functions like CreateFileW, GetFileSizeEx, ReadFile, and CloseHandle."

    The collected host information is encrypted and transmitted to the C2 server in a separate HTTP POST request after the completion of each task.
    Besides incorporating string encryption, the malware employs a custom WinAPI lookup mechanism for resolving API calls and creates a mutex to ensure that only one version is running at any given time. It also incorporates checks to determine if it's being executed in a sandboxed environment, and if so, deletes itself from disk.
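The single-instance mutex mentioned above is a standard Windows pattern (CreateMutexW with a fixed name; a second copy sees ERROR_ALREADY_EXISTS and exits). A portable analogue of the same idea, sketched here with an exclusive lock file rather than a Win32 mutex, purely to illustrate the mechanism:

```python
import os
import tempfile

# Illustrative analogue of a named-mutex single-instance guard:
# the first process to create the lock file atomically "wins";
# any later instance hits FileExistsError and backs off.
DEFAULT_LOCK = os.path.join(tempfile.gettempdir(), "single_instance.lock")

def acquire_single_instance(path: str = DEFAULT_LOCK) -> bool:
    """Return True if this process is the first instance, False otherwise."""
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False
```

On Windows the malware's version of this is a kernel object rather than a file, but the control flow is the same: probe, and self-terminate if the guard already exists.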
    "Based on a similar self-deletion technique observed in Latrodectus, EDDIESTEALER is capable of deleting itself through NTFS Alternate Data Streams renaming, to bypass file locks," Elastic noted.
    Another noteworthy feature built into the stealer is its ability to bypass Chromium's app-bound encryption to gain access to unencrypted sensitive data, such as cookies. This is accomplished by including a Rust implementation of ChromeKatz, an open-source tool that can dump cookies and credentials from the memory of Chromium-based browsers.
The Rust version of ChromeKatz also incorporates changes to handle scenarios where the targeted Chromium browser is not running. In such cases, it spawns a new browser instance using the command-line argument "--window-position=-3000,-3000", effectively positioning the new window far off-screen and making it invisible to the user.

The objective in opening the browser is to enable the malware to read the memory associated with the network service child process of Chrome, identified by the "--utility-sub-type=network.mojom.NetworkService" flag, and ultimately extract the credentials.
    Elastic said it also identified updated versions of the malware with features to harvest running processes, GPU information, number of CPU cores, CPU name, and CPU vendor. In addition, the new variants tweak the C2 communication pattern by preemptively sending the host information to the server before receiving the task configuration.
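The host-fingerprinting data listed above (CPU name, core count, and so on) is the kind of metadata any process can read without special privileges. A benign sketch of equivalent collection using only Python's standard library (the malware itself does this via WinAPI calls, not Python):

```python
import os
import platform

def host_fingerprint() -> dict:
    """Collect the sort of host metadata the newer EDDIESTEALER variants
    are reported to harvest. Illustrative only; field names are assumptions."""
    return {
        "os": platform.system(),           # e.g. "Windows", "Linux"
        "machine": platform.machine(),     # e.g. "x86_64"
        "cpu_name": platform.processor(),  # may be empty on some platforms
        "cpu_cores": os.cpu_count(),
    }
```

That this requires no elevated access is exactly why stealers bundle it for free alongside credential theft.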
    That's not all. The encryption key used for client-to-server communication is hard-coded into the binary, as opposed to retrieving it dynamically from the server. Furthermore, the stealer has been found to launch a new Chrome process with the --remote-debugging-port=<port_num> flag to enable DevTools Protocol over a local WebSocket interface so as to interact with the browser in a headless manner, without requiring any user interaction.
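For context on the remote-debugging abuse: when Chrome runs with --remote-debugging-port=N, it serves a JSON index at http://localhost:N/json listing debuggable targets, each carrying a webSocketDebuggerUrl that a DevTools Protocol client connects to. A minimal sketch of parsing that index (the sample response below is abbreviated and invented for illustration):

```python
import json

# Abbreviated, made-up example of what Chrome's /json endpoint returns.
SAMPLE_RESPONSE = """[{
  "id": "A1B2",
  "type": "page",
  "url": "https://example.com/",
  "webSocketDebuggerUrl": "ws://localhost:9222/devtools/page/A1B2"
}]"""

def debugger_endpoints(body: str) -> list:
    """Extract the WebSocket endpoints a DevTools Protocol client
    (legitimate tooling or, as here, malware) would connect to."""
    return [t["webSocketDebuggerUrl"] for t in json.loads(body)
            if "webSocketDebuggerUrl" in t]
```

This is the same interface used by legitimate automation tools such as Puppeteer, which is what makes its abuse hard to distinguish from normal headless browsing.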
    "This adoption of Rust in malware development reflects a growing trend among threat actors seeking to leverage modern language features for enhanced stealth, stability, and resilience against traditional analysis workflows and threat detection engines," the company said.
The disclosure comes as c/side revealed details of a ClickFix campaign that targets multiple platforms, including Apple macOS, Android, and iOS, using browser-based redirections, fake UI prompts, and drive-by downloads.
The attack chain starts with obfuscated JavaScript hosted on a website that, when visited from macOS, initiates a series of redirections to a page that guides victims to launch Terminal and run a shell script, which leads to the download of a stealer malware flagged on VirusTotal as Atomic macOS Stealer.
    However, the same campaign has been configured to initiate a drive-by download scheme when visiting the web page from an Android, iOS, or Windows device, leading to the deployment of another trojan malware.

The disclosures coincide with the emergence of new stealer malware families like Katz Stealer and AppleProcessHub Stealer, which target Windows and macOS respectively and are capable of harvesting a wide range of information from infected hosts, according to Nextron and Kandji.
    Katz Stealer, like EDDIESTEALER, is engineered to circumvent Chrome's app-bound encryption, but in a different way by employing DLL injection to obtain the encryption key without administrator privileges and use it to decrypt encrypted cookies and passwords from Chromium-based browsers.

    "Attackers conceal malicious JavaScript in gzip files, which, when opened, trigger the download of a PowerShell script," Nextron said. "This script retrieves a .NET-based loader payload, which injects the stealer into a legitimate process. Once active, it exfiltrates stolen data to the command and control server."
    AppleProcessHub Stealer, on the other hand, is designed to exfiltrate user files including bash history, zsh history, GitHub configurations, SSH information, and iCloud Keychain.
    Attack sequences distributing the malware entail the use of a Mach-O binary that downloads a second-stage bash stealer script from the server "appleprocesshubcom" and runs it, the results of which are then exfiltrated back to the C2 server. Details of the malware were first shared by the MalwareHunterTeam on May 15, 2025, and by MacPaw's Moonlock Lab last week.
    "This is an example of a Mach-O written in Objective-C which communicates with a command and control server to execute scripts," Kandji researcher Christopher Lopez said.

    Found this article interesting? Follow us on Twitter  and LinkedIn to read more exclusive content we post.

    SHARE




    #new #eddiestealer #malware #bypasses #chrome039s
    New EDDIESTEALER Malware Bypasses Chrome's App-Bound Encryption to Steal Browser Data
    May 30, 2025Ravie LakshmananBrowser Security / Malware A new malware campaign is distributing a novel Rust-based information stealer dubbed EDDIESTEALER using the popular ClickFix social engineering tactic initiated via fake CAPTCHA verification pages. "This campaign leverages deceptive CAPTCHA verification pages that trick users into executing a malicious PowerShell script, which ultimately deploys the infostealer, harvesting sensitive data such as credentials, browser information, and cryptocurrency wallet details," Elastic Security Labs researcher Jia Yu Chan said in an analysis. The attack chains begin with threat actors compromising legitimate websites with malicious JavaScript payloads that serve bogus CAPTCHA check pages, which prompt site visitors to "prove you are notrobot" by following a three-step process, a prevalent tactic called ClickFix. This involves instructing the potential victim to open the Windows Run dialog prompt, paste an already copied command into the "verification window", and press enter. This effectively causes the obfuscated PowerShell command to be executed, resulting in the retrieval of a next-stage payload from an external server. The JavaScript payloadis subsequently saved to the victim's Downloads folder and executed using cscript in a hidden window. The main goal of the intermediate script is to fetch the EDDIESTEALER binary from the same remote server and store it in the Downloads folder with a pseudorandom 12-character file name. Written in Rust, EDDIESTEALER is a commodity stealer malware that can gather system metadata, receive tasks from a command-and-controlserver, and siphon data of interest from the infected host. The exfiltration targets include cryptocurrency wallets, web browsers, password managers, FTP clients, and messaging apps. "These targets are subject to change as they are configurable by the C2 operator," Elastic explained. 
    THEHACKERNEWS.COM
    New EDDIESTEALER Malware Bypasses Chrome's App-Bound Encryption to Steal Browser Data
    May 30, 2025Ravie LakshmananBrowser Security / Malware A new malware campaign is distributing a novel Rust-based information stealer dubbed EDDIESTEALER using the popular ClickFix social engineering tactic initiated via fake CAPTCHA verification pages. "This campaign leverages deceptive CAPTCHA verification pages that trick users into executing a malicious PowerShell script, which ultimately deploys the infostealer, harvesting sensitive data such as credentials, browser information, and cryptocurrency wallet details," Elastic Security Labs researcher Jia Yu Chan said in an analysis. The attack chains begin with threat actors compromising legitimate websites with malicious JavaScript payloads that serve bogus CAPTCHA check pages, which prompt site visitors to "prove you are not [a] robot" by following a three-step process, a prevalent tactic called ClickFix. This involves instructing the potential victim to open the Windows Run dialog prompt, paste an already copied command into the "verification window" (i.e., the Run dialog), and press enter. This effectively causes the obfuscated PowerShell command to be executed, resulting in the retrieval of a next-stage payload from an external server ("llll[.]fit"). The JavaScript payload ("gverify.js") is subsequently saved to the victim's Downloads folder and executed using cscript in a hidden window. The main goal of the intermediate script is to fetch the EDDIESTEALER binary from the same remote server and store it in the Downloads folder with a pseudorandom 12-character file name. Written in Rust, EDDIESTEALER is a commodity stealer malware that can gather system metadata, receive tasks from a command-and-control (C2) server, and siphon data of interest from the infected host. The exfiltration targets include cryptocurrency wallets, web browsers, password managers, FTP clients, and messaging apps. "These targets are subject to change as they are configurable by the C2 operator," Elastic explained. 
"EDDIESTEALER then reads the targeted files using standard kernel32.dll functions like CreateFileW, GetFileSizeEx, ReadFile, and CloseHandle." The collected host information is encrypted and transmitted to the C2 server in a separate HTTP POST request after the completion of each task.

Besides string encryption, the malware employs a custom WinAPI lookup mechanism for resolving API calls and creates a mutex to ensure that only one instance is running at any given time. It also checks whether it's being executed in a sandboxed environment and, if so, deletes itself from disk. "Based on a similar self-deletion technique observed in Latrodectus, EDDIESTEALER is capable of deleting itself through NTFS Alternate Data Streams renaming, to bypass file locks," Elastic noted.

Another noteworthy feature built into the stealer is its ability to bypass Chromium's app-bound encryption to gain access to unencrypted sensitive data, such as cookies. This is accomplished by bundling a Rust implementation of ChromeKatz, an open-source tool that can dump cookies and credentials from the memory of Chromium-based browsers. The Rust version of ChromeKatz also incorporates changes to handle scenarios where the targeted Chromium browser is not running. In such cases, it spawns a new browser instance using the command-line arguments "--window-position=-3000,-3000 https://google.com," effectively positioning the new window far off-screen and making it invisible to the user.

Opening the browser this way allows the malware to read the memory associated with Chrome's network service child process, identified by the "--utility-sub-type=network.mojom.NetworkService" flag, and ultimately extract the credentials. Elastic said it also identified updated versions of the malware with features to harvest running processes, GPU information, the number of CPU cores, the CPU name, and the CPU vendor.
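The "custom WinAPI lookup mechanism" Elastic describes typically works by storing hashes of API names rather than the strings themselves, then walking a module's export table and hashing each export until a match is found, which keeps telltale strings like "CreateFileW" out of the binary. A simplified Python sketch of the idea; the djb2 hash and the mocked export table are illustrative assumptions, not EDDIESTEALER's actual algorithm:

```python
def djb2(name: str) -> int:
    # Classic djb2 rolling hash; real stealers usually use a custom variant.
    h = 5381
    for byte in name.encode("ascii"):
        h = ((h * 33) + byte) & 0xFFFFFFFF
    return h

# The binary would carry only these 32-bit hashes, never the API names.
WANTED = {djb2(n): n for n in ("CreateFileW", "GetFileSizeEx", "ReadFile", "CloseHandle")}

def resolve(export_names, target_hash):
    # Walk a (mocked) export table and return the name whose hash matches.
    for name in export_names:
        if djb2(name) == target_hash:
            return name
    return None

exports = ["CloseHandle", "CreateFileW", "ReadFile", "WriteFile"]
print(resolve(exports, djb2("ReadFile")))  # ReadFile
```

In a real loader, `exports` would come from parsing the PE export directory of kernel32.dll in memory, and the returned value would be the function's address rather than its name.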
In addition, the new variants tweak the C2 communication pattern by preemptively sending the host information to the server before receiving the task configuration. The encryption key used for client-to-server communication is now hard-coded into the binary, as opposed to being retrieved dynamically from the server. Furthermore, the stealer has been found to launch a new Chrome process with the --remote-debugging-port=&lt;port_num&gt; flag, enabling the DevTools Protocol over a local WebSocket interface so it can interact with the browser in a headless manner, without requiring any user interaction.

"This adoption of Rust in malware development reflects a growing trend among threat actors seeking to leverage modern language features for enhanced stealth, stability, and resilience against traditional analysis workflows and threat detection engines," the company said.

The disclosure comes as c/side revealed details of a ClickFix campaign that targets multiple platforms, including Apple macOS, Android, and iOS, using browser-based redirections, fake UI prompts, and drive-by downloads. The attack chain starts with obfuscated JavaScript hosted on a website that, when visited from macOS, initiates a series of redirections to a page that guides victims to launch Terminal and run a shell script, which leads to the download of stealer malware flagged on VirusTotal as the Atomic macOS Stealer (AMOS). The same campaign, however, is configured to initiate a drive-by download scheme when the page is visited from an Android, iOS, or Windows device, leading to the deployment of other trojan malware.

The disclosures coincide with the emergence of new stealer malware families like Katz Stealer and AppleProcessHub Stealer, which target Windows and macOS respectively and are capable of harvesting a wide range of information from infected hosts, according to Nextron and Kandji.
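The remote-debugging trick works because a Chrome process started with --remote-debugging-port exposes the Chrome DevTools Protocol (CDP) on a local WebSocket, where a single JSON message can request every cookie in the profile. A hedged Python sketch of the two pieces involved; the browser path is a placeholder and no browser is launched here:

```python
import json

def chrome_debug_cmdline(port: int = 9222) -> list:
    # Flags reported in the article: a local debugging port plus an
    # off-screen window position so the window stays out of sight.
    return [
        r"C:\Program Files\Google\Chrome\Application\chrome.exe",  # placeholder path
        f"--remote-debugging-port={port}",
        "--window-position=-3000,-3000",
    ]

def cdp_get_all_cookies(msg_id: int = 1) -> str:
    # CDP message a client would send over the local WebSocket;
    # Chrome replies with a JSON object listing the profile's cookies.
    return json.dumps({"id": msg_id, "method": "Network.getAllCookies"})

print(chrome_debug_cmdline())
print(cdp_get_all_cookies())
```

Because the cookies arrive already decrypted through a legitimate Chrome interface, this path sidesteps app-bound encryption entirely, which is why monitoring for unexpected --remote-debugging-port launches is a common detection recommendation.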
Katz Stealer, like EDDIESTEALER, is engineered to circumvent Chrome's app-bound encryption, but in a different way: it employs DLL injection to obtain the encryption key without administrator privileges and uses it to decrypt encrypted cookies and passwords from Chromium-based browsers. "Attackers conceal malicious JavaScript in gzip files, which, when opened, trigger the download of a PowerShell script," Nextron said. "This script retrieves a .NET-based loader payload, which injects the stealer into a legitimate process. Once active, it exfiltrates stolen data to the command and control server."

AppleProcessHub Stealer, on the other hand, is designed to exfiltrate user files including bash history, zsh history, GitHub configurations, SSH information, and the iCloud Keychain. Attack sequences distributing the malware entail the use of a Mach-O binary that downloads a second-stage bash stealer script from the server "appleprocesshub[.]com" and runs it, with the results then exfiltrated back to the C2 server. Details of the malware were first shared by MalwareHunterTeam on May 15, 2025, and by MacPaw's Moonlock Lab last week. "This is an example of a Mach-O written in Objective-C which communicates with a command and control server to execute scripts," Kandji researcher Christopher Lopez said.
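Nextron's description of Katz Stealer's delivery, with malicious JavaScript concealed inside gzip files, suggests a simple triage check: every gzip stream begins with the magic bytes 0x1f 0x8b, so a file claiming to be another type but carrying that header deserves a closer look. A minimal sketch of such a check, with a bounded decompression to avoid decompression-bomb payloads:

```python
import gzip
import io

GZIP_MAGIC = b"\x1f\x8b"

def is_gzip(data: bytes) -> bool:
    # Gzip streams always start with the two magic bytes 0x1f 0x8b (RFC 1952).
    return data[:2] == GZIP_MAGIC

def peek_gzip_member(data: bytes, limit: int = 256) -> bytes:
    # Decompress only the first `limit` bytes of the embedded payload so a
    # maliciously oversized archive can't exhaust memory during triage.
    with gzip.GzipFile(fileobj=io.BytesIO(data)) as gz:
        return gz.read(limit)

blob = gzip.compress(b"alert('not really javascript');")
print(is_gzip(blob), peek_gzip_member(blob))
```

Pairing the magic-byte check with a peek at the decompressed content (e.g., looking for `eval`, `ActiveXObject`, or long base64 runs) is enough to flag most gzip-wrapped script droppers for deeper analysis.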