• Plagiarism by AI is becoming a real problem. Multiple artists are already affected, and your creations could be next. With the rise of AI and generative AI, those "inspiring" social accounts that copy and republish content wholesale to build an audience they can monetize have found effective ways to automate the process. We already knew they were using generative AI to produce eye-catching visuals and drive engagement; now the copying itself is automated too.

    #Plagiarism #AIArt #GenerativeAI #ContentCreation #ArtistsRights
    3dvf.com
  • Managers rethink ecological scenarios as threats rise amid climate change

    In Sequoia and Kings Canyon National Parks in California, trees that have persisted through rain and shine for thousands of years are now facing multiple threats triggered by a changing climate.

    Scientists and park managers once thought giant sequoia forests were nearly impervious to stressors like wildfire, drought and pests. Yet, even very large trees are proving vulnerable, particularly when those stressors are amplified by rising temperatures and increasing weather extremes.

    The rapid pace of climate change—combined with threats like the spread of invasive species and diseases—can affect ecosystems in ways that defy expectations based on past experiences. As a result, Western forests are transitioning to grasslands or shrublands after unprecedented wildfires. Woody plants are expanding into coastal wetlands. Coral reefs are being lost entirely.

    To protect these places, which are valued for their natural beauty and the benefits they provide for recreation, clean water and wildlife, forest and land managers increasingly must anticipate risks they have never seen before. And they must prepare for what those risks will mean for stewardship as ecosystems rapidly transform.

    As ecologists and a climate scientist, we’re helping them figure out how to do that.

    Managing changing ecosystems

    Traditional management approaches focus on maintaining or restoring how ecosystems looked and functioned historically.

    However, that doesn’t always work when ecosystems are subjected to new and rapidly shifting conditions.

    Ecosystems have many moving parts—plants, animals, fungi, and microbes, along with the soil, air, and water in which they live—that interact with one another in complex ways.

    When the climate changes, it’s like shifting the ground on which everything rests. The results can undermine the integrity of the system, leading to ecological changes that are hard to predict.

    To plan for an uncertain future, natural resource managers need to consider many different ways changes in climate and ecosystems could affect their landscapes. Essentially, what scenarios are possible?

    Preparing for multiple possibilities

    At Sequoia and Kings Canyon, park managers were aware that climate change posed some big risks to the iconic trees under their care. More than a decade ago, they undertook a major effort to explore different scenarios that could play out in the future.

    It’s a good thing they did, because some of the more extreme possibilities they imagined happened sooner than expected.

    In 2014, drought in California caused the giant sequoias’ foliage to die back, something never documented before. In 2017, sequoia trees began dying from insect damage. And, in 2020 and 2021, fires burned through sequoia groves, killing thousands of ancient trees.

    While these extreme events came as a surprise to many people, thinking through the possibilities ahead of time meant the park managers had already begun to take steps that proved beneficial. One example was prioritizing prescribed burns to remove undergrowth that could fuel hotter, more destructive fires.

    The key to effective planning is thoughtful consideration of a suite of strategies that are likely to succeed in the face of many different changes in climate and ecosystems. That involves thinking through wide-ranging potential outcomes to see how different strategies might fare under each scenario—including preparing for catastrophic possibilities, even those considered unlikely.

    For example, prescribed burning may reduce risks from both catastrophic wildfire and drought by reducing the density of plant growth, whereas suppressing all fires could increase those risks in the long run.

    Strategies undertaken today have consequences for decades to come. Managers need to have confidence that they are making good investments when they put limited resources toward actions like forest thinning, invasive species control, buying seeds or replanting trees. Scenarios can help inform those investment choices.
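
    To make that concrete, here is a minimal, hypothetical sketch of how strategies can be scored across scenarios and screened for robustness. None of the strategy names, scenario names, or scores come from the parks' actual planning; in practice they would come from models or expert elicitation.

    ```python
    # A minimal sketch of scenario-based strategy screening. All strategy
    # names, scenario names, and scores are hypothetical illustrations.

    scores = {
        # strategy -> {scenario: projected outcome score, higher is better}
        "prescribed_burns": {"hot_drought": 7, "megafire": 8, "mild_change": 8},
        "full_suppression": {"hot_drought": 4, "megafire": 2, "mild_change": 9},
        "thin_and_replant": {"hot_drought": 6, "megafire": 6, "mild_change": 7},
    }

    def worst_case(strategy: str) -> int:
        """Score under the strategy's least favorable scenario."""
        return min(scores[strategy].values())

    def max_regret(strategy: str) -> int:
        """Largest shortfall versus the best available strategy in any scenario."""
        return max(
            max(scores[s][sc] for s in scores) - scores[strategy][sc]
            for sc in scores[strategy]
        )

    # A robust investment performs acceptably across futures, not just in
    # the single future judged most likely.
    for name in scores:
        print(f"{name}: worst case = {worst_case(name)}, max regret = {max_regret(name)}")
    ```

    Under either criterion, a strategy that is merely adequate in every future can beat one that excels only in the single future judged most likely.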

    Constructing credible scenarios of ecological change to inform this type of planning requires considering the most important unknowns. Scenarios look not only at how the climate could change, but also at how complex ecosystems could react and what surprises might lie beyond the horizon.

    Scientists at the North Central Climate Adaptation Science Center are collaborating with managers in the Nebraska Sandhills to develop scenarios of future ecological change under different climate conditions, disturbance events like fires and extreme droughts, and land uses like grazing.

    Key ingredients for crafting ecological scenarios

    To provide some guidance to people tasked with managing these landscapes, we brought together a group of experts in ecology, climate science, and natural resource management from across universities and government agencies.

    We identified three key ingredients for constructing credible ecological scenarios:

    1. Embracing ecological uncertainty: Instead of banking on one “most likely” outcome for ecosystems in a changing climate, managers can better prepare by mapping out multiple possibilities. In Nebraska’s Sandhills, we are exploring how this mostly intact native prairie could transform, with outcomes as divergent as woodlands and open dunes.

    2. Thinking in trajectories: It’s helpful to consider not just the outcomes, but also the potential pathways for getting there. Will ecological changes unfold gradually or all at once? By envisioning different pathways through which ecosystems might respond to climate change and other stressors, natural resource managers can identify critical moments where specific actions, such as removing tree seedlings encroaching into grasslands, can steer ecosystems toward a more desirable future. A toy simulation after this list shows how one early action can redirect a trajectory.

    3. Preparing for surprises: Planning for rare disasters or sudden species collapses helps managers respond nimbly when the unexpected strikes, such as a severe drought leading to widespread erosion. Being prepared for abrupt changes and having contingency plans can mean the difference between quickly helping an ecosystem recover and losing it entirely.
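
    The following is a hypothetical trajectory sketch for ingredient 2: woody encroachment into a grassland, with and without early intervention. The spread rate, treatment effect, and 10% action threshold are all invented for illustration.

    ```python
    # Hypothetical sketch: woody encroachment into grassland as a simple
    # annual trajectory, with an optional management trigger (Python 3.10+).

    def simulate(years: int, spread_rate: float, act_at: float | None) -> list[float]:
        """Track woody cover (fraction of area) year by year."""
        cover, history = 0.02, []
        for _ in range(years):
            cover = min(1.0, cover * (1 + spread_rate))  # compounding spread
            if act_at is not None and cover >= act_at:
                cover *= 0.25  # remove most encroaching seedlings and saplings
            history.append(round(cover, 3))
        return history

    # Same stressor, two very different trajectories:
    print("no action:", simulate(30, 0.15, None)[-1])         # grassland converts
    print("act at 10% cover:", simulate(30, 0.15, 0.10)[-1])  # grassland persists
    ```

    The point is not the invented numbers but the shape: change compounds gradually, and there is a window in which a cheap, well-timed action redirects the whole trajectory.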

    Over the past decade, access to climate model projections through easy-to-use websites has revolutionized resource managers’ ability to explore different scenarios of how the local climate might change.

    What managers are missing today is similar access to ecological model projections and tools that can help them anticipate possible changes in ecosystems. To bridge this gap, we believe the scientific community should prioritize developing ecological projections and decision-support tools that can empower managers to plan for ecological uncertainty with greater confidence and foresight.

    Ecological scenarios don’t eliminate uncertainty, but they can help to navigate it more effectively by identifying strategic actions to manage forests and other ecosystems.

    Kyra Clark-Wolf is a research scientist in ecological transformation at the University of Colorado Boulder.

    Brian W. Miller is a research ecologist at the U.S. Geological Survey.

    Imtiaz Rangwala is a research scientist in climate at the Cooperative Institute for Research in Environmental Sciences at the University of Colorado Boulder.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.
  • How microwave tech can help reclaim critical materials from e-waste

    When the computer or phone you’re using right now blinks its last blink and you drop it off for recycling, do you know what happens?

    At the recycling center, powerful magnets will pull out steel. Spinning drums will toss aluminum into bins. Copper wires will get neatly bundled up for resale. But as the conveyor belt keeps rolling, tiny specks of valuable, lesser-known materials such as gallium, indium, and tantalum will be left behind.

    Those tiny specks are critical materials. They’re essential for building new technology, and they’re in short supply in the U.S. They could be reused, but there’s a problem: Current recycling methods make recovering critical minerals from e-waste too costly or hazardous, so many recyclers simply skip them.

    Sadly, most of these hard-to-recycle materials end up buried in landfills or get mixed into products like cement. But it doesn’t have to be this way. New technology is starting to make a difference.

    As demand for these critical materials keeps growing, discarded electronics can become valuable resources. My colleagues and I at West Virginia University are developing a new technology to change how we recycle. Instead of using toxic chemicals, our approach uses electricity, making it safer, cleaner, and more affordable to recover critical materials from electronics.

    How much e-waste are we talking about?

    Americans generated about 2.7 million tons of electronic waste in 2018, according to the latest federal data. When uncounted electronics are included, the U.S. recycles only about 15% of its total e-waste, a United Nations survey suggests.

    Even worse, nearly half the electronics that people in North America send to recycling centers end up shipped overseas. They often land in scrapyards, where workers may use dangerous methods like burning or leaching with harsh chemicals to pull out valuable metals. These practices can harm both the environment and workers’ health. That’s why the Environmental Protection Agency restricts these methods in the U.S.

    The tiny specks matter

    Critical minerals are in most of the technology around you. Every phone screen has a super-thin layer of a material called indium tin oxide. LEDs glow because of a metal called gallium. Tantalum stores energy in tiny electronic parts called capacitors.

    All of these materials are flagged as “high risk” on the U.S. Department of Energy’s critical materials list. That means the U.S. relies heavily on these materials for important technologies, but their supply could easily be disrupted by conflicts, trade disputes, or shortages.

    Right now, just a few countries, including China, control most of the mining, processing, and recovery of these materials, making the U.S. vulnerable if those countries decide to limit exports or raise prices.

    These materials aren’t cheap, either. For example, the U.S. Geological Survey reports that gallium was priced between $220 and $500 per kilogram in 2024. That’s as much as 50 times more expensive than common metals like copper, which sold for about $9.48 per kilogram in 2024.
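
    The arithmetic behind that multiple, using the 2024 USGS prices just cited:

    ```python
    # Sanity check on the price comparison, using the 2024 USGS figures
    # quoted above: gallium at $220-$500/kg versus copper at $9.48/kg.
    gallium_low, gallium_high = 220.0, 500.0  # USD per kg
    copper = 9.48                             # USD per kg

    print(f"low end:  {gallium_low / copper:.0f}x the price of copper")   # ~23x
    print(f"high end: {gallium_high / copper:.0f}x the price of copper")  # ~53x
    ```

    The "roughly 50 times" comparison holds at the top of gallium's price range.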

    Revolutionizing recycling with microwaves

    At West Virginia University’s Department of Mechanical, Materials, and Aerospace Engineering, materials scientist Edward Sabolsky and I asked a simple question: Could we find a way to heat only specific parts of electronic waste to recover these valuable materials?

    If we could focus the heat on just the tiny specks of critical minerals, we might be able to recycle them easily and efficiently.

    The solution we found: microwaves.

    This equipment isn’t very different from the microwave ovens you use to heat food at home, just bigger and more powerful. The basic science is the same: Electromagnetic waves cause electrons to oscillate, creating heat.

    In our approach, though, we’re not heating water molecules like you do when cooking. Instead, we heat carbon, the black residue that collects around a candle flame or car tailpipe. Carbon heats up much faster in a microwave than water does. But don’t try this at home; your kitchen microwave wasn’t designed for such high temperatures.
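
    For readers curious about the physics, the standard dielectric-heating relation makes this contrast concrete. This is textbook microwave theory, not a formula specific to our process:

    $$ P \,=\, 2\pi f \,\varepsilon_0\, \varepsilon''_{\mathrm{eff}}\, E_{\mathrm{rms}}^{2} $$

    Here $P$ is the power absorbed per unit volume, $f$ is the field frequency (2.45 GHz in a household oven), $\varepsilon_0$ is the permittivity of free space, $\varepsilon''_{\mathrm{eff}}$ is the material's effective dielectric loss factor, and $E_{\mathrm{rms}}$ is the local electric-field strength. Carbon couples to the field far more strongly than the surrounding shredded waste, so the heat concentrates exactly where the reactions need it.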

    In our recycling method, we first shred the electronic waste, mix it with materials called fluxes that trap impurities, and then heat the mixture with microwaves. The microwaves rapidly heat the carbon that comes from the plastics and adhesives in the e-waste. This causes the carbon to react with the tiny specks of critical materials. The result: a tiny piece of pure, sponge-like metal about the size of a grain of rice.

    This metal can then be easily separated from leftover waste using filters.

    So far, in our laboratory tests, we have successfully recovered about 80% of the gallium, indium, and tantalum from e-waste, at purities between 95% and 97%. We have also demonstrated how it can be integrated with existing recycling processes.

    Why the Department of Defense is interested

    Our recycling technology got its start with help from a program funded by the Defense Department’s Advanced Research Projects Agency, or DARPA.

    Many important technologies, from radar systems to nuclear reactors, depend on these special materials. While the Department of Defense uses smaller quantities of them than the commercial market does, they are a national security concern.

    Next, we’re planning to launch larger pilot projects to test the method on smartphone circuit boards, LED lighting parts, and server cards from data centers. These tests will help us fine-tune the design for a bigger system that can recycle tons of e-waste per hour instead of just a few pounds. That could mean producing up to 50 pounds of these critical minerals from every ton of e-waste processed.
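
    A quick unit check on that projection, using only the figures in this article plus one hypothetical plant throughput:

    ```python
    # Unit check on the scale-up projection above: up to 50 lb of critical
    # minerals per U.S. short ton (2,000 lb) of e-waste processed.
    LB_PER_TON = 2000
    recovered_lb_per_ton = 50

    print(f"implied yield: {recovered_lb_per_ton / LB_PER_TON:.1%} of feed mass")  # 2.5%

    tons_per_hour = 5  # hypothetical throughput for a scaled-up system
    print(f"recovery at {tons_per_hour} t/h: {recovered_lb_per_ton * tons_per_hour} lb/hour")
    ```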

    If the technology works as expected, we believe this approach could help meet the nation’s demand for critical materials.

    How to make e-waste recycling common

    One way e-waste recycling could become more common is if Congress held electronics companies responsible for recycling their products and recovering the critical materials inside. Closing loopholes that allow companies to ship e-waste overseas, instead of processing it safely in the U.S., could also help build a reserve of recovered critical minerals.

    But the biggest change may come from simple economics. Once technology becomes available to recover these tiny but valuable specks of critical materials quickly and affordably, the U.S. can transform domestic recycling and take a big step toward solving its shortage of critical materials.

    Terence Musho is an associate professor of engineering at West Virginia University.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.
  • What DEI actually does for the economy

    Few issues in the U.S. today are as controversial as diversity, equity, and inclusion—commonly referred to as DEI.

    Although the term didn’t come into common usage until the 21st century, DEI is best understood as the latest stage in a long American project. Its egalitarian principles are seen in America’s founding documents, and its roots lie in landmark 20th-century efforts such as the 1964 Civil Rights Act and affirmative action policies, as well as movements for racial justice, gender equity, disability rights, and the rights of veterans and immigrants.

    These movements sought to expand who gets to participate in economic, educational, and civic life. DEI programs, in many ways, are their legacy.

    Critics argue that DEI is antidemocratic, that it fosters ideological conformity, and that it leads to discriminatory initiatives, which they say disadvantage white people and undermine meritocracy. Those defending DEI argue just the opposite: that it encourages critical thinking and promotes democracy—and that attacks on DEI amount to a retreat from long-standing civil rights law.

    Yet missing from much of the debate is a crucial question: What are the tangible costs and benefits of DEI? Who benefits, who doesn’t, and what are the broader effects on society and the economy?

    As a sociologist, I believe any productive conversation about DEI should be rooted in evidence, not ideology. So let’s look at the research.

    Who gains from DEI?

    In the corporate world, DEI initiatives are intended to promote diversity, and research consistently shows that diversity is good for business. Companies with more diverse teams tend to perform better across several key metrics, including revenue, profitability, and worker satisfaction.

    Businesses with diverse workforces also have an edge in innovation, recruitment, and competitiveness, research shows. The general trend holds across many dimensions of diversity, including age, race and ethnicity, and gender.

    A focus on diversity can also offer profit opportunities for businesses seeking new markets. Two-thirds of American consumers consider diversity when making their shopping choices, a 2021 survey found. So-called “inclusive consumers” tend to be female, younger, and more ethnically and racially diverse. Ignoring their values can be costly: When Target backed away from its DEI efforts, the resulting backlash contributed to a sales decline.

    But DEI goes beyond corporate policy. At its core, it’s about expanding access to opportunities for groups historically excluded from full participation in American life. From this broader perspective, many 20th-century reforms can be seen as part of the DEI arc.

    Consider higher education. Many elite U.S. universities refused to admit women until well into the 1960s and 1970s. Columbia, the last Ivy League university to go co-ed, started admitting women in 1982. Since the advent of affirmative action, women haven’t just closed the gender gap in higher education—they outpace men in college completion across all racial groups. DEI policies have particularly benefited women, especially white women, by expanding workforce access.

    Similarly, the push to desegregate American universities was followed by an explosion in the number of Black college students—a number that has increased by 125% since the 1970s, twice the national rate. With college gates open to more people than ever, overall enrollment at U.S. colleges has quadrupled since 1965. While there are many reasons for this, expanding opportunity no doubt plays a role. And a better-educated population has had significant implications for productivity and economic growth.

    The 1965 Immigration Act also exemplifies DEI’s impact. It abolished racial and national quotas, enabling the immigration of more diverse populations, including from Asia, Africa, southern and eastern Europe, and Latin America. Many of these immigrants were highly educated, and their presence has boosted U.S. productivity and innovation.

    Ultimately, the U.S. economy is more profitable and productive as a result of immigrants.

    What does DEI cost?

    While DEI generates returns for many businesses and institutions, it does come with costs. In 2020, corporate America spent an estimated $7.5 billion on DEI programs. And in 2023, the federal government spent millions of dollars more on DEI, with the Department of Health and Human Services and the Department of Defense among the biggest spenders.

    The government will no doubt be spending less on DEI in 2025. One of President Donald Trump’s first acts in his second term was to sign an executive order banning DEI practices in federal agencies—one of several anti-DEI executive orders currently facing legal challenges. More than 30 states have also introduced or enacted bills to limit or entirely restrict DEI in recent years. Central to many of these policies is the belief that diversity lowers standards, replacing meritocracy with mediocrity.

    But a large body of research disputes this claim. For example, a 2023 McKinsey & Company report found that companies with the highest levels of gender and ethnic diversity are 39% more likely to financially outperform those with the least diversity. Similarly, concerns that DEI in science and technology education leads to lowering standards aren’t backed up by scholarship. Instead, scholars are increasingly pointing out that disparities in performance are linked to built-in biases in courses themselves.

    That said, legal concerns about DEI are rising. The Equal Employment Opportunity Commission and the Department of Justice have recently warned employers that some DEI programs may violate Title VII of the Civil Rights Act of 1964. Anecdotal evidence suggests that reverse discrimination claims, particularly from white men, are increasing, and legal experts expect the Supreme Court to lower the burden of proof needed by complainants for such cases.

    The issue remains legally unsettled. But while the cases work their way through the courts, women and people of color will continue to shoulder much of the unpaid volunteer work that powers corporate DEI initiatives. This pattern raises important equity concerns within DEI itself.

    What lies ahead for DEI?

    People’s fears of DEI are partly rooted in demographic anxiety. Since the U.S. Census Bureau projected in 2008 that non-Hispanic white people would become a minority in the U.S. by 2042, nationwide news coverage has amplified white fears of displacement.

    Research indicates many white men experience this change as a crisis of identity and masculinity, particularly amid economic shifts such as the decline of blue-collar work. This perception aligns with research showing that white Americans are more likely to believe DEI policies disadvantage white men than white women.

    At the same time, in spite of DEI initiatives, women and people of color are most likely to be underemployed and living in poverty regardless of how much education they attain. The gender wage gap remains stark: In 2023, women working full time earned a median weekly salary of $1,005, compared with $1,202 for men—just 83.6% of what men earned. Over a 40-year career, that adds up to hundreds of thousands of dollars in lost earnings. For Black and Latina women, the disparities are even worse, with one source putting their lifetime losses at roughly $1 million or more.
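
    A back-of-the-envelope check on that career figure, using the 2023 Bureau of Labor Statistics medians cited above and the simplifying assumption that the weekly gap stays constant for 40 years:

    ```python
    # Arithmetic behind "hundreds of thousands of dollars over a 40-year
    # career," using the 2023 BLS median weekly earnings cited above and
    # assuming the gap holds constant (a simplification).
    women_weekly, men_weekly = 1_005, 1_202  # USD, median full-time weekly earnings

    print(f"pay ratio: {women_weekly / men_weekly:.1%}")  # ~83.6%

    weekly_gap = men_weekly - women_weekly           # $197 per week
    career_loss = weekly_gap * 52 * 40               # 52 weeks/yr x 40 years
    print(f"career earnings gap: ${career_loss:,}")  # $409,760
    ```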

    Racism, too, carries an economic toll. A 2020 analysis from Citi found that systemic racism has cost the U.S. economy $16 trillion since 2000. The same analysis found that addressing these disparities could have boosted Black wages by $2.7 trillion, added up to $113 billion in lifetime earnings through higher college enrollment, and generated $13 trillion in business revenue, creating 6.1 million jobs annually.

    In a moment of backlash and uncertainty, I believe DEI remains a vital if imperfect tool in the American experiment of inclusion. Rather than abandon it, the challenge now, from my perspective, is how to refine it: grounding efforts not in slogans or fear, but in fairness and evidence.

    Rodney Coates is a professor of critical race and ethnic studies at Miami University.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.
    #what #dei #actually #does #economy
    What DEI actually does for the economy
    Few issues in the U.S. today are as controversial as diversity, equity, and inclusion—commonly referred to as DEI. Although the term didn’t come into common usage until the 21st century, DEI is best understood as the latest stage in a long American project. Its egalitarian principles are seen in America’s founding documents, and its roots lie in landmark 20th-century efforts such as the 1964 Civil Rights Act and affirmative action policies, as well as movements for racial justice, gender equity, disability rights, veterans, and immigrants. These movements sought to expand who gets to participate in economic, educational, and civic life. DEI programs, in many ways, are their legacy. Critics argue that DEI is antidemocratic, that it fosters ideological conformity, and that it leads to discriminatory initiatives, which they say disadvantage white people and undermine meritocracy. Those defending DEI argue just the opposite: that it encourages critical thinking and promotes democracy—and that attacks on DEI amount to a retreat from long-standing civil rights law. Yet missing from much of the debate is a crucial question: What are the tangible costs and benefits of DEI? Who benefits, who doesn’t, and what are the broader effects on society and the economy? As a sociologist, I believe any productive conversation about DEI should be rooted in evidence, not ideology. So let’s look at the research. Who gains from DEI? In the corporate world, DEI initiatives are intended to promote diversity, and research consistently shows that diversity is good for business. Companies with more diverse teams tend to perform better across several key metrics, including revenue, profitability, and worker satisfaction. Businesses with diverse workforces also have an edge in innovation, recruitment, and competitiveness, research shows. The general trend holds for many types of diversity, including age, race, and ethnicity, and gender. A focus on diversity can also offer profit opportunities for businesses seeking new markets. Two-thirds of American consumers consider diversity when making their shopping choices, a 2021 survey found. So-called “inclusive consumers” tend to be female, younger, and more ethnically and racially diverse. Ignoring their values can be costly: When Target backed away from its DEI efforts, the resulting backlash contributed to a sales decline. But DEI goes beyond corporate policy. At its core, it’s about expanding access to opportunities for groups historically excluded from full participation in American life. From this broader perspective, many 20th-century reforms can be seen as part of the DEI arc. Consider higher education. Many elite U.S. universities refused to admit women until well into the 1960s and 1970s. Columbia, the last Ivy League university to go co-ed, started admitting women in 1982. Since the advent of affirmative action, women haven’t just closed the gender gap in higher education—they outpace men in college completion across all racial groups. DEI policies have particularly benefited women, especially white women, by expanding workforce access. Similarly, the push to desegregate American universities was followed by an explosion in the number of Black college students—a number that has increased by 125% since the 1970s, twice the national rate. With college gates open to more people than ever, overall enrollment at U.S. colleges has quadrupled since 1965. While there are many reasons for this, expanding opportunity no doubt plays a role. 
And a better-educated population has had significant implications for productivity and economic growth. The 1965 Immigration Act also exemplifies DEI’s impact. It abolished racial and national quotas, enabling the immigration of more diverse populations, including from Asia, Africa, southern and eastern Europe, and Latin America. Many of these immigrants were highly educated, and their presence has boosted U.S. productivity and innovation. Ultimately, the U.S. economy is more profitable and productive as a result of immigrants. What does DEI cost? While DEI generates returns for many businesses and institutions, it does come with costs. In 2020, corporate America spent an estimated billion on DEI programs. And in 2023, the federal government spent more than million on DEI, including million by the Department of Health and Human Services and another million by the Department of Defense. The government will no doubt be spending less on DEI in 2025. One of President Donald Trump’s first acts in his second term was to sign an executive order banning DEI practices in federal agencies—one of several anti-DEI executive orders currently facing legal challenges. More than 30 states have also introduced or enacted bills to limit or entirely restrict DEI in recent years. Central to many of these policies is the belief that diversity lowers standards, replacing meritocracy with mediocrity. But a large body of research disputes this claim. For example, a 2023 McKinsey & Company report found that companies with higher levels of gender and ethnic diversity will likely financially outperform those with the least diversity by at least 39%. Similarly, concerns that DEI in science and technology education leads to lowering standards aren’t backed up by scholarship. Instead, scholars are increasingly pointing out that disparities in performance are linked to built-in biases in courses themselves. That said, legal concerns about DEI are rising. The Equal Employment Opportunity Commission and the Department of Justice have recently warned employers that some DEI programs may violate Title VII of the Civil Rights Act of 1964. Anecdotal evidence suggests that reverse discrimination claims, particularly from white men, are increasing, and legal experts expect the Supreme Court to lower the burden of proof needed by complainants for such cases. The issue remains legally unsettled. But while the cases work their way through the courts, women and people of color will continue to shoulder much of the unpaid volunteer work that powers corporate DEI initiatives. This pattern raises important equity concerns within DEI itself. What lies ahead for DEI? People’s fears of DEI are partly rooted in demographic anxiety. Since the U.S. Census Bureau projected in 2008 that non-Hispanic white people would become a minority in the U.S by the year 2042, nationwide news coverage has amplified white fears of displacement. Research indicates many white men experience this change as a crisis of identity and masculinity, particularly amid economic shifts such as the decline of blue-collar work. This perception aligns with research showing that white Americans are more likely to believe DEI policies disadvantage white men than white women. At the same time, in spite of DEI initiatives, women and people of color are most likely to be underemployed and living in poverty regardless of how much education they attain. 
The gender wage gap remains stark: In 2023, women working full time earned a median weekly salary of $1,005, compared with $1,202 for men—just 83.6% of what men earned. Over a 40-year career, that weekly gap of $197 adds up to hundreds of thousands of dollars in lost earnings (roughly $197 × 52 weeks × 40 years ≈ $410,000). For Black and Latina women, the disparities are even worse, with one source estimating lifetime losses at $976,800 and $1.2 million, respectively.

    Racism, too, carries an economic toll. A 2020 analysis from Citi found that systemic racism has cost the U.S. economy $16 trillion since 2000. The same analysis found that addressing these disparities could have boosted Black wages by $2.7 trillion, added up to $113 billion in lifetime earnings through higher college enrollment, and generated $13 trillion in business revenue, creating 6.1 million jobs annually.

    In a moment of backlash and uncertainty, I believe DEI remains a vital if imperfect tool in the American experiment of inclusion. Rather than abandon it, the challenge now, from my perspective, is how to refine it: grounding efforts not in slogans or fear, but in fairness and evidence.

    Rodney Coates is a professor of critical race and ethnic studies at Miami University.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.
  • How white-tailed deer came back from the brink of extinction

    Given their abundance in American backyards, gardens and highway corridors these days, it may be surprising to learn that white-tailed deer were nearly extinct about a century ago. While they currently number somewhere in the range of 30 million to 35 million, at the turn of the 20th century, there were as few as 300,000 whitetails across the entire continent: just 1% of the current population.

    This near-disappearance of deer was much discussed at the time. In 1854, Henry David Thoreau had written that no deer had been hunted near Concord, Massachusetts, for a generation. In his famous “Walden,” he reported:

    “One man still preserves the horns of the last deer that was killed in this vicinity, and another has told me the particulars of the hunt in which his uncle was engaged. The hunters were formerly a numerous and merry crew here.”

    But what happened to white-tailed deer? What drove them nearly to extinction, and then what brought them back from the brink?

    As a historical ecologist and environmental archaeologist, I have made it my job to answer these questions. Over the past decade, I’ve studied white-tailed deer bones from archaeological sites across the eastern United States, as well as historical records and ecological data, to help piece together the story of this species.

    Precolonial rise of deer populations

White-tailed deer have been hunted since the earliest migrations of people into North America more than 15,000 years ago. The species was far from the most important food resource at that time, though.

    Archaeological evidence suggests that white-tailed deer abundance only began to increase after the extinction of megafauna species like mammoths and mastodons opened up ecological niches for deer to fill. Deer bones become very common in archaeological sites from about 6,000 years ago onward, reflecting the economic and cultural importance of the species for Indigenous peoples.

    Despite being so frequently hunted, deer populations do not seem to have appreciably declined due to Indigenous hunting prior to AD 1600. Unlike elk or sturgeon, whose numbers were reduced by Indigenous hunters and fishers, white-tailed deer seem to have been resilient to human predation. While archaeologists have found some evidence for human-caused declines in certain parts of North America, other cases are more ambiguous, and deer certainly remained abundant throughout the past several millennia.

    Human use of fire could partly explain why white-tailed deer may have been resilient to hunting. Indigenous peoples across North America have long used controlled burning to promote ecosystem health, disturbing old vegetation to promote new growth. Deer love this sort of successional vegetation for food and cover, and thus thrive in previously burned habitats. Indigenous people may have therefore facilitated deer population growth, counteracting any harmful hunting pressure.

    More research is needed, but even though some hunting pressure is evident, the general picture from the precolonial era is that deer seem to have been doing just fine for thousands of years. Ecologists estimate that there were roughly 30 million white-tailed deer in North America on the eve of European colonization—about the same number as today.

A 16th-century engraving depicts Indigenous Floridians hunting deer while disguised in deerskins. [Photo: Theodor de Bry/DEA Picture Library/De Agostini/Getty Images]

    Colonial-era fall of deer numbers

    To better understand how deer populations changed in the colonial era, I recently analyzed deer bones from two archaeological sites in what is now Connecticut. My analysis suggests that hunting pressure on white-tailed deer increased almost as soon as European colonists arrived.

At one site dated to the 11th to 14th centuries (before European colonization), I found that only about 7% to 10% of the deer killed were juveniles.

    Hunters generally don’t take juvenile deer if they’re frequently encountering adults, since adult deer tend to be larger, offering more meat and bigger hides. Additionally, hunting increases mortality on a deer herd but doesn’t directly affect fertility, so deer populations experiencing hunting pressure end up with juvenile-skewed age structures. For these reasons, this low percentage of juvenile deer prior to European colonization indicates minimal hunting pressure on local herds.

    However, at a nearby site occupied during the 17th century—just after European colonization—between 22% and 31% of the deer hunted were juveniles, suggesting a substantial increase in hunting pressure.

    This elevated hunting pressure likely resulted from the transformation of deer into a commodity for the first time. Venison, antlers and deerskins may have long been exchanged within Indigenous trade networks, but things changed drastically in the 17th century. European colonists integrated North America into a trans-Atlantic mercantile capitalist economic system with no precedent in Indigenous society. This applied new pressures to the continent’s natural resources.

    Deer—particularly their skins—were commodified and sold in markets in the colonies initially and, by the 18th century, in Europe as well. Deer were now being exploited by traders, merchants and manufacturers desiring profit, not simply hunters desiring meat or leather. It was the resulting hunting pressure that drove the species toward its extinction.

    20th-century rebound of white-tailed deer

    Thanks to the rise of the conservation movement in the late 19th and early 20th centuries, white-tailed deer survived their brush with extinction.

    Concerned citizens and outdoorsmen feared for the fate of deer and other wildlife, and pushed for new legislative protections.

    The Lacey Act of 1900, for example, banned interstate transport of poached game and—in combination with state-level protections—helped end commercial deer hunting by effectively de-commodifying the species. Aided by conservation-oriented hunting practices and reintroductions of deer from surviving populations to areas where they had been extirpated, white-tailed deer rebounded.

    The story of white-tailed deer underscores an important fact: Humans are not inherently damaging to the environment. Hunting from the 17th through 19th centuries threatened the existence of white-tailed deer, but precolonial Indigenous hunting and environmental management appear to have been relatively sustainable, and modern regulatory governance in the 20th century forestalled and reversed their looming extinction.

    Elic Weitzel, Peter Buck Postdoctoral Research Fellow, Smithsonian Institution

    This article is republished from The Conversation under a Creative Commons license. Read the original article.
  • Want to lower your dementia risk? Start by stressing less

The probability of any American having dementia in their lifetime may be far greater than previously thought. For instance, a 2025 study that tracked a large sample of American adults across more than three decades found that their average likelihood of developing dementia between ages 55 and 95 was 42%, and that figure was even higher among women, Black adults and those with genetic risk.

    Now, a great deal of attention is being paid to how to stave off cognitive decline in the aging American population. But what is often missing from this conversation is the role that chronic stress can play in how well people age from a cognitive standpoint, as well as everybody’s risk for dementia.

    We are professors at Penn State in the Center for Healthy Aging, with expertise in health psychology and neuropsychology. We study the pathways by which chronic psychological stress influences the risk of dementia and how it influences the ability to stay healthy as people age.

    Recent research shows that Americans who are currently middle-aged or older report experiencing more frequent stressful events than previous generations. A key driver behind this increase appears to be rising economic and job insecurity, especially in the wake of the 2007-2009 Great Recession and ongoing shifts in the labor market. Many people stay in the workforce longer due to financial necessity, as Americans are living longer and face greater challenges covering basic expenses in later life.

    Therefore, it may be more important than ever to understand the pathways by which stress influences cognitive aging.

    Social isolation and stress

    Although everyone experiences some stress in daily life, some people experience stress that is more intense, persistent or prolonged. It is this relatively chronic stress that is most consistently linked with poorer health.

    In a recent review paper, our team summarized how chronic stress is a hidden but powerful factor underlying cognitive aging, or the speed at which your cognitive performance slows down with age.

    It is hard to overstate the impact of stress on your cognitive health as you age. This is in part because your psychological, behavioral and biological responses to everyday stressful events are closely intertwined, and each can amplify and interact with the other.

    For instance, living alone can be stressful—particularly for older adults—and being isolated makes it more difficult to live a healthy lifestyle, as well as to detect and get help for signs of cognitive decline.

    Moreover, stressful experiences—and your reactions to them—can make it harder to sleep well and to engage in other healthy behaviors, like getting enough exercise and maintaining a healthy diet. In turn, insufficient sleep and a lack of physical activity can make it harder to cope with stressful experiences.

    Stress is often missing from dementia prevention efforts

A robust body of research highlights the importance of at least 14 different factors that relate to your risk of Alzheimer’s disease—a common and devastating form of dementia—and other forms of dementia. Although some of these factors may be outside of your control, such as diabetes or depression, many of them involve things that people do, such as physical activity, healthy eating and social engagement.

    What is less well-recognized is that chronic stress is intimately interwoven with all of these factors that relate to dementia risk. Our work and research by others that we reviewed in our recent paper demonstrate that chronic stress can affect brain function and physiology, influence mood and make it harder to maintain healthy habits. Yet, dementia prevention efforts rarely address stress.

    Avoiding stressful events and difficult life circumstances is typically not an option.

Where and how you live and work plays a major role in how much stress you experience. For example, people with lower incomes, less education or those living in disadvantaged neighborhoods often face more frequent stress and have fewer forms of support—such as nearby clinics, access to healthy food, reliable transportation or safe places to exercise or socialize—to help them manage the challenges of aging. As shown in recent work on brain health in rural and underserved communities, these conditions can shape whether people have the chance to stay healthy as they age.

    Over time, the effects of stress tend to build up, wearing down the body’s systems and shaping long-term emotional and social habits.

    Lifestyle changes to manage stress and lessen dementia risk

    The good news is that there are multiple things that can be done to slow or prevent dementia, and our review suggests that these can be enhanced if the role of stress is better understood.

Whether you are a young, midlife or older adult, it is not too early or too late to address the implications of stress for brain health and aging. Here are a few ways you can take direct action to help manage your level of stress:

Follow lifestyle behaviors that can improve healthy aging. These include following a healthy diet, engaging in physical activity and getting enough sleep. Even small changes in these domains can make a big difference.

    Prioritize your mental health and well-being to the extent you can. Things as simple as talking about your worries, asking for support from friends and family and going outside regularly can be immensely valuable.

    If your doctor says that you or someone you care about should follow a new health care regimen, or suggests there are signs of cognitive impairment, ask them what support or advice they have for managing related stress.

    If you or a loved one feel socially isolated, consider how small shifts could make a difference. For instance, research suggests that adding just one extra interaction a day—even if it’s a text message or a brief phone call—can be helpful, and that even interactions with people you don’t know well, such as at a coffee shop or doctor’s office, can have meaningful benefits.

    Walkable neighborhoods, lifelong learning

    A 2025 study identified stress as one of 17 overlapping factors that affect the odds of developing any brain disease, including stroke, late-life depression and dementia. This work suggests that addressing stress and overlapping issues such as loneliness may have additional health benefits as well.

    However, not all individuals or families are able to make big changes on their own. Research suggests that community-level and workplace interventions can reduce the risk of dementia. For example, safe and walkable neighborhoods and opportunities for social connection and lifelong learning—such as through community classes and events—have the potential to reduce stress and promote brain health.

Importantly, researchers have estimated that even a modest delay in the onset of Alzheimer’s disease would save hundreds of thousands of dollars for every American affected. Thus, providing incentives to companies that offer stress management resources could ultimately save money as well as help people age more healthfully.

    In addition, stress related to the stigma around mental health and aging can discourage people from seeking support that would benefit them. Even just thinking about your risk of dementia can be stressful in itself. Things can be done about this, too. For instance, normalizing the use of hearing aids and integrating reports of perceived memory and mental health issues into routine primary care and workplace wellness programs could encourage people to engage with preventive services earlier.

    Although research on potential biomedical treatments is ongoing and important, there is currently no cure for Alzheimer’s disease. However, if interventions aimed at reducing stress were prioritized in guidelines for dementia prevention, the benefits could be far-reaching, resulting in both delayed disease onset and improved quality of life for millions of people.

    Jennifer E. Graham-Engeland is a professor of biobehavioral health at Penn State.

    Martin J. Sliwinski is a professor of human development and family studies at Penn State.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.
  • This Cat Poop Parasite Can Decapitate Sperm—and It Might Be Fueling Infertility

Male fertility rates have been plummeting over the past half-century. An analysis from 1992 noted a steady decrease in sperm counts and quality since the 1940s. A more recent study found that male infertility rates increased nearly 80% from 1990 to 2019. The reasons driving this trend remain a mystery, but frequently cited culprits include obesity, poor diet, and environmental toxins.

    Infectious diseases such as gonorrhea or chlamydia are often overlooked factors that affect fertility in men. Accumulating evidence suggests that a common single-celled parasite called Toxoplasma gondii may also be a contributor: An April 2025 study showed for the first time that “human sperm lose their heads upon direct contact” with the parasite.

    I am a microbiologist, and my lab studies Toxoplasma. This new study bolsters emerging findings that underscore the importance of preventing this parasitic infection.

The many ways you can get toxoplasmosis

    Infected cats defecate Toxoplasma eggs into the litter box, garden or other places in the environment where they can be picked up by humans or other animals. Water, shellfish and unwashed fruits and vegetables can also harbor infectious parasite eggs. In addition to eggs, tissue cysts present in the meat of warm-blooded animals can spread toxoplasmosis as well if they are not destroyed by cooking to proper temperature.

    While most hosts of the parasite can control the initial infection with few if any symptoms, Toxoplasma remains in the body for life as dormant cysts in brain, heart and muscle tissue. These cysts can reactivate and cause additional episodes of severe illness that damage critical organ systems. Between 30% and 50% of the world’s population is permanently infected with Toxoplasma due to the many ways the parasite can spread.

    Toxoplasma can target male reproductive organs

    Upon infection, Toxoplasma spreads to virtually every organ and skeletal muscle. Evidence that Toxoplasma can also target human male reproductive organs first surfaced during the height of the AIDS pandemic in the 1980s, when some patients presented with the parasitic infection in their testes.

    While immunocompromised patients are most at risk for testicular toxoplasmosis, it can also occur in otherwise healthy individuals. Imaging studies of infected mice confirm that Toxoplasma parasites quickly travel to the testes in addition to the brain and eyes within days of infection.

    [Image: Toxoplasma cysts floating in cat feces. DPDx Image Library/CDC]

    In 2017, my colleagues and I found that Toxoplasma can also form cysts in mouse prostates. Researchers have also observed these parasites in the ejaculate of many animals, humans included, raising the possibility of sexual transmission.

    Knowing that Toxoplasma can reside in male reproductive organs has prompted analyses of fertility in infected men. A small 2021 study in Prague of 163 men infected with Toxoplasma found that over 86% had semen anomalies. A 2002 study in China found that infertile couples are more likely to have a Toxoplasma infection than fertile couples, 34.83% versus 12.11%. A 2005 study in China also found that sterile men are more likely to test positive for Toxoplasma than fertile men. Not all studies, however, find a link between toxoplasmosis and sperm quality.
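    To make the 2002 comparison concrete, the two seropositivity rates quoted above can be turned into an odds ratio. The sketch below is a back-of-envelope illustration using only those two percentages; it is not a statistic reported by the study itself.

    ```python
    # Back-of-envelope odds ratio from the seropositivity rates quoted above
    # (34.83% of infertile couples vs. 12.11% of fertile couples).
    p_infertile = 0.3483
    p_fertile = 0.1211

    def odds(p):
        return p / (1 - p)

    odds_ratio = odds(p_infertile) / odds(p_fertile)
    print(f"odds ratio ~ {odds_ratio:.1f}")  # ~3.9: infection is nearly four
                                             # times more likely, in odds terms
    ```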

    Toxoplasma can directly damage human sperm

    Toxoplasmosis in animals mirrors infection in humans, which allows researchers to address questions that are not easy to examine in people. Testicular function and sperm production are sharply diminished in Toxoplasma-infected mice, rats and rams. Infected mice have significantly lower sperm counts and a higher proportion of abnormally shaped sperm. In that April 2025 study, researchers from Germany, Uruguay, and Chile observed that Toxoplasma can reach the testes and epididymis, the tube where sperm mature and are stored, two days after infection in mice. This finding prompted the team to test what happens when the parasite comes into direct contact with human sperm in a test tube.

    After only five minutes of exposure to the parasite, 22.4% of sperm cells were beheaded. The number of decapitated sperm increased the longer they interacted with the parasites. Sperm cells that kept their heads were often twisted and misshapen. Some sperm cells had holes in their head, suggesting the parasites were trying to invade them as they would any other type of cell in the organs they infiltrate. In addition to direct contact, Toxoplasma may also damage sperm because the infection promotes chronic inflammation. Inflammatory conditions in the male reproductive tract are harmful to sperm production and function. The researchers speculate that the harmful effects Toxoplasma may have on sperm could be contributing to the large global declines in male fertility over the past decades.

    [Image: Sperm exposed to Toxoplasma. Arrows point to holes and other damage to the sperm; asterisks indicate where the parasite has burrowed. The two nonconfronted controls at the bottom show normal sperm. Rojas-Barón et al/The FEBS Journal, CC BY-SA]

    Preventing toxoplasmosis

    The evidence that Toxoplasma can infiltrate male reproductive organs in animals is compelling, but whether this produces health issues in people remains unclear. Testicular toxoplasmosis shows that parasites can invade human testes, but symptomatic disease is very rare. The studies to date showing defects in the sperm of infected men are too small to support firm conclusions.

    Additionally, some reports suggest that rates of toxoplasmosis in high-income countries have not been increasing over the past few decades even as male infertility rose, so the parasite is likely to be only one part of the puzzle. Regardless of this parasite’s potential effect on fertility, it is wise to avoid Toxoplasma. An infection can cause miscarriage or birth defects if someone acquires it for the first time during pregnancy, and it can be life-threatening for immunocompromised people. Toxoplasma is also the leading cause of death from foodborne illness in the United States. Taking proper care of your cat, promptly cleaning the litter box and thoroughly washing your hands afterward can help reduce your exposure to Toxoplasma. You can also protect yourself from this parasite by washing fruits and vegetables, cooking meat to proper temperatures before consuming it and avoiding raw shellfish, raw water and raw milk.

    Bill Sullivan is a professor of microbiology and immunology at Indiana University.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.
  • Researchers take a step toward carbon-capturing batteries

    What if there were a battery that could release energy while trapping carbon dioxide? This isn’t science fiction; it’s the promise of lithium-carbon dioxide (Li-CO₂) batteries, which are currently a hot research topic.

    Li-CO₂ batteries could be a two-in-one solution to the current problems of storing renewable energy and taking carbon emissions out of the air. They absorb carbon dioxide and convert it into a white powder called lithium carbonate while discharging energy.

    These batteries could have profound implications for cutting emissions from vehicles and industry—and might even enable long-duration missions on Mars, where the atmosphere is 95% CO₂.

    To make these batteries commercially viable, researchers have mainly been wrestling with problems related to recharging them. Now, our team at the University of Surrey has come up with a promising way forward. So how close are these “CO₂-breathing” batteries to becoming a practical reality?

    Like many great scientific breakthroughs, Li-CO₂ batteries were a happy accident. Slightly over a decade ago, a U.S.-French team of researchers were trying to address problems with lithium air batteries, another frontier energy-storage technology. Whereas today’s lithium-ion batteries generate power by moving and storing lithium ions within electrodes, lithium air batteries work by creating a chemical reaction between lithium and oxygen.

    The problem has been the “air” part, since even the tiny (0.04%) volume of CO₂ that’s found in air is enough to disrupt this careful chemistry, producing unwanted lithium carbonate (Li₂CO₃). As many battery scientists will tell you, the presence of Li₂CO₃ can also be a real pain in regular lithium-ion batteries, causing unhelpful side reactions and electrical resistance.

    Nonetheless, the scientists noticed something interesting about this CO₂ contamination: It increased the amount of charge the battery could store. From this point on, work began on intentionally adding CO₂ gas to batteries to take advantage of this, and the lithium-CO₂ battery was born.

    How it works

    The batteries’ great potential relates to the chemical reaction at the positive side of the battery, where small holes are cut in the casing to allow CO₂ gas in. There it dissolves in the liquid electrolyte (which allows the charge to move between the two electrodes) and reacts with lithium that has already been dissolved there. During this reaction, it’s believed that four electrons are exchanged between lithium ions and carbon dioxide.

    This electron transfer determines the theoretical charge that can be stored in the battery. In a normal lithium-ion battery, the positive electrode exchanges just one electron per reaction. (In lithium air batteries, it’s two to four electrons.) The greater exchange of electrons in the lithium-carbon dioxide battery, combined with the high voltage of the reaction, explains their potential to greatly outperform today’s lithium-ion batteries.
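    To see why the electron count matters, here is a rough Faraday’s-law estimate of theoretical capacity. The overall discharge reaction used below (4Li + 3CO₂ → 2Li₂CO₃ + C) is the one commonly proposed in the Li-CO₂ literature, not a figure from this article, so treat the numbers as an illustrative sketch.

    ```python
    # Rough theoretical-capacity estimate via Faraday's law, assuming the
    # commonly proposed overall discharge reaction 4Li + 3CO2 -> 2Li2CO3 + C.
    F = 96_485          # Faraday constant, coulombs per mole of electrons
    M_CO2 = 44.01       # molar mass of CO2, g/mol

    electrons_per_reaction = 4
    grams_co2_per_reaction = 3 * M_CO2   # three CO2 consumed per four electrons

    # Capacity per gram of CO2 consumed, in mAh/g (1 mAh = 3.6 C)
    capacity = electrons_per_reaction * F / (3.6 * grams_co2_per_reaction)
    print(f"~{capacity:.0f} mAh per gram of CO2")   # ~812 mAh/g

    # For scale, a graphite electrode in today's lithium-ion cells stores
    # ~372 mAh/g, the same order as the article's "2.5 times" comparison.
    ```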

    However, the technology has a few issues. The batteries don’t last very long. Commercial lithium-ion packs routinely survive 1,000 to 10,000 charging cycles; most Li-CO₂ prototypes fade after fewer than 100.

    They’re also difficult to recharge. This requires breaking down the lithium carbonate to release lithium and CO₂, which can be energy intensive. This energy requirement is a little like a hill that must be cycled up before the reaction can coast, and is known as overpotential.

    You can reduce this requirement by printing the right catalyst material on the porous positive electrode. Yet these catalysts are typically expensive and rare noble metals, such as ruthenium and platinum, making for a significant barrier to commercial viability.

    Our team has found an alternative catalyst, caesium phosphomolybdate, which is far cheaper and easy to manufacture at room temperature. This material made the batteries stable for 107 cycles, while also storing 2.5 times as much charge as an equivalent lithium-ion battery. And we significantly reduced the energy cost involved in breaking down lithium carbonate, for an overpotential of 0.67 volts, which is only about double what would be necessary in a commercial product.
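    To give a feel for what a 0.67-volt overpotential costs, the sketch below converts it into a round-trip voltage efficiency. The ~2.8 V equilibrium cell voltage is a typical literature value for Li-CO₂ cells, assumed here for illustration; the article itself gives no voltage figure.

    ```python
    # Illustrative round-trip voltage efficiency for a Li-CO2 cell.
    # ASSUMPTION: ~2.8 V equilibrium voltage (typical literature value,
    # not stated in the article).
    E_eq = 2.8            # V, assumed equilibrium cell voltage
    overpotential = 0.67  # V, extra charging voltage reported by the team

    # Charging at (E_eq + overpotential) while discharging near E_eq
    # dissipates the excess as heat on every cycle.
    efficiency = E_eq / (E_eq + overpotential)
    print(f"~{efficiency:.0%} voltage efficiency")  # ~81%
    ```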

    Our research team is now working to further reduce the cost of this technology by developing a catalyst that replaces caesium, since it’s the phosphomolybdate that is key. This could make the system more economically viable and scalable for widespread deployment.

    We also plan to study how the battery charges and discharges in real time. This will provide a clearer understanding of the internal mechanisms at work, helping to optimize performance and durability.

    A major focus of upcoming tests will be to evaluate how the battery performs under different CO₂ pressures. So far, the system has only been tested under idealized conditions (1 bar). If it can work at 0.1 bar of pressure, it will be feasible for car exhausts and gas boiler flues, meaning you could capture CO₂ while you drive or heat your home.

    Demonstrating that this works will be an important confirmation of commercial viability, though we would expect the battery’s charge capacity to fall at this pressure. By our rough calculations, 1kg of catalyst could absorb around 18.5kg of CO₂. Since a car driving 100 miles emits around 18kg to 20kg of CO₂, such a battery could potentially offset a day’s drive.
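    Restating that arithmetic explicitly, using only the figures quoted above:

    ```python
    # Sanity check of the offset estimate, restating the article's figures.
    co2_per_kg_catalyst = 18.5          # kg of CO2 absorbed per kg of catalyst
    co2_per_100_miles = (18 + 20) / 2   # kg of CO2 emitted per 100 miles driven

    catalyst_needed = co2_per_100_miles / co2_per_kg_catalyst
    print(f"~{catalyst_needed:.2f} kg of catalyst per 100-mile drive")  # ~1.03 kg
    ```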

    If the batteries work at 0.006 bar, the pressure of the Martian atmosphere, they could power anything from an exploration rover to a colony. At 0.0004 bar, the partial pressure of CO₂ in Earth’s ambient air, they could capture CO₂ from our atmosphere and store power anywhere. In all cases, the key question will be how the pressure affects the battery’s charge capacity.

    Meanwhile, to improve the battery’s number of recharge cycles, we need to address the fact that the electrolyte dries out. We’re currently investigating solutions, which probably involve developing casings that only CO₂ can move into. As for reducing the energy required for the catalyst to work, it’s likely to require optimizing the battery’s geometry to maximize the reaction rate—and to introduce a flow of CO₂, comparable to how fuel cells work (typically by feeding in hydrogen and oxygen).

    If this continued work can push the battery’s cycle life above 1,000 cycles, cut overpotential below 0.3 V, and replace scarce elements entirely, commercial Li-CO₂ packs could become reality. Our experiments will determine just how versatile and far-reaching the battery’s applications might be, from carbon capture on Earth to powering missions on Mars.

    Daniel Commandeur is a Surrey Future Fellow at the School of Chemistry & Chemical Engineering at the University of Surrey.

    Mahsa Masoudi is a PhD researcher of chemical engineering at the University of Surrey.

    Siddharth Gadkari is a lecturer in chemical process engineering at the University of Surrey.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.
  • In tiny rural towns, young entrepreneurs are using food to revitalize communities

    Visit just about any downtown on a weekend and you will likely happen upon a farmers market. Or, you might grab lunch from a food truck outside a local brewpub or winery.

    Very likely, there is a community-shared kitchen or food entrepreneur incubator initiative behind the scenes to support this growing foodie ecosystem.

    As rural America gains younger residents, and grows more diverse and increasingly digitally connected, these dynamics are driving a renaissance in craft foods.

    One food entrepreneur incubator, Hope & Main Kitchen, operates out of a school that sat vacant for over 10 years in the small Rhode Island town of Warren. Its business incubation program, with over 300 graduates to date, gives food and beverage entrepreneurs a way to test, scale and develop their products before investing in their own facilities. Its markets also give entrepreneurs a place to test their products with the public and with store buyers, while providing the community with local goods.

    Food has been central to culture, community and social connections for millennia. But food channels, social media food influencers and craft brews have paved the way for a renaissance of regional beverage and food industry startups across America.

    [Instagram post shared by Hope & Main: Culinary Incubator (@hopemain)]

    In my work in agriculture economics, I see connections between this boom in food and agriculture innovation and the inflow of young residents who are helping revitalize rural America and reinvigorate its Main Streets.

    Why entrepreneurs are embracing rural life

    An analysis of 2023 U.S. Census Bureau data found that more people have been moving to small towns and rural counties in recent years, and that the bulk of that population growth is driven by 25- to 44-year-olds.

    This represents a stark contrast to the 2000s, when 90% of the growth for younger demographics was concentrated in the largest metro areas.

    The COVID-19 pandemic and the shift to remote work options it created, along with rising housing prices, were catalysts for the change, but other interesting dynamics may also be at play.

    One is social connectedness. Sociologists have long believed that the community fabric of rural America contributes to economic efficiency, productive business activity, growth of communities and population health.

    Maps show that rural areas of the U.S. with higher social capital—those with strong networks and relationships among residents—are some of the strongest draws for younger households today.

    Another important dynamic for both rural communities and their new young residents is entrepreneurship, including food entrepreneurship.

    Rural food startups may be leveraging the social capital aligned with the legacy of agriculture in rural America, resulting in a renewed interest in craft and local foods. This includes a renaissance in foods made with local ingredients or linked to regional cultures and tastes.

    According to data from the National Agricultural Statistics Service, U.S. local sales of edible farm products increased 33% from 2017 to 2022, reaching $14.2 billion.

    The new ‘AgriCulture’

    A 2020 study I was involved in, led by agriculture economist Sarah Low, found a positive relationship between the availability of farm-based local and organic foods and complementary food startups. The study termed this new dynamic “AgriCulture.”

    We found a tendency for these dynamics to occur in areas with higher natural amenities, such as hiking trails and streams, along with transportation and broadband infrastructure attractive to digital natives.

    The same dynamic drawing young people to the outdoors offers digital natives a way to experience far-reaching regions of the country and, in some cases, move there.

    [Instagram post shared by Home Farm U-Pick & Events (@homefarmfamily)]

    A thriving food and beverage scene can be a pull for those who want to live in a vibrant community, or the new settlers and their diverse tastes may be what get food entrepreneurs started. Many urban necessities, such as shopping, can be done online, but eating and food shopping are local daily necessities.

    Governments can help rural food havens thrive

    When my colleagues and I talk to community leaders interested in attracting new industries and young families, or who seek to build community through revitalized downtowns and public spaces, the topic of food commonly arises.

    We encourage them to think about ways they can help draw food entrepreneurs: Can they increase local growers’ and producers’ access to food markets? Would creating shared kitchens help support food trucks and small businesses? Does their area have a local advantage, such as a seashore, hiking trails or cultural heritage, that they can market in connection with local food?

    [Photo: Daniel Price of the Daily Loaf helps a customer pick out bread at the West Reading Farmer's Market. Susan L. Angstadt/MediaNews Group/Reading Eagle via Getty Images]

    Several federal, state and local economic development programs are framing strategies to bolster any momentum occurring at the crossroads of rural life, social connections, resiliency, food and entrepreneurship.

    For example, a recent study from a collaboration of shared kitchen experts found that there were over 600 shared-use food facilities across the U.S. in 2020, and over 20% were in rural areas. In a survey of owners, the report found that 50% of respondents identified assisting early-growth businesses as their primary goal.

    [Instagram post shared by FEAST & FETTLE (@feastandfettle)]

    The USDA Regional Food Business Centers, one of which I am fortunate to co-lead, have been bolstering the networking and technical assistance to support these types of rural food economy efforts.

    Many rural counties are still facing shrinking workforces, commonly because of lagging legacy industries with declining employment, such as mining. However, recent data and studies suggest that in rural areas with strong social capital, community support and outdoor opportunities, younger populations are growing, and their food interests are helping boost rural economies.

    Dawn Thilmany is a professor of agricultural economics at Colorado State University.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.
    #tiny #rural #towns #young #entrepreneurs
    In tiny rural towns, young entrepreneurs are using food to revitalize communities
    Visit just about any downtown on a weekend and you will likely happen upon a farmers market. Or, you might grab lunch from a food truck outside a local brewpub or winery. Very likely, there is a community-shared kitchen or food entrepreneur incubator initiative behind the scenes to support this growing foodie ecosystem. As rural America gains younger residents, and grows more diverse and increasingly digitally connected, these dynamics are driving a renaissance in craft foods. One food entrepreneur incubator, Hope & Main Kitchen, operates out of a school that sat vacant for over 10 years in the small Rhode Island town of Warren. Its business incubation program, with over 300 graduates to date, gives food and beverage entrepreneurs a way to test, scale and develop their products before investing in their own facilities. Its markets also give entrepreneurs a place to test their products on the public and buyers for stores, while providing the community with local goods. Food has been central to culture, community and social connections for millennia. But food channels, social media food influencers and craft brews have paved the way for a renaissance of regional beverage and food industry startups across America. View this post on Instagram A post shared by Hope & Main: Culinary IncubatorIn my work in agriculture economics, I see connections between this boom in food and agriculture innovation and the inflow of young residents who are helping revitalize rural America and reinvigorate its Main Streets. Why entrepreneurs are embracing rural life An analysis of 2023 U.S. Census Bureau data found that more people have been moving to small towns and rural counties in recent years, and that the bulk of that population growth is driven by 25- to 44-year-olds. This represents a stark contrast to the 2000s, when 90% of the growth for younger demographics was concentrated in the largest metro areas. The COVID-19 pandemic and the shift to remote work options it created, along with rising housing prices, were catalysts for the change, but other interesting dynamics may also be at play. One is social connectedness. Sociologists have long believed that the community fabric of rural America contributes to economic efficiency, productive business activity, growth of communities and population health. Maps show that rural areas of the U.S. with higher social capital—those with strong networks and relationships among residents—are some of the strongest draws for younger households today. Another important dynamic for both rural communities and their new young residents is entrepreneurship, including food entrepreneurship. Rural food startups may be leveraging the social capital aligned with the legacy of agriculture in rural America, resulting in a renewed interest in craft and local foods. This includes a renaissance in foods made with local ingredients or linked to regional cultures and tastes. According to data from the National Agricultural Statistics Service, U.S. local sales of edible farm products increased 33% from 2017 to 2022, reaching billion. The new ‘AgriCulture’ A 2020 study I was involved in, led by agriculture economist Sarah Low, found a positive relationship between the availability of farm-based local and organic foods and complementary food startups. 
The study termed this new dynamic “AgriCulture.” We found a tendency for these dynamics to occur in areas with higher natural amenities, such as hiking trails and streams, along with transportation and broadband infrastructure attractive to digital natives. The same dynamic drawing young people to the outdoors offers digital natives a way to experience far-reaching regions of the country and, in some cases, move there. View this post on Instagram A post shared by Home Farm U-Pick & EventsA thriving food and beverage scene can be a pull for those who want to live in a vibrant community, or the new settlers and their diverse tastes may be what get food entrepreneurs started. Many urban necessities, such as shopping, can be done online, but eating and food shopping are local daily necessities. Governments can help rural food havens thrive When my colleagues and I talk to community leaders interested in attracting new industries and young families, or who seek to build community through revitalized downtowns and public spaces, the topic of food commonly arises. We encourage them to think about ways they can help draw food entrepreneurs: Can they increase local growers’ and producers’ access to food markets? Would creating shared kitchens help support food trucks and small businesses? Does their area have a local advantage, such as a seashore, hiking trails or cultural heritage, that they can market in connection with local food? West Reading Farmer's Market Daniel Price of the Daily Loaf helps a customer pick out bread.Several federal, state and local economic development programs are framing strategies to bolster any momentum occurring at the crossroads of rural, social connections, resiliency, food and entrepreneurship. For example, a recent study from a collaboration of shared kitchen experts found that there were over 600 shared-use food facilities across the U.S. in 2020, and over 20% were in rural areas. In a survey of owners, the report found that 50% of respondents identified assisting early-growth businesses as their primary goal. View this post on Instagram A post shared by FEAST & FETTLEThe USDA Regional Food Business Centers, one of which I am fortunate to co-lead, have been bolstering the networking and technical assistance to support these types of rural food economy efforts. Many rural counties are still facing shrinking workforces, commonly because of lagging legacy industries with declining employment, such as mining. However, recent data and studies suggest that in rural areas with strong social capital, community support and outdoor opportunities, younger populations are growing, and their food interests are helping boost rural economies. Dawn Thilmany is a professor of agricultural economics at Colorado State University. This article is republished from The Conversation under a Creative Commons license. Read the original article. #tiny #rural #towns #young #entrepreneurs
    In tiny rural towns, young entrepreneurs are using food to revitalize communities
    www.fastcompany.com
    Visit just about any downtown on a weekend and you will likely happen upon a farmers market. Or, you might grab lunch from a food truck outside a local brewpub or winery. Very likely, there is a community-shared kitchen or food entrepreneur incubator initiative behind the scenes to support this growing foodie ecosystem. As rural America gains younger residents, and grows more diverse and increasingly digitally connected, these dynamics are driving a renaissance in craft foods. One food entrepreneur incubator, Hope & Main Kitchen, operates out of a school that sat vacant for over 10 years in the small Rhode Island town of Warren. Its business incubation program, with over 300 graduates to date, gives food and beverage entrepreneurs a way to test, scale and develop their products before investing in their own facilities. Its markets also give entrepreneurs a place to test their products on the public and buyers for stores, while providing the community with local goods. Food has been central to culture, community and social connections for millennia. But food channels, social media food influencers and craft brews have paved the way for a renaissance of regional beverage and food industry startups across America. View this post on Instagram A post shared by Hope & Main: Culinary Incubator (@hopemain) In my work in agriculture economics, I see connections between this boom in food and agriculture innovation and the inflow of young residents who are helping revitalize rural America and reinvigorate its Main Streets. Why entrepreneurs are embracing rural life An analysis of 2023 U.S. Census Bureau data found that more people have been moving to small towns and rural counties in recent years, and that the bulk of that population growth is driven by 25- to 44-year-olds. This represents a stark contrast to the 2000s, when 90% of the growth for younger demographics was concentrated in the largest metro areas. The COVID-19 pandemic and the shift to remote work options it created, along with rising housing prices, were catalysts for the change, but other interesting dynamics may also be at play. One is social connectedness. Sociologists have long believed that the community fabric of rural America contributes to economic efficiency, productive business activity, growth of communities and population health. Maps show that rural areas of the U.S. with higher social capital—those with strong networks and relationships among residents—are some of the strongest draws for younger households today. Another important dynamic for both rural communities and their new young residents is entrepreneurship, including food entrepreneurship. Rural food startups may be leveraging the social capital aligned with the legacy of agriculture in rural America, resulting in a renewed interest in craft and local foods. This includes a renaissance in foods made with local ingredients or linked to regional cultures and tastes. According to data from the National Agricultural Statistics Service, U.S. local sales of edible farm products increased 33% from 2017 to 2022, reaching $14.2 billion. The new ‘AgriCulture’ A 2020 study I was involved in, led by agriculture economist Sarah Low, found a positive relationship between the availability of farm-based local and organic foods and complementary food startups. 
The study termed this new dynamic “AgriCulture.” We found a tendency for these dynamics to occur in areas with higher natural amenities, such as hiking trails and streams, along with transportation and broadband infrastructure attractive to digital natives. The same dynamic drawing young people to the outdoors offers digital natives a way to experience far-reaching regions of the country and, in some cases, move there. View this post on Instagram A post shared by Home Farm U-Pick & Events (@homefarmfamily) A thriving food and beverage scene can be a pull for those who want to live in a vibrant community, or the new settlers and their diverse tastes may be what get food entrepreneurs started. Many urban necessities, such as shopping, can be done online, but eating and food shopping are local daily necessities. Governments can help rural food havens thrive When my colleagues and I talk to community leaders interested in attracting new industries and young families, or who seek to build community through revitalized downtowns and public spaces, the topic of food commonly arises. We encourage them to think about ways they can help draw food entrepreneurs: Can they increase local growers’ and producers’ access to food markets? Would creating shared kitchens help support food trucks and small businesses? Does their area have a local advantage, such as a seashore, hiking trails or cultural heritage, that they can market in connection with local food? West Reading Farmer's Market Daniel Price of the Daily Loaf helps a customer pick out bread. [Photo By Susan L. Angstadt/MediaNews Group/Reading Eagle via Getty Images] Several federal, state and local economic development programs are framing strategies to bolster any momentum occurring at the crossroads of rural, social connections, resiliency, food and entrepreneurship. For example, a recent study from a collaboration of shared kitchen experts found that there were over 600 shared-use food facilities across the U.S. in 2020, and over 20% were in rural areas. In a survey of owners, the report found that 50% of respondents identified assisting early-growth businesses as their primary goal. View this post on Instagram A post shared by FEAST & FETTLE (@feastandfettle) The USDA Regional Food Business Centers, one of which I am fortunate to co-lead, have been bolstering the networking and technical assistance to support these types of rural food economy efforts. Many rural counties are still facing shrinking workforces, commonly because of lagging legacy industries with declining employment, such as mining. However, recent data and studies suggest that in rural areas with strong social capital, community support and outdoor opportunities, younger populations are growing, and their food interests are helping boost rural economies. Dawn Thilmany is a professor of agricultural economics at Colorado State University. This article is republished from The Conversation under a Creative Commons license. Read the original article.
  • Does Light Traveling Through Space Wear Out?

    Jarred Roberts, The Conversation

    Published May 25, 2025


The iconic Pinwheel Galaxy, located 25 million light-years away. Hubble Image: NASA, ESA, K. Kuntz (JHU), F. Bresolin (University of Hawaii), J. Trauger (Jet Propulsion Lab), J. Mould (NOAO), Y.-H. Chu (University of Illinois, Urbana) and STScI; CFHT Image: Canada-France-Hawaii Telescope/J.-C. Cuillandre/Coelum; NOAO Image: G. Jacoby, B. Bohannan, M. Hanna/NOAO/AURA/NSF

My telescope, set up for astrophotography in my light-polluted San Diego backyard, was pointed at a galaxy unfathomably far from Earth. My wife, Cristina, walked up just as the first space photo streamed to my tablet. It sparkled on the screen in front of us. “That’s the Pinwheel galaxy,” I said. The name is derived from its shape, though this pinwheel contains about a trillion stars. The light from the Pinwheel traveled for 25 million years across the universe–about 150 quintillion miles–to get to my telescope. My wife wondered: “Doesn’t light get tired during such a long journey?” Her curiosity triggered a thought-provoking conversation about light. Ultimately, why doesn’t light wear out and lose energy over time?

Let’s talk about light

I am an astrophysicist, and one of the first things I learned in my studies is how light often behaves in ways that defy our intuitions. Light is electromagnetic radiation: basically, an electric wave and a magnetic wave coupled together and traveling through space-time. It has no mass. That point is critical because the mass of an object, whether a speck of dust or a spaceship, limits the top speed it can travel through space. But because light is massless, it’s able to reach the maximum speed limit in a vacuum–about 186,000 miles (300,000 kilometers) per second, or almost 6 trillion miles (9.6 trillion kilometers) per year. Nothing traveling through space is faster. To put that into perspective: In the time it takes you to blink your eyes, a particle of light travels around the circumference of the Earth more than twice.

As incredibly fast as that is, space is incredibly spread out. Light from the Sun, which is 93 million miles (about 150 million kilometers) from Earth, takes just over eight minutes to reach us. In other words, the sunlight you see is eight minutes old. Alpha Centauri, the nearest star to us after the Sun, is 26 trillion miles (about 41 trillion kilometers) away. So by the time you see it in the night sky, its light is just over four years old. Or, as astronomers say, it’s four light years away.
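Those travel times are easy to verify yourself. Here is a minimal back-of-the-envelope sketch in Python, using the rounded figures quoted above rather than precise astronomical constants:

```python
# Back-of-the-envelope check of the light-travel figures quoted above.
# All constants are the article's rounded values, not precision data.

C_MILES_PER_SEC = 186_000                                 # speed of light in a vacuum
MILES_PER_LIGHT_YEAR = C_MILES_PER_SEC * 3600 * 24 * 365.25   # ~5.9 trillion miles

# Sunlight's trip: 93 million miles at light speed.
sun_minutes = 93e6 / C_MILES_PER_SEC / 60
print(f"Sun to Earth: {sun_minutes:.1f} minutes")         # ~8.3 minutes

# Alpha Centauri: 26 trillion miles expressed in light years.
print(f"Alpha Centauri: {26e12 / MILES_PER_LIGHT_YEAR:.1f} light years")  # ~4.4

# The Pinwheel galaxy: 25 million light years converted back into miles.
print(f"Pinwheel: {25e6 * MILES_PER_LIGHT_YEAR:.2e} miles")  # ~1.5e20, i.e. ~150 quintillion
```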

[Animation: a trip around the world at the speed of light.]

With those enormous distances in mind, consider Cristina’s question: How can light travel across the universe and not slowly lose energy? Actually, some light does lose energy. This happens when it bounces off something, such as interstellar dust, and is scattered about. But most light just goes and goes, without colliding with anything. This is almost always the case because space is mostly empty–nothingness. So there’s nothing in the way. When light travels unimpeded, it loses no energy. It can maintain that 186,000-mile-per-second speed forever.

It’s about time

Here’s another concept: Picture yourself as an astronaut on board the International Space Station. You’re orbiting at 17,000 miles (about 27,000 kilometers) per hour. Compared with someone on Earth, your wristwatch will tick 0.01 seconds slower over one year. That’s an example of time dilation–time moving at different speeds under different conditions. If you’re moving really fast, or are close to a large gravitational field, your clock will tick more slowly than that of someone moving slower than you, or who is farther from a large gravitational field. To say it succinctly, time is relative.
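That 0.01-second figure can be recovered from the standard special-relativity formula. A minimal sketch, using the low-speed approximation of the Lorentz factor and ignoring the smaller, opposing gravitational effect of orbiting above Earth:

```python
# Estimate how far an ISS clock falls behind an Earth clock in one year,
# from velocity time dilation alone (gravitational effects ignored).
C = 299_792_458.0            # speed of light, m/s
ISS_SPEED = 7_600.0          # ~17,000 mph expressed in m/s
SECONDS_PER_YEAR = 365.25 * 24 * 3600

# Lorentz factor: gamma = 1 / sqrt(1 - v^2/c^2).
# For v << c this is approximately 1 + v^2 / (2 c^2), so a moving clock
# runs slow by a fraction of about v^2 / (2 c^2).
fractional_lag = ISS_SPEED**2 / (2 * C**2)

print(f"ISS clock lags by ~{fractional_lag * SECONDS_PER_YEAR:.3f} s per year")  # ~0.010
```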

[Photo: Even astronauts aboard the International Space Station experience time dilation, although the effect is extremely small. NASA]

Now consider that light is inextricably connected to time. Picture sitting on a photon, a fundamental particle of light; here, you’d experience maximum time dilation. Everyone on Earth would clock you at the speed of light, but from your reference frame, time would completely stop. That’s because the “clocks” measuring time are in two different places going vastly different speeds: the photon moving at the speed of light, and the comparatively slowpoke speed of Earth going around the Sun.

What’s more, when you’re traveling at or close to the speed of light, the distance between where you are and where you’re going gets shorter. That is, space itself becomes more compact in the direction of motion–so the faster you go, the shorter your journey has to be. In other words, for the photon, space gets squished.

Which brings us back to my picture of the Pinwheel galaxy. From the photon’s perspective, a star within the galaxy emitted it, and then a single pixel in my backyard camera absorbed it, at exactly the same time. Because space is squished, to the photon the journey was infinitely fast and infinitely short, a tiny fraction of a second. But from our perspective on Earth, the photon left the galaxy 25 million years ago and traveled 25 million light years across space until it landed on my tablet in my backyard.
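Both effects, the slowing clock and the shrinking distance, are governed by the same Lorentz factor, which grows without bound as speed approaches the speed of light. A minimal sketch (the sample speeds are chosen for illustration; they are not from the article):

```python
import math

JOURNEY_LY = 25e6            # Earth-frame distance to the Pinwheel, in light years

# At a fraction f of light speed, moving clocks run slow by the factor
# gamma = 1 / sqrt(1 - f^2), and distances along the direction of motion
# contract by that same factor.
for f in (0.5, 0.9, 0.999, 0.999999999):
    gamma = 1.0 / math.sqrt(1.0 - f * f)
    print(f"v = {f} c: gamma = {gamma:.1f}, "
          f"trip as seen by the traveler: {JOURNEY_LY / gamma:.4g} light years")

# As f -> 1 the factor diverges: the contracted distance approaches zero
# and the traveler's elapsed time approaches zero, which is the sense in
# which the photon's trip is, from its own frame, instantaneous.
```

Strictly speaking, the factor is undefined at exactly the speed of light; anything with mass can only approach that limit, which is why the photon’s frozen-time perspective has no massive counterpart.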

And there, on a cool spring night, its stunning image inspired a delightful conversation between a nerdy scientist and his curious wife.

Jarred Roberts is a project scientist at the University of California, San Diego. This article is republished from The Conversation under a Creative Commons license. Read the original article.

    gizmodo.com