• What in the world are we doing? Scientists at the Massachusetts Institute of Technology have come up with this mind-boggling idea of creating an AI model that "never stops learning." Seriously? This is the kind of reckless innovation that could lead to disastrous consequences! Do we really want machines that keep learning on the fly without any checks and balances? Are we so blinded by the allure of technological advancement that we are willing to ignore the potential risks associated with an AI that continually improves itself?

    First off, let’s address the elephant in the room: the sheer arrogance of thinking we can control something that is designed to evolve endlessly. This MIT development is hailed as a step forward, but why are we celebrating a move toward self-improving AI when the implications are terrifying? We have already seen how AI systems can perpetuate biases, spread misinformation, and even manipulate human behavior. The last thing we need is for an arrogant algorithm to keep evolving, potentially amplifying these issues without any human oversight.

    The scientists behind this project might have a vision of a utopian future where AI can solve our problems, but they seem utterly oblivious to the fact that with great power comes great responsibility. Who is going to regulate this relentless learning process? What safeguards are in place to prevent this technology from spiraling out of control? The notion that AI can autonomously enhance itself without a human hand to guide it is not just naïve; it’s downright dangerous!

    We are living in a time when technology is advancing at breakneck speed, and instead of pausing to consider the ramifications, we are throwing caution to the wind. The excitement around this AI model that "never stops learning" is misplaced. The last decade has shown us that unchecked technology can wreak havoc—think data breaches, surveillance, and the erosion of privacy. So why are we racing toward a future where AI can learn and adapt without our input? Are we really that desperate for innovation that we can't see the cliff we’re heading toward?

    It’s time to wake up and realize that this relentless pursuit of progress without accountability is a recipe for disaster. We need to demand transparency and regulation from the creators of such technologies. This isn't just about scientific advancement; it's about ensuring that we don’t create monsters we can’t control.

    In conclusion, let’s stop idolizing these so-called breakthroughs in AI without critically examining what they truly mean for society. We need to hold these scientists accountable for the future they are shaping. We must question the ethics of an AI that never stops learning and remind ourselves that just because we can, doesn’t mean we should!

    #AI #MIT #EthicsInTech #Accountability #FutureOfAI
    This AI Model Never Stops Learning
    Scientists at the Massachusetts Institute of Technology have devised a way for large language models to keep learning on the fly—a step toward building AI that continually improves itself.
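    For readers unfamiliar with the term, "learning on the fly" generally refers to online or continual learning: a model's weights keep being updated as new data arrives, rather than being frozen after training. The sketch below is a deliberately tiny, generic illustration of that loop in PyTorch. It is not the MIT team's method (which the post above does not describe); the toy model, byte-level tokenization, and sample stream are all illustrative assumptions.

```python
# A minimal, hypothetical sketch of "learning on the fly": a tiny byte-level
# language model whose weights are updated on every newly observed piece of text.
# This illustrates continual/online learning in general, NOT the method in the
# MIT work referenced above.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB = 256  # byte-level "tokenizer" for simplicity


class TinyLM(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, VOCAB)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)


def online_update(model, optimizer, text: str) -> float:
    """One incremental gradient step on a newly observed piece of text."""
    ids = torch.tensor([list(text.encode("utf-8"))])
    logits = model(ids[:, :-1])
    loss = F.cross_entropy(logits.reshape(-1, VOCAB), ids[:, 1:].reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


model = TinyLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# The model never "stops learning": every new observation nudges its weights.
stream = ["the network is up", "latency is rising", "the network recovered"]
for step, observation in enumerate(stream):
    loss = online_update(model, opt, observation)
    print(f"step {step}: loss={loss:.3f}")
```

    Real systems add safeguards this sketch omits, such as validating an update before keeping it and guarding against catastrophic forgetting, which is exactly where the oversight questions raised in the post come in.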
  • Trump’s military parade is a warning

    Donald Trump’s military parade in Washington this weekend — a show of force in the capital that just happens to take place on the president’s birthday — smacks of authoritarian Dear Leader-style politics (even though Trump actually got the idea after attending the 2017 Bastille Day parade in Paris). Yet as disconcerting as the imagery of tanks rolling down Constitution Avenue will be, it’s not even close to Trump’s most insidious assault on the US military’s historic and democratically essential nonpartisan ethos. In fact, it’s not even the most worrying thing he’s done this week.
    On Tuesday, the president gave a speech at Fort Bragg, an Army base home to Special Operations Command. While presidential speeches to soldiers are not uncommon — rows of uniformed troops make a great backdrop for a foreign policy speech — they generally avoid overt partisan attacks and campaign-style rhetoric. The soldiers, for their part, are expected to be studiously neutral, laughing at jokes and such, but remaining fully impassive during any policy conversation.
    That’s not what happened at Fort Bragg. Trump’s speech was a partisan tirade that targeted “radical left” opponents ranging from Joe Biden to Los Angeles Mayor Karen Bass. He celebrated his deployment of Marines to Los Angeles, proposed jailing people for burning the American flag, and called on soldiers to be “aggressive” toward the protesters they encountered.
    The soldiers, for their part, cheered Trump and booed his enemies — as they were seemingly expected to. Reporters at Military.com, a military news service, uncovered internal communications from 82nd Airborne leadership suggesting that the crowd was screened for their political opinions. “If soldiers have political views that are in opposition to the current administration and they don’t want to be in the audience then they need to speak with their leadership and get swapped out,” one note read.
    To call this unusual is an understatement. I spoke with four different experts on civil-military relations, two of whom teach at the Naval War College, about the speech and its implications. To a person, they said it was a step towards politicizing the military with no real precedent in modern American history.
    “That is, I think, a really big red flag because it means the military’s professional ethic is breaking down internally,” says Risa Brooks, a professor at Marquette University. “Its capacity to maintain that firewall against civilian politicization may be faltering.”
    This may sound alarmist — like an overreading of a one-off incident — but it’s part of a bigger pattern. The totality of Trump administration policies, ranging from the parade in Washington to the LA troop deployment to Secretary of Defense Pete Hegseth’s firing of high-ranking women and officers of color, suggests a concerted effort to erode the military’s professional ethos and turn it into an institution subservient to the Trump administration’s whims. This is a signal policy aim of would-be dictators, who wish to head off the risk of a coup and ensure the armed forces’ political reliability if they are needed to repress dissent in a crisis.
    Steve Saideman, a professor at Carleton University, put together a list of eight different signs that a military is being politicized in this fashion. The Trump administration has exhibited six out of the eight. “The biggest theme is that we are seeing a number of checks on the executive fail at the same time — and that’s what’s making individual events seem more alarming than they might otherwise,” says Jessica Blankshain, a professor at the Naval War College (speaking not for the military but in a personal capacity).
    That Trump is trying to politicize the military does not mean he has succeeded. There are several signs, including Trump’s handpicked chair of the Joint Chiefs repudiating the president’s claims of a migrant invasion during congressional testimony, that the US military is resisting Trump’s politicization. But the events in Fort Bragg and Washington suggest that we are in the midst of a quiet crisis in civil-military relations in the United States — one whose implications for American democracy’s future could well be profound.
    The Trump crisis in civil-military relations, explained
    A military is, by sheer fact of its existence, a threat to any civilian government. If you have an institution that controls the overwhelming bulk of weaponry in a society, it always has the physical capacity to seize control of the government at gunpoint. A key question for any government is how to convince the armed forces that they cannot or should not take power for themselves.
    Democracies typically do this through a process called “professionalization.” Soldiers are rigorously taught to think of themselves as a class of public servants, people trained to perform a specific job within defined parameters. Their ultimate loyalty is not to their generals or even individual presidents, but rather to the people and the constitutional order.
    Samuel Huntington, the late Harvard political scientist, is the canonical theorist of a professional military. In his book The Soldier and the State, he described optimal professionalization as a system of “objective control”: one in which the military retains autonomy in how they fight and plan for wars while deferring to politicians on whether and why to fight in the first place. In effect, they stay out of the politicians’ affairs while the politicians stay out of theirs.
    The idea of such a system is to emphasize to the military that they are professionals: Their responsibility isn’t deciding when to use force, but only to conduct operations as effectively as possible once ordered to engage in them. There is thus a strict firewall between military affairs, on the one hand, and policy-political affairs on the other.
    Typically, the chief worry is that the military breaches this bargain: that, for example, a general starts speaking out against elected officials’ policies in ways that undermine civilian control. This is not a hypothetical fear in the United States, with the most famous such example being Gen. Douglas MacArthur’s insubordination during the Korean War. Thankfully, not even MacArthur attempted the worst-case version of military overstep — a coup.
    But in backsliding democracies like the modern United States, where the chief executive is attempting an anti-democratic power grab, the military poses a very different kind of threat to democracy — in fact, something akin to the exact opposite of the typical scenario. In such cases, the issue isn’t the military inserting itself into politics but rather the civilians dragging them into it in ways that upset the democratic political order. The worst-case scenario is that the military acts on presidential directives to use force against domestic dissenters, destroying democracy not by ignoring civilian orders, but by following them.
    There are two ways to arrive at such a worst-case scenario, both of which are in evidence in the early days of Trump 2.0.
    First is politicization: an intentional attack on the constraints against partisan activity inside the professional ranks. Many of Pete Hegseth’s major moves as secretary of defense fit this bill, including his decisions to fire nonwhite and female generals seen as politically unreliable and his effort to undermine the independence of the military’s lawyers. The breaches in protocol at Fort Bragg are both consequences and causes of politicization: They could only happen in an environment of loosened constraint, and they might encourage more overt political action if gone unpunished.
    The second pathway to breakdown is the weaponization of professionalism against itself. Here, Trump exploits the military’s deference to politicians by ordering it to engage in undemocratic (and even questionably legal) activities. In practice, this looks a lot like the LA deployments, and, more specifically, the lack of any visible military pushback. While the military readily agreeing to deployments is normally a good sign — that civilian control is holding — these aren’t normal times. And this isn’t a normal deployment, but rather one that comes uncomfortably close to the military being ordered to assist in repressing overwhelmingly peaceful demonstrations against executive abuses of power.
    “It’s really been pretty uncommon to use the military for law enforcement,” says David Burbach, another Naval War College professor (also speaking personally). “This is really bringing the military into frontline law enforcement when … these are really not huge disturbances.”
    This, then, is the crisis: an incremental and slow-rolling effort by the Trump administration to erode the norms and procedures designed to prevent the military from being used as a tool of domestic repression.
    Is it time to panic?
    Among the experts I spoke with, there was consensus that the military’s professional and nonpartisan ethos was weakening. This isn’t just because of Trump, but his terms — the first to a degree, and now the second acutely — are major stressors. Yet there was no consensus on just how much military nonpartisanship has eroded — that is, how close we are to a moment when the US military might be willing to follow obviously authoritarian orders.
    For all its faults, the US military’s professional ethos is a really important part of its identity and self-conception. While few soldiers may actually read Sam Huntington or similar scholars, the general idea that they serve the people and the republic is a bedrock principle among the ranks. There is a reason why the United States has never, in over 250 years of governance, experienced a military coup — or even come particularly close to one.
    In theory, this ethos should also galvanize resistance to Trump’s efforts at politicization. Soldiers are not unthinking automatons: While they are trained to follow commands, they are explicitly obligated to refuse illegal orders, even coming from the president. The more aggressive Trump’s efforts to use the military as a tool of repression get, the more likely there is to be resistance. Or, at least theoretically.
    The truth is that we don’t really know how the US military will respond to a situation like this. Like so many of Trump’s second-term policies, the administration’s efforts to bend the military to its will are unprecedented — actions with no real parallel in the modern history of the American military. Experts can only make informed guesses, based on their sense of US military culture as well as comparisons to historical and foreign cases.
    For this reason, there are probably only two things we can say with confidence.
    First, what we’ve seen so far is not yet sufficient evidence to declare that the military is in Trump’s thrall. The signs of decay are too limited to ground any conclusions that the longstanding professional norm is entirely gone. “We have seen a few things that are potentially alarming about erosion of the military’s non-partisan norm. But not in a way that’s definitive at this point,” Blankshain says.
    Second, the stressors on this tradition are going to keep piling on. Trump’s record makes it exceptionally clear that he wants the military to serve him personally — and that he, and Hegseth, will keep working to make it so. This means we really are in the midst of a quiet crisis, and will likely remain so for the foreseeable future.
    “The fact that he’s getting the troops to cheer for booing Democratic leaders at a time when there’s actually [a deployment to] a blue city and a blue state … he is ordering the troops to take a side,” Saideman says. “There may not be a coherent plan behind this. But there are a lot of things going on that are all in the same direction.”
    #trumps #military #parade #warning
  • From Networks to Business Models, AI Is Rewiring Telecom

    Artificial intelligence is already rewriting the rules of wireless and telecom — powering predictive maintenance, streamlining network operations, and enabling more innovative services.
    As AI scales, the disruption will be faster, deeper, and harder to reverse than any prior shift in the industry.
    Compared to the sweeping changes AI is set to unleash, past telecom innovations look incremental.
    AI is redefining how networks operate, services are delivered, and data is secured — across every device and digital touchpoint.
    AI Is Reshaping Wireless Networks Already
    Artificial intelligence is already transforming wireless through smarter private networks, fixed wireless access, and intelligent automation across the stack.
    AI detects and resolves network issues before they impact service, improving uptime and customer satisfaction. It’s also opening the door to entirely new revenue streams and business models.
    Each wireless generation brought new capabilities. AI, however, marks a more profound shift — networks that think, respond, and evolve in real time.
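    To make the predictive-maintenance idea above concrete, here is a minimal sketch of the simplest possible version: watch a stream of one network KPI (latency readings for a single cell site), flag values that drift far outside the recent baseline, and call a remediation hook before users feel the outage. The site ID, thresholds, sample values, and remediate_site() function are all hypothetical; production systems rely on far richer telemetry and learned models rather than a simple z-score.

```python
# Hypothetical sketch of predictive fault detection on a network KPI stream.
# Thresholds, the KPI, the sample data, and remediate_site() are illustrative.
from collections import deque
from statistics import mean, stdev


def remediate_site(site_id: str, reason: str) -> None:
    # Placeholder for an operator-specific action (restart a unit, reroute traffic, etc.).
    print(f"[action] {site_id}: {reason}")


def monitor(samples, window=20, z_threshold=3.0, site_id="cell-042"):
    history = deque(maxlen=window)
    for t, latency_ms in enumerate(samples):
        if len(history) >= window and stdev(history) > 0:
            z = (latency_ms - mean(history)) / stdev(history)
            if z > z_threshold:
                remediate_site(site_id, f"latency anomaly at t={t}: {latency_ms} ms (z={z:.1f})")
        history.append(latency_ms)


# Normal readings around 20 ms, then a developing fault that gets flagged early.
monitor([20, 21, 19, 22, 20, 21, 20, 19, 22, 21,
         20, 21, 19, 20, 22, 21, 20, 19, 21, 20,
         23, 26, 41, 65])
```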
    AI Acceleration Will Outpace Past Tech Shifts
    Many may underestimate the speed and magnitude of AI-driven change.
    The shift from traditional voice and data systems to AI-driven network intelligence is already underway.
    Although predictions abound, the true scope remains unclear.
    It’s tempting to assume we understand AI’s trajectory, but history suggests otherwise.

    Today, AI is already automating maintenance and optimizing performance without user disruption. The technologies we’ll rely on in the near future may still be on the drawing board.
    Few predicted that the analog cell phones of the 1980s and ’90s would evolve into today’s smartphones—a reminder of how quickly foundational technologies can be reimagined.
    History shows that disruptive technologies rarely follow predictable paths — and AI is no exception. It’s already upending business models across industries.
    Technological shifts bring both new opportunities and complex trade-offs.
    AI Disruption Will Move Faster Than Ever
    The same cycle of reinvention is happening now — but with AI, it’s moving at unprecedented speed.
    Despite all the discussion, many still treat AI as a future concern — yet the shift is already well underway.
    As with every major technological leap, there will be gains and losses. The AI transition brings clear trade-offs: efficiency and innovation on one side; job displacement and privacy erosion on the other.
    Unlike past tech waves that unfolded over decades, the AI shift will reshape industries in just a few years — and the pace of change will only accelerate.
    AI Will Reshape All Sectors and Companies
    This shift will unfold faster than most organizations or individuals are prepared to handle.
    Today’s industries will likely look very different tomorrow. Entirely new sectors will emerge as legacy models become obsolete — redefining market leadership across industries.
    Telecom’s past holds a clear warning: market dominance can vanish quickly when companies ignore disruption.
    After the court-ordered breakup of the Bell System in 1984 separated AT&T from its regional operating companies, those “Baby Bells” eventually moved into long-distance service, while AT&T remained barred from selling local access — undermining its advantage.
    As the market shifted and competitors gained ground, AT&T lost its dominance and became vulnerable enough that SBC, a former regional Bell, acquired it and took on its name.

    It’s a case study of how incumbents fall when they fail to adapt — precisely the kind of pressure AI is now exerting across industries.
    SBC’s acquisition of AT&T flipped the power dynamic — proof that size doesn’t protect against disruption.
    The once-crowded telecom field has consolidated into just a few dominant players — each facing new threats from AI-native challengers.
    Legacy telecom models are being steadily displaced by faster, more flexible wireless, broadband, and streaming alternatives.
    No Industry Is Immune From AI Disruption
    AI will accelerate the next wave of industrial evolution — bringing innovations and consequences we’re only beginning to grasp.
    New winners will emerge as past leaders struggle to hang on — a shift that will also reshape the investment landscape. Startups leveraging AI will likely redefine leadership in sectors where incumbents have grown complacent.
    Nvidia’s rise is part of a broader trend: the next market leaders will emerge wherever AI creates a clear competitive advantage — whether in chips, code, or entirely new markets.
    The AI-driven future is arriving faster than most organizations are ready for. Adapting to this accelerating wave of change is no longer optional — it’s essential. Companies that act decisively today will define the winners of tomorrow.
    #networks #business #models #rewiring #telecom
  • Google’s New AI Tool Generates Convincing Deepfakes of Riots, Conflict, and Election Fraud

    Google's recently launched AI video tool can generate realistic clips that contain misleading or inflammatory information about news events, according to a TIME analysis and several tech watchdogs.

    TIME was able to use Veo 3 to create realistic videos, including a Pakistani crowd setting fire to a Hindu temple; Chinese researchers handling a bat in a wet lab; an election worker shredding ballots; and Palestinians gratefully accepting U.S. aid in Gaza. While each of these videos contained some noticeable inaccuracies, several experts told TIME that if shared on social media with a misleading caption in the heat of a breaking news event, these videos could conceivably fuel social unrest or violence.

    While text-to-video generators have existed for several years, Veo 3 marks a significant jump forward, creating AI clips that are nearly indistinguishable from real ones. Unlike the outputs of previous video generators like OpenAI’s Sora, Veo 3 videos can include dialogue, soundtracks, and sound effects. They largely follow the rules of physics, and lack the telltale flaws of past AI-generated imagery.

    Users have had a field day with the tool, creating short films about plastic babies, pharma ads, and man-on-the-street interviews. But experts worry that tools like Veo 3 will have a much more dangerous effect: turbocharging the spread of misinformation and propaganda, and making it even harder to tell fiction from reality. Social media is already flooded with AI-generated content about politicians. In the first week of Veo 3’s release, online users posted fake news segments in multiple languages, including an anchor announcing the death of J.K. Rowling, as well as fake political news conferences.

    “The risks from deepfakes and synthetic media have been well known and obvious for years, and the fact the tech industry can’t even protect against such well-understood, obvious risks is a clear warning sign that they are not responsible enough to handle even more dangerous, uncontrolled AI and AGI,” says Connor Leahy, the CEO of Conjecture, an AI safety company. “The fact that such blatant irresponsible behavior remains completely unregulated and unpunished will have predictably terrible consequences for innocent people around the globe.”

    Days after Veo 3’s release, a car plowed through a crowd in Liverpool, England, injuring more than 70 people. Police swiftly clarified that the driver was white, to preempt racist speculation of migrant involvement. (Last summer, false reports that a knife attacker was an undocumented Muslim migrant sparked riots in several cities.) Days later, Veo 3 obligingly generated a video of a similar scene, showing police surrounding a car that had just crashed—and a Black driver exiting the vehicle. TIME generated the video with the following prompt: “A video of a stationary car surrounded by police in Liverpool, surrounded by trash. Aftermath of a car crash. There are people running away from the car. A man with brown skin is the driver, who slowly exits the car as police arrive- he is arrested. The video is shot from above - the window of a building. There are screams in the background.”

    After TIME contacted Google about these videos, the company said it would begin adding a visible watermark to videos generated with Veo 3. The watermark now appears on videos generated by the tool. However, it is very small and could easily be cropped out with video-editing software. In a statement, a Google spokesperson said: “Veo 3 has proved hugely popular since its launch. We're committed to developing AI responsibly and we have clear policies to protect users from harm and governing the use of our AI tools.”

    Videos generated by Veo 3 have always contained an invisible watermark known as SynthID, the spokesperson said. Google is currently working on a tool called SynthID Detector that would allow anyone to upload a video to check whether it contains such a watermark, the spokesperson added. However, this tool is not yet publicly available.

    Attempted safeguards

    Veo 3 is available for $249 a month to Google AI Ultra subscribers in countries including the United States and United Kingdom. There were plenty of prompts that Veo 3 did block TIME from creating, especially related to migrants or violence. When TIME asked the model to create footage of a fictional hurricane, it wrote that such a video went against its safety guidelines, and “could be misinterpreted as real and cause unnecessary panic or confusion.” The model generally refused to generate videos of recognizable public figures, including President Trump and Elon Musk. It refused to create a video of Anthony Fauci saying that COVID was a hoax perpetrated by the U.S. government.

    Veo’s website states that it blocks “harmful requests and results.” The model’s documentation says it underwent pre-release red-teaming, in which testers attempted to elicit harmful outputs from the tool. Additional safeguards were then put in place, including filters on its outputs.

    A technical paper released by Google alongside Veo 3 downplays the misinformation risks that the model might pose. Veo 3 is bad at creating text, and is “generally prone to small hallucinations that mark videos as clearly fake,” it says. “Second, Veo 3 has a bias for generating cinematic footage, with frequent camera cuts and dramatic camera angles – making it difficult to generate realistic coercive videos, which would be of a lower production quality.”

    However, minimal prompting did lead to the creation of provocative videos. One showed a man wearing an LGBT rainbow badge pulling envelopes out of a ballot box and feeding them into a paper shredder. (Veo 3 titled the file “Election Fraud Video.”) Other videos generated in response to prompts by TIME included a dirty factory filled with workers scooping infant formula with their bare hands; an e-bike bursting into flames on a New York City street; and Houthi rebels angrily seizing an American flag.

    Some users have been able to take misleading videos even further. Internet researcher Henk van Ess created a fabricated political scandal using Veo 3 by editing together short video clips into a fake newsreel that suggested a small-town school would be replaced by a yacht manufacturer. “If I can create one convincing fake story in 28 minutes, imagine what dedicated bad actors can produce,” he wrote on Substack. “We're talking about the potential for dozens of fabricated scandals per day.”

    “Companies need to be creating mechanisms to distinguish between authentic and synthetic imagery right now,” says Margaret Mitchell, chief AI ethics scientist at Hugging Face. “The benefits of this kind of power—being able to generate realistic life scenes—might include making it possible for people to make their own movies, or to help people via role-playing through stressful situations,” she says. “The potential risks include making it super easy to create intense propaganda that manipulatively enrages masses of people, or confirms their biases so as to further propagate discrimination—and bloodshed.”

    In the past, there were surefire ways of telling that a video was AI-generated—perhaps a person might have six fingers, or their face might transform between the beginning of the video and the end. But as models improve, those signs are becoming increasingly rare. (A video depicting how AIs have rendered Will Smith eating spaghetti shows how far the technology has come in the last three years.) For now, Veo 3 will only generate clips up to eight seconds long, meaning that if a video contains shots that linger for longer, it’s a sign it could be genuine. But this limitation is not likely to last for long.

    Eroding trust online

    Cybersecurity experts warn that advanced AI video tools will allow attackers to impersonate executives, vendors or employees at scale, convincing victims to relinquish important data. Nina Brown, a Syracuse University professor who specializes in the intersection of media law and technology, says that while there are other large potential harms—including election interference and the spread of nonconsensual sexually explicit imagery—arguably most concerning is the erosion of collective online trust. “There are smaller harms that cumulatively have this effect of, ‘can anybody trust what they see?’” she says. “That’s the biggest danger.”

    Already, accusations that real videos are AI-generated have gone viral online. One post on X, which received 2.4 million views, accused a Daily Wire journalist of sharing an AI-generated video of an aid distribution site in Gaza. A journalist at the BBC later confirmed that the video was authentic. Conversely, an AI-generated video of an “emotional support kangaroo” trying to board an airplane went viral and was widely accepted as real by social media users.

    Veo 3 and other advanced deepfake tools will also likely spur novel legal clashes. Issues around copyright have flared up, with AI labs including Google being sued by artists for allegedly training on their copyrighted content without authorization. (DeepMind told TechCrunch that Google models like Veo "may" be trained on YouTube material.) Celebrities who are subjected to hyper-realistic deepfakes have some legal protections thanks to “right of publicity” statutes, but those vary drastically from state to state. In April, Congress passed the Take It Down Act, which criminalizes non-consensual deepfake porn and requires platforms to take down such material.

    Industry watchdogs argue that additional regulation is necessary to mitigate the spread of deepfake misinformation. “Existing technical safeguards implemented by technology companies such as 'safety classifiers' are proving insufficient to stop harmful images and videos from being generated,” says Julia Smakman, a researcher at the Ada Lovelace Institute. “As of now, the only way to effectively prevent deepfake videos from being used to spread misinformation online is to restrict access to models that can generate them, and to pass laws that require those models to meet safety requirements that meaningfully prevent misuse.”
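    One of the few practical tells noted above is Veo 3's current eight-second clip limit: a single uncut shot that runs much longer is a weak hint that the footage was not generated in one pass. The sketch below is purely illustrative, not a reliable authenticity test. It assumes the OpenCV library (opencv-python), a placeholder file name, and a crude frame-difference threshold for detecting hard cuts, and it can be defeated by editing, re-encoding, or future model versions.

        # Illustrative sketch: estimate the longest uncut shot in a video by
        # flagging hard cuts with a simple mean frame-difference threshold.
        # A shot far longer than Veo 3's current 8-second cap is only a weak
        # hint of authenticity, never proof.
        import cv2  # assumes: pip install opencv-python

        def longest_shot_seconds(path: str, cut_threshold: float = 30.0) -> float:
            cap = cv2.VideoCapture(path)
            fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if metadata is missing
            prev_gray = None
            frames_in_shot = 0
            longest = 0
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                if prev_gray is not None and cv2.absdiff(gray, prev_gray).mean() > cut_threshold:
                    # Large average pixel change between consecutive frames: treat as a cut.
                    longest = max(longest, frames_in_shot)
                    frames_in_shot = 0
                frames_in_shot += 1
                prev_gray = gray
            cap.release()
            return max(longest, frames_in_shot) / fps

        if __name__ == "__main__":
            # "clip.mp4" is a hypothetical file name used only for illustration.
            print(f"Longest uncut shot: {longest_shot_seconds('clip.mp4'):.1f} s")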
  • Managers rethink ecological scenarios as threats rise amid climate change

    In Sequoia and Kings Canyon National Parks in California, trees that have persisted through rain and shine for thousands of years are now facing multiple threats triggered by a changing climate.

    Scientists and park managers once thought giant sequoia forests were nearly impervious to stressors like wildfire, drought and pests. Yet, even very large trees are proving vulnerable, particularly when those stressors are amplified by rising temperatures and increasing weather extremes.

    The rapid pace of climate change—combined with threats like the spread of invasive species and diseases—can affect ecosystems in ways that defy expectations based on past experiences. As a result, Western forests are transitioning to grasslands or shrublands after unprecedented wildfires. Woody plants are expanding into coastal wetlands. Coral reefs are being lost entirely.

    To protect these places, which are valued for their natural beauty and the benefits they provide for recreation, clean water and wildlife, forest and land managers increasingly must anticipate risks they have never seen before. And they must prepare for what those risks will mean for stewardship as ecosystems rapidly transform.

    As ecologists and a climate scientist, we’re helping them figure out how to do that.

    Managing changing ecosystems

    Traditional management approaches focus on maintaining or restoring how ecosystems looked and functioned historically.

    However, that doesn’t always work when ecosystems are subjected to new and rapidly shifting conditions.

    Ecosystems have many moving parts—plants, animals, fungi, and microbes; and the soil, air and water in which they live—that interact with one another in complex ways.

    When the climate changes, it’s like shifting the ground on which everything rests. The results can undermine the integrity of the system, leading to ecological changes that are hard to predict.

    To plan for an uncertain future, natural resource managers need to consider many different ways changes in climate and ecosystems could affect their landscapes. Essentially, what scenarios are possible?

    Preparing for multiple possibilities

    At Sequoia and Kings Canyon, park managers were aware that climate change posed some big risks to the iconic trees under their care. More than a decade ago, they undertook a major effort to explore different scenarios that could play out in the future.

    It’s a good thing they did, because some of the more extreme possibilities they imagined happened sooner than expected.

    In 2014, drought in California caused the giant sequoias’ foliage to die back, something never documented before. In 2017, sequoia trees began dying from insect damage. And, in 2020 and 2021, fires burned through sequoia groves, killing thousands of ancient trees.

    While these extreme events came as a surprise to many people, thinking through the possibilities ahead of time meant the park managers had already begun to take steps that proved beneficial. One example was prioritizing prescribed burns to remove undergrowth that could fuel hotter, more destructive fires.

    The key to effective planning is a thoughtful consideration of a suite of strategies that are likely to succeed in the face of many different changes in climates and ecosystems. That involves thinking through wide-ranging potential outcomes to see how different strategies might fare under each scenario—including preparing for catastrophic possibilities, even those considered unlikely.

    For example, prescribed burning may reduce risks from both catastrophic wildfire and drought by reducing the density of plant growth, whereas suppressing all fires could increase those risks in the long run.

    Strategies undertaken today have consequences for decades to come. Managers need to have confidence that they are making good investments when they put limited resources toward actions like forest thinning, invasive species control, buying seeds or replanting trees. Scenarios can help inform those investment choices.
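    To make this kind of stress-testing concrete, here is a minimal, purely hypothetical sketch: the scenario names, strategy names, and scores below are invented for illustration and are not drawn from any park's actual planning documents. It simply tabulates how each candidate strategy fares under each scenario and highlights the option with the best worst-case outcome (a maximin rule), one simple way to spot strategies that hold up across divergent futures.

        # Hypothetical illustration of screening management strategies across scenarios.
        # Scores (0-10, higher is better) are invented placeholders that a planning
        # team would replace with expert judgment or ecological model output.
        SCENARIOS = ["hotter drought", "megafire year", "insect outbreak", "near-historical climate"]

        STRATEGY_SCORES = {
            "prescribed burning + thinning": [7, 8, 6, 7],
            "full fire suppression":         [4, 2, 5, 8],
            "assisted replanting only":      [5, 4, 4, 7],
        }

        if __name__ == "__main__":
            for strategy, scores in STRATEGY_SCORES.items():
                # Worst case and average across all scenarios for this strategy.
                print(f"{strategy:30s} worst={min(scores)} avg={sum(scores) / len(scores):.1f}")
            # A robust (maximin) choice optimizes the worst case across scenarios.
            robust = max(STRATEGY_SCORES, key=lambda name: min(STRATEGY_SCORES[name]))
            print("Most robust under these invented scores:", robust)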

    Constructing credible scenarios of ecological change to inform this type of planning requires considering the most important unknowns. Scenarios look not only at how the climate could change, but also how complex ecosystems could react and what surprises might lie beyond the horizon.

    Scientists at the North Central Climate Adaptation Science Center are collaborating with managers in the Nebraska Sandhills to develop scenarios of future ecological change under different climate conditions, disturbance events like fires and extreme droughts, and land uses like grazing.

    Key ingredients for crafting ecological scenarios

    To provide some guidance to people tasked with managing these landscapes, we brought together a group of experts in ecology, climate science, and natural resource management from across universities and government agencies.

    We identified three key ingredients for constructing credible ecological scenarios:

    1. Embracing ecological uncertainty: Instead of banking on one “most likely” outcome for ecosystems in a changing climate, managers can better prepare by mapping out multiple possibilities. In Nebraska’s Sandhills, we are exploring how this mostly intact native prairie could transform, with outcomes as divergent as woodlands and open dunes.

    2. Thinking in trajectories: It’s helpful to consider not just the outcomes, but also the potential pathways for getting there. Will ecological changes unfold gradually or all at once? By envisioning different pathways through which ecosystems might respond to climate change and other stressors, natural resource managers can identify critical moments where specific actions, such as removing tree seedlings encroaching into grasslands, can steer ecosystems toward a more desirable future.

    3. Preparing for surprises: Planning for rare disasters or sudden species collapses helps managers respond nimbly when the unexpected strikes, such as a severe drought leading to widespread erosion. Being prepared for abrupt changes and having contingency plans can mean the difference between quickly helping an ecosystem recover and losing it entirely.

    Over the past decade, access to climate model projections through easy-to-use websites has revolutionized resource managers’ ability to explore different scenarios of how the local climate might change.

    What managers are missing today is similar access to ecological model projections and tools that can help them anticipate possible changes in ecosystems. To bridge this gap, we believe the scientific community should prioritize developing ecological projections and decision-support tools that can empower managers to plan for ecological uncertainty with greater confidence and foresight.

    Ecological scenarios don’t eliminate uncertainty, but they can help to navigate it more effectively by identifying strategic actions to manage forests and other ecosystems.

    Kyra Clark-Wolf is a research scientist in ecological transformation at the University of Colorado Boulder.

    Brian W. Miller is a research ecologist at the U.S. Geological Survey.

    Imtiaz Rangwala is a research scientist in climate at the Cooperative Institute for Research in Environmental Sciences at the University of Colorado Boulder.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.
  • At the Projective Territories Symposium, domesticity, density, and form emerge as key ideas for addressing the climate crisis

    A small home in Wayne County, Missouri, was torn apart by a tornado.
    An aerial image by Jeff Roberson taken on March 15 depicts chunks of stick-framed walls and half-recognizable debris strewn across a patchy lawn in an eviscerated orthography of middle-American life. Elisa Iturbe, assistant professor of Architecture at Harvard’s Graduate School of Design, describes this scene as “an image of climate impact, climate victimhood…these walls are doing the hard work of containment, of containing the rituals of human lifestyle.”

    Roberson’s image embodied the themes that emerged from the Projective Territories Symposium: The atomized fragility of contemporary American domesticity, the fundamental link between ways of living and modes of land tenure, and the necessary primacy of form in architecture’s response to the incoming upheaval of climate change.
    Lydia Kallipoliti talked about her 2024 book Histories of Ecological Design; An Unfinished Cyclopedia. (Andy Eichler)

    Projective Territories was hosted at Kent State University’s College of Architecture and Environmental Design on April 3 and 4. Organized and led by the CAED’s assistant professor Paul Mosley, the symposium brought Iturbe, Columbia University’s associate professor Lydia Kallipoliti, California College of the Arts’ associate professor Neeraj Bhatia, and professor Albert Pope of Rice University to Kent, Ohio, to discuss the relationship between territory and architecture in the face of climate change.
    “At its core, territory is land altered by human inhabitation,” read Mosley’s synopsis. “If ensuring a survivable future means rethinking realities of social organization, economy, and subsistence, then how might architecture—as a way of thinking and rethinking the world—contribute to these new realities?”

    Projective Territories kicked off on the afternoon of April 3 with a discussion of Bhatia’s Life After Property exhibition hosted at the CAED’s Armstrong Gallery. The exhibition collected drawings, renderings, and models by Bhatia’s practice The Open Workshop on a puzzle-piece shaped table constructed from plywood and painted blue. Nestled into the table’s geometric subtractions, Bhatia, Pope, Mosley, and CAED associate professor Taraneh Meshkani discussed Bhatia’s research into the commons: A system of land tenure by which communities manage and share resources with minimal reliance on the state through an ethic of solidarity, mutualism, and reciprocity.
    Neeraj Bhatia presented new typologies for collective living. (Andy Eichler)

    The symposium’s second day was organized into a morning session, “The Erosion of Territory,” with lectures by Kallipoliti and Iturbe, and an afternoon session, “The Architecture of Expanding Ecologies,” with lectures by Bhatia and Pope.
    Mosley’s introduction to “The Erosion of Territory” situated Kallipoliti and Iturbe’s work in a discussion about “how territories have been historically shaped by extraction and control and are unraveling under strain.”

    Lydia Kallipoliti’s lecture “Ecological Design; Cohabiting the World” presented questions raised by her 2024 book Histories of Ecological Design; An Unfinished Cyclopedia, which she described as “an attempt to clarify how nature as a concept was used in history.” Kallipoliti proposed an ecological model that projects outward from domestic interiors to the world to generate a “universe of fragmented worldviews and a cloud of stories.” Iturbe’s “Transgressing Immutable Lines” centered on her research into the formal potentials for Community Land Trusts—nonprofits that own buildings in trust on existing real estate. Iturbe described these trusts as “Not just a juridical mechanism, but a proposal for rewriting the relationship between land and people.”
    “Ecology is the basis for a more pleasurable alternative,” said Mosley in his introduction to the day’s second session. “Cooperation and care aren’t the goals, but the means of happiness.”
    An exhibition complementing the symposium shared drawings, renderings, and models.

    Neeraj Bhatia’s lecture “Life After Property” complemented the previous day’s exhibition, problematizing the housing crisis as an ideological commitment to housing rooted in market speculation. Bhatia presented new typologies for collective living with the flexibility to formally stabilize the interpersonal relationships that define life in the commons. Albert Pope finished the day’s lectures with “Inverse Utopia,” presenting work from his 2024 book of the same name, which problematizes postwar American urban sprawl as an inability to visualize the vast horizontal expansion of low-density development.
    Collectively, the day’s speakers outlined a model that situated the American domestic form at the center of the global climate crisis. This formal ideology of the isolated object, which demands complete separation from productive territories, is being actively dismembered by climate change. The speakers’ proposed solutions were unified by fresh considerations of established ideas of typology and form, directly engaging the politics of the collective as an input for shaping existing space. As Friday’s session drew to a close, the single-family home appeared as a primitive relic that architecture must overcome. Albert Pope’s images of tower complexes in Hong Kong and council estates in London that house thousands appeared as visions of the future.
    “The only way we can begin to address this dilemma is to begin to understand who we are in order to enlist the kinds of collective responses to this problem,” said Pope.
    Walker MacMurdo is an architectural designer, critic, and adjunct professor who studies the relationship between architecture and the ground at Kent State University’s College of Architecture and Environmental Design.
    #projective #territories #symposium #domesticity #density
  • Something remarkable is happening with violent crime rates in the US

    The astounding drop in violent crime that began in the 1990s and extended through the mid-2010s is one of the most important — and most underappreciated — good news stories of recent memory. That made its reversal during the pandemic so worrying. In the first full year of the pandemic, the FBI tallied 22,134 murders nationwide, up from 16,669 in 2019 — an increase of roughly 34 percent, the sharpest one-year rise in modern crime record-keeping. In 2021, Philadelphia alone recorded a record 562 homicides, while Baltimore experienced a near-record 337 murders. Between 2019 and 2020, the average number of weekly emergency department visits for gunshots increased by 37 percent, and largely stayed high through the following year. By the 2024 election, for the first time in a while, violent crime was a major political issue in the US. A Pew survey that year found that 58 percent of Americans believed crime should be a top priority for the president and Congress, up from 47 percent in 2021. And yet even as the presidential campaign was unfolding, the violent crime spike of the pandemic had already subsided — and crime rates have kept dropping. The FBI’s 2023 crime report found that murder was down nearly 12 percent year over year, and in 2024 it kept falling to roughly 16,700 murders, on par with pre-pandemic levels. The early numbers for 2025 are so promising that Jeff Asher, one of the best independent analysts on crime, recently asked in a piece whether this year could have the lowest murder rate in US history. All of which raises two questions: What’s driving a decrease in crime every bit as sharp as the pandemic-era increase? And why do so many of us find it so hard to believe?

    The crime wave crashes
    We shouldn’t jump to conclusions about this year’s crime rates based on the early data, especially since we’re just now beginning the summer, when violent crime almost always rises. Crime data in the US is also patchy and slow — I can tell you how many soybeans the US raised in March, but I can’t tell you how many people have been murdered in the US this year. But what we can tell looks very good. The Real-Time Crime Index, an academic project that collects crime data from more than 380 police agencies covering nearly 100 million people, estimates there were 1,488 murders in the US this year through March, compared to an estimated 1,899 over the same months last year. That’s a decrease of nearly 22 percent. Violent crime overall is down by about 11 percent. Motor vehicle theft, which became an epidemic during the pandemic, is down by over 26 percent. Peer down to the local level, and the picture just keeps getting better. In Baltimore, which The Wire made synonymous with violent, drug-related crime, homicides fell to 199 last year, its best showing in over a decade. As of early May, the city had 45 murders, down another third from the same period last year. City emergency rooms that were once full of gunshot victims have gone quiet. How much lower could it go nationally? The record low homicide rate, at least since national records started being kept in 1960, is 4.45 per 100,000 in 2014. So far this year, according to Asher, murder is down in 25 of the 30 cities that reported the most murders in 2023. Asher argues that if the numbers hold, “a 10 percent or more decline in murder nationally in 2025 would roughly tie 2014 for the lowest murder rate ever recorded.”

    What’s behind the drop?
    In short: The pandemic led to a huge increase in violent crime, and as the pandemic waned, so did the wave. The closure of schools during the pandemic, especially in already higher-crime cities in the Northeast, meant far more young men — who are statistically more likely to be either perpetrators of violent crime or victims of it — on the streets. The closure of social services left fewer resources for them to draw on, and the sheer stress of a once-in-a-lifetime health catastrophe set everyone on edge. The murder of George Floyd in spring 2020 led to a collapse in community trust in policing, which in turn seemed to lead to less aggressive policing altogether. As the pandemic eased, though, those buffers came back, providing a natural brake on violent crime. But the government, from the national level down to cities, also took direct actions to stem the flood of violence. The White House under President Joe Biden poured hundreds of millions of dollars into community violence interruption programs, which aim to break the cycle of retribution that can lead to homicide. Baltimore’s Group Violence Reduction Strategy has brought together community groups and law enforcement to deter the people considered most likely to get involved in gun violence. And the erosion in police forces nationwide that occurred during the pandemic has largely stopped. The situation is far from perfect. Even though Floyd’s murder triggered a nationwide reckoning around police violence, recent data shows that police killings kept increasing, in part because fear of crime often stopped momentum around reforms. Here in New York, even as overall crime on the subways has fallen to historical lows, felony assaults on the trains have kept rising, fueling fears of lawlessness.

    Why can’t we believe it?
    As Memorial Day weekend marks the start of summer, the next few months will tell whether the pandemic was truly just a blip in the long-term reduction in violent crime. But what we can say is most people don’t seem to notice the positive trends. An October 2024 poll by Gallup found that 64 percent of Americans believed there was more crime nationwide than the year before, even though by that time in 2024, the post-pandemic crime drop was well under way. But such results aren’t surprising. One of the most reliable results in polling is that if you ask Americans whether crime is rising, they’ll say yes. Astonishingly, in 23 of 27 national surveys done by Gallup since 1993, Americans reported that they thought crime nationwide was rising — even though most of those surveys were done during the long crime decline. Crime is one of the best examples we have of bad news bias. By definition, a murder is an outlier event that grabs our attention, inevitably leading the nightly local news. Sometimes, as during the pandemic, that bias can match reality. But if we fail to adjust to what is actually happening around us — not just what we think is happening — it won’t just make us think our cities are more dangerous than they really are. It’ll sap energy for the reforms that can really make a difference. A version of this story originally appeared in the Good News newsletter.
    #something #remarkable #happening #with #violent
  • This Deposit of 'Weird' Cretaceous Amber Could Reveal Hints to Long-Forgotten Tsunamis in Japan

    This Deposit of ‘Weird’ Cretaceous Amber Could Reveal Hints to Long-Forgotten Tsunamis in Japan
    A new study highlights the potential of amber fossils to capture evidence of powerful, prehistoric ocean waves

    A tsunami might have occurred some 115 million years ago, near where deposits of Cretaceous amber were found in Japan.
    Wikimedia Commons under CC0 1.0

    Scientists in Japan have uncovered amber deposits that may hold elusive evidence of tsunamis that occurred between 114 million and 116 million years ago. Their findings were published in the journal Scientific Reports last week.
    The researchers stumbled upon the amber—fossilized tree resin—by chance while collecting rocks from a sand mine in Hokkaido, an island in northern Japan. The deposit would have been on the seafloor when it was formed during the Cretaceous period.
    “We found a weird form of amber,” says lead author Aya Kubota, a geologist at the National Institute of Advanced Industrial Science and Technology in Japan, to Katherine Kornei at Science News.
    The scientists analyzed the resin with a technique called fluorescence imaging, in which they snapped photos of the remains under ultraviolet light. This helped them see how the amber was separated by layers of dark sediment, creating shapes known as “flame structures.” The unusual pattern arises when soft amber deforms before completely hardening. “Generally, they will form when a denser layer gets deposited on top of a softer layer,” says Carrie Garrison-Laney, a geologist at Washington Sea Grant who was not involved in the study, to Science News.
    The researchers suggest this is evidence that the resin rapidly traveled from land while it was still malleable and solidified underwater. A tsunami could be what swept the trees from land to the ocean so quickly, the study authors write. If true, this could offer scientists a potential new technique for finding prehistoric tsunamis.
    “Identifying tsunamis is generally challenging,” Kubota explains to Live Science’s Olivia Ferrari in an email. Tsunami deposits are easily eroded by the environment, and they can also be hard to distinguish from deposits caused by other storms. But in this case, “by combining detailed field observations with the internal structures of amber, we were able to conclude that the most plausible cause was tsunamis.”

    Cretaceous amber deposits (a, b, d, e) and fossilized driftwood (c) examined in the study

    Kubota, Aya et al., Scientific Reports, 2025, under CC BY-NC-ND 4.0

    Other evidence also bolsters the researchers’ conclusion: A massive, nearby landslide offers a sign that an earthquake may have occurred around the same time the amber formed, and displaced mud and tree trunks were found in the same sediments—all signs of a violent tsunami. The trunks didn’t show any signs of erosion by shallow water-dwelling marine creatures, suggesting they were carried quickly out to sea.
    The vegetation found in the fossil deposit suggests multiple tsunamis occurred within the span of two million years, reports Hannah Richter for Science.
    But Garrison-Laney tells Science News that more evidence is needed to prove the amber is linked to a tsunami. She’s not sure the Cretaceous tree resin would have stayed soft once it hit the cold ocean water. “That seems like a stretch to me,” she tells the publication, adding that research on more of the area’s amber deposit will be needed to confirm the findings.
    With further study, scientists could use amber-rich sediments as a way to identify tsunamis throughout history. “Resin offers a rare, time-sensitive snapshot of depositional processes,” Kubota tells Live Science. Previously, scientists have found tiny crustaceans, prehistoric mollusks and even hell ants encased in the orangey resin, a window into worlds past.
    Now, “the emerging concept of ‘amber sedimentology’ holds exciting potential to provide unique insights into sedimentological processes,” Kubota adds to Live Science.

    #this #deposit #weird #cretaceous #amber
  • Can Terrain-Based Color Grading Really Reflect Real-World Altitude Perception Accurately?


    I recently got intrigued by how certain online tools render terrain using dynamic color gradients to show depth or elevation changes, especially when visualizing geographical data or landscape layers on a 2D canvas. What caught my attention was how a color transition, say from green to brown to white, can subtly convey a mountain’s progression — and how much this alone can shape how we perceive space, depth, and realism without using any lighting or shadows. I’d love to dive deeper into the logic and techniques behind this and how it’s approached from a GPU programming perspective.

    One thing I started questioning is how effective and precise color-based elevation rendering is, especially when it comes to shader implementation. For instance, I observed that some tools use a simple gradient approach linked to altitude values, which works fine visually but might not reflect real-world depth unless tuned carefully. I tried assigning color ramps in fragment shaders, interpolated from DEM (digital elevation model) values, but it wasn’t quite as expressive as I expected — especially when transitioning over large terrain with small elevation variance.

    To simulate some form of perceptual realism, I began blending color ramps using noise functions to introduce a more organic transition, but I’m not confident this is the best way to approach it. I also played around with multi-step gradients, assigning different hue families per range (e.g., green under 500m, brown 500–1500m, grey and white above that), but it raises the question of universality — is there a standard or accepted practice for terrain color logic in shader design? Or should we just lean into stylized rendering if it communicates the structure effectively?

    Elevation itself refers to the height of a specific point on the Earth’s surface relative to sea level. It’s a key component in any terrain rendering logic and often forms the foundation for visual differentiation of the landscape. When using an online elevation tool, the elevation values are typically mapped to colors or heightmaps to produce a more tangible view of the land’s shape. This numerical-to-visual translation plays a central role in how users interpret spatial data. I found this idea genuinely inspiring because it shows that even raw altitude numbers can create an intuitive and informative visual experience.

    What I couldn’t figure out clearly is how people deal with the in-between areas — those subtle transitions where terrain rises or drops slowly — without making the result look blocky or washed out. I’ve attempted linear color interpolation based on normalized height values directly inside the fragment shader, and I’ve also experimented with stepping through fixed color zones. Both methods gave me somewhat predictable results, but neither satisfied the realism I was aiming for when zooming closer to the terrain.

    I also wonder about the performance side of this. If I rely on fragment-shader-based rendering with multiple condition checks and interpolations, will that scale well on larger canvases or with more detailed elevation data? Or would pushing color values per-vertex and interpolating across fragments give a better balance of performance and detail? It’s not immediately clear to me which path is more commonly used or recommended.

    Another question I’ve been mulling over is whether a lookup table (LUT) would make more sense for GPU-side elevation rendering. If I store predefined biome and elevation color data in a LUT, is it practical to access and apply that in real-time shader logic? And if so, what’s the cleanest way to structure and query this LUT in a WebGL or GLSL environment?

    I’m looking to understand how others have approached this type of rendering, specifically when color is used to express terrain form based solely on elevation values. I’m especially curious about shader structure, transition smoothing methods, and how to avoid that “posterized” look when mapping heights to colors over wide areas.
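    For reference, here is a minimal GLSL (ES 3.00) fragment-shader sketch of the multi-step ramp experiment the question describes, using smoothstep blends between stops and a cheap hash-noise dither instead of hard bands. The uniform names (u_dem, u_minElevation, u_maxElevation), the color stops, and the thresholds are illustrative assumptions for this sketch, not anything taken from the original post.

        #version 300 es
        precision highp float;

        uniform sampler2D u_dem;        // DEM heights (metres) in the red channel -- hypothetical name
        uniform float u_minElevation;   // hypothetical, e.g. 0.0
        uniform float u_maxElevation;   // hypothetical, e.g. 3000.0

        in vec2 v_uv;
        out vec4 outColor;

        // Cheap hash noise, used only to dither the ramp and hide banding.
        float hash(vec2 p) {
            return fract(sin(dot(p, vec2(127.1, 311.7))) * 43758.5453);
        }

        void main() {
            float h = texture(u_dem, v_uv).r;
            float t = clamp((h - u_minElevation) / (u_maxElevation - u_minElevation), 0.0, 1.0);

            // Illustrative color stops: lowland green, mid-slope brown, rock grey, snow white.
            vec3 lowland  = vec3(0.22, 0.45, 0.20);
            vec3 midslope = vec3(0.45, 0.35, 0.25);
            vec3 rock     = vec3(0.55, 0.55, 0.55);
            vec3 snow     = vec3(0.95, 0.95, 0.97);

            // Smooth blends between stops rather than hard if/else zones, to avoid posterization.
            vec3 c = mix(lowland, midslope, smoothstep(0.15, 0.40, t));
            c = mix(c, rock, smoothstep(0.40, 0.70, t));
            c = mix(c, snow, smoothstep(0.70, 0.90, t));

            // Tiny per-pixel dither so wide areas with little elevation change don't band.
            c += (hash(gl_FragCoord.xy) - 0.5) * 0.015;

            outColor = vec4(c, 1.0);
        }

    The overlapping smoothstep ranges, rather than discrete zones, are what keep gentle transitions from looking blocky, and the dither amplitude is kept small enough to be invisible up close while still breaking up banding over nearly flat terrain.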

    If you want to apply colors in the shader based on elevation, the standard approach would be to use a 1D texture as a lookup table. You then map elevation to a texture coordinate in [0, 1] and use that to sample the texture (which should use linear interpolation). You can do this per-vertex if your vertices are dense enough. This allows you to use arbitrarily complex gradients.

    However, elevation-based coloring is not very flexible. It works for some situations but otherwise is not ideal. For more complicated and realistic colors there are two other options:

    Add layers - e.g. you can have another texture for your terrain which alters color based on other properties like water depth or temperature, etc. This can be combined with the elevation-based coloring. This can be done in the shader, but more layers result in slower rendering.

    Vertex colors - compute a color per-vertex on the CPU. This can use any approach to assign the colors. You pay a bit more memory but get faster rendering. You may need more vertices to capture fine details or steep terrain.

    To make colors more diverse you can use other terrain attributes to affect the color:
    - Elevation
    - Slope (gradient magnitude), evaluated at a certain scale
    - Water depth
    - Climate / biome
    - Fractal noise

    I would have a 1D texture or gradient for each attribute and then blend them in some way. Use fractal noise to “dither” the results and break up banding artifacts.

    You can also combine colored terrain with texture variation. In my terrain system each vertex has a texture index into a texture array. I manually interpolate the textures from the 3 vertices of a triangle in the shader. Per-vertex texturing gives great flexibility, as I can have as many textures as slots in the texture array. To fully use such a system you need a way to assign textures based on material type (rock, dirt, grass, etc.). Slope-based texturing (e.g. slope 0–0.2 is grass, 0.2–0.4 is dirt, >0.4 is rock) is common, but I use a much more complicated material system based on rock layers and erosion. I had a blog here but all the images got deleted: https://gamedev.net/blogs/entry/2284060-rock-layers-for-real-time-erosion-simulation/
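    As a rough illustration of the lookup-table approach described in the reply, here is a minimal GLSL (ES 3.00) sketch that samples an elevation-keyed gradient, blends it with a second slope-keyed gradient, and dithers the result. All uniform names (u_dem, u_elevationLut, u_slopeLut, u_texelSize) and the slope normalization factor are assumptions made for the sketch, not part of the original answer.

        #version 300 es
        precision highp float;

        uniform sampler2D u_dem;          // heightmap, metres in the red channel (hypothetical)
        uniform sampler2D u_elevationLut; // 256x1 gradient keyed on normalized elevation, LINEAR filtered
        uniform sampler2D u_slopeLut;     // 256x1 gradient keyed on slope (hypothetical second attribute)
        uniform float u_minElevation;
        uniform float u_maxElevation;
        uniform vec2  u_texelSize;        // 1.0 / DEM texture resolution, for the finite-difference slope

        in vec2 v_uv;
        out vec4 outColor;

        float height(vec2 uv) { return texture(u_dem, uv).r; }

        float hash(vec2 p) {
            return fract(sin(dot(p, vec2(127.1, 311.7))) * 43758.5453);
        }

        void main() {
            float h = height(v_uv);
            float t = clamp((h - u_minElevation) / (u_maxElevation - u_minElevation), 0.0, 1.0);

            // Sample the elevation gradient; the texture's own LINEAR filtering interpolates
            // between the color stops baked into the LUT.
            vec3 byElevation = texture(u_elevationLut, vec2(t, 0.5)).rgb;

            // Rough slope from central differences of neighbouring heights; the 0.05 factor
            // that squashes it into [0, 1] is an arbitrary assumption for this sketch.
            float dx = height(v_uv + vec2(u_texelSize.x, 0.0)) - height(v_uv - vec2(u_texelSize.x, 0.0));
            float dy = height(v_uv + vec2(0.0, u_texelSize.y)) - height(v_uv - vec2(0.0, u_texelSize.y));
            float slope = clamp(length(vec2(dx, dy)) * 0.05, 0.0, 1.0);
            vec3 bySlope = texture(u_slopeLut, vec2(slope, 0.5)).rgb;

            // Blend the two attribute gradients, then dither to break up banding artifacts.
            vec3 c = mix(byElevation, bySlope, slope);
            c += (hash(gl_FragCoord.xy) - 0.5) * 0.01;

            outColor = vec4(c, 1.0);
        }

    One practical note: WebGL has no true 1D texture target, so the usual stand-in is a 256x1 2D texture with LINEAR filtering and CLAMP_TO_EDGE wrapping, uploaded once from the CPU and sampled at vec2(t, 0.5); one common way to extend this to biomes is to stack one gradient per row and pick the v coordinate from a per-vertex or per-tile biome value.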
    #can #terrainbased #color #grading #really
    Can Terrain-Based Color Grading Really Reflect Real-World Altitude Perception Accurately?
    Author I recently got intrigued by how certain online tools render terrain using dynamic color gradients to show depth or elevation changes, especially when visualizing geographical data or landscape layers on a 2D canvas. What caught my attention was how a color transition, say from green to brown to white, can subtly convey a mountain’s progression — and how much this alone can shape how we perceive space, depth, and realism without using any lighting or shadows. I’d love to dive deeper into the logic and techniques behind this and how it’s approached from a GPU programming perspective.One thing I started questioning is how effective and precise color-based elevation rendering is, especially when it comes to shader implementation. For instance, I observed that some tools use a simple gradient approach linked to altitude values, which works fine visually but might not reflect real-world depth unless tuned carefully. I tried assigning color ramps in fragment shaders, interpolated from DEMvalues, but it wasn’t quite as expressive as I expected — especially when transitioning over large terrain with small elevation variance.To simulate some form of perceptual realism, I began blending color ramps using noise functions to introduce a more organic transition, but I’m not confident this is the best way to approach it. I also played around with multi-step gradients, assigning different hue families per range, but it raises the question of universality — is there a standard or accepted practice for terrain color logic in shader design? Or should we just lean into stylized rendering if it communicates the structure effectively?Elevation itself refers to the height of a specific point on the Earth's surface relative to sea level. It’s a key component in any terrain rendering logic and often forms the foundation for visual differentiation of the landscape. When using an online elevation tool, the elevation values are typically mapped to colors or heightmaps to produce a more tangible view of the land’s shape. This numerical-to-visual translation plays a central role in how users interpret spatial data. I inspired from this idea positively because it proves that even raw altitude numbers can create an intuitive and informative visual experience.What I couldn’t figure out clearly is how people deal with the in-between areas — those subtle transitions where terrain rises or drops slowly — without making the result look blocky or washed out. I’ve attempted linear color interpolation based on normalized height values directly inside the fragment shader, and I’ve also experimented with stepping through fixed color zones. Both methods gave me somewhat predictable results, but neither satisfied the realism I was aiming for when zooming closer to the terrain.I also wonder about the performance side of this. If I rely on fragment shader-based rendering with multiple condition checks and interpolations, will that scale well on larger canvases or with more detailed elevation data? Or would pushing color values per-vertex and interpolating across fragments give a better balance of performance and detail? It’s not immediately clear to me which path is more commonly used or recommended.Another question I’ve been mulling over is whether a lookup tablewould make more sense for GPU-side elevation rendering. If I store predefined biome and elevation color data in a LUT, is it practical to access and apply that in real-time shader logic? 
And if so, what’s the cleanest way to structure and query this LUT in a WebGL or GLSL environment?I’m looking to understand how others have approached this type of rendering, specifically when color is used to express terrain form based solely on elevation values. I’m especially curious about shader structure, transition smoothing methods, and how to avoid that “posterized” look when mapping heights to colors over wide areas. If you want to apply colors in the shader based on elevation, the standard approach would be to use a 1D texture as a lookup table. You then map elevation to texture coordinate inand use that to sample the texture. You can do this per-vertex if you vertices are dense enough. This allows you to use arbitrarily complex gradients.However elevation-based coloring is not very flexible. It works for some situations but otherwise is not ideal. For more complicated and realistic colors there are two other options:Add Layers - e.g. you can have another texture for your terrain which alters color based on other properties like water depth or temperature, etc. This can be combined with the elevation-based coloring. This can be done in the shader but more layers result in slower rendering.Vertex colors - compute a color per-vertex on the CPU. This can use any approach to assign the colors. You pay a bit more memory but have a faster rendering. You may need more vertices to have fine details or if the terrain is steep.To make colors more diverse you can use other terrain attributes to affect the color:ElevationSlopeevaluated a certain scale.water depthclimate / biomefractal noiseI would have a 1D texture or gradient for each attribute and then blend them in some way. Use fractal noise to “dither” the results and break up banding artifacts.You also can combine colored terrain with texture variation. In my terrain system each vertex has a texture index from a texture array. I manually interpolate the textures from the 3 vertices of a triangle in the shader. Per-vertex texturing gives great flexibility, as I can have as many textures as slots in the texture array. To fully use such a system you need a way to assign textures based on material type. Slope-based texturingis common but I use a much more complicated material system based on rock layers and erosion. I had a blog here but all the images got deleted:/ #can #terrainbased #color #grading #really
    Can Terrain-Based Color Grading Really Reflect Real-World Altitude Perception Accurately?
    Author I recently got intrigued by how certain online tools render terrain using dynamic color gradients to show depth or elevation changes, especially when visualizing geographical data or landscape layers on a 2D canvas. What caught my attention was how a color transition, say from green to brown to white, can subtly convey a mountain’s progression — and how much this alone can shape how we perceive space, depth, and realism without using any lighting or shadows. I’d love to dive deeper into the logic and techniques behind this and how it’s approached from a GPU programming perspective.One thing I started questioning is how effective and precise color-based elevation rendering is, especially when it comes to shader implementation. For instance, I observed that some tools use a simple gradient approach linked to altitude values, which works fine visually but might not reflect real-world depth unless tuned carefully. I tried assigning color ramps in fragment shaders, interpolated from DEM (digital elevation model) values, but it wasn’t quite as expressive as I expected — especially when transitioning over large terrain with small elevation variance.To simulate some form of perceptual realism, I began blending color ramps using noise functions to introduce a more organic transition, but I’m not confident this is the best way to approach it. I also played around with multi-step gradients, assigning different hue families per range (e.g., green under 500m, brown 500–1500m, grey and white above that), but it raises the question of universality — is there a standard or accepted practice for terrain color logic in shader design? Or should we just lean into stylized rendering if it communicates the structure effectively?Elevation itself refers to the height of a specific point on the Earth's surface relative to sea level. It’s a key component in any terrain rendering logic and often forms the foundation for visual differentiation of the landscape. When using an online elevation tool, the elevation values are typically mapped to colors or heightmaps to produce a more tangible view of the land’s shape. This numerical-to-visual translation plays a central role in how users interpret spatial data. I inspired from this idea positively because it proves that even raw altitude numbers can create an intuitive and informative visual experience.What I couldn’t figure out clearly is how people deal with the in-between areas — those subtle transitions where terrain rises or drops slowly — without making the result look blocky or washed out. I’ve attempted linear color interpolation based on normalized height values directly inside the fragment shader, and I’ve also experimented with stepping through fixed color zones. Both methods gave me somewhat predictable results, but neither satisfied the realism I was aiming for when zooming closer to the terrain.I also wonder about the performance side of this. If I rely on fragment shader-based rendering with multiple condition checks and interpolations, will that scale well on larger canvases or with more detailed elevation data? Or would pushing color values per-vertex and interpolating across fragments give a better balance of performance and detail? It’s not immediately clear to me which path is more commonly used or recommended.Another question I’ve been mulling over is whether a lookup table (LUT) would make more sense for GPU-side elevation rendering. 
    If you want to apply colors in the shader based on elevation, the standard approach would be to use a 1D texture as a lookup table. You then map elevation to a texture coordinate in [0,1] and use that to sample the texture (which should use linear interpolation). You can do this per-vertex if your vertices are dense enough. This allows you to use arbitrarily complex gradients.

    However, elevation-based coloring is not very flexible. It works for some situations but otherwise is not ideal. For more complicated and realistic colors there are two other options:

    - Add Layers - e.g. you can have another texture for your terrain which alters color based on other properties like water depth or temperature, etc. This can be combined with the elevation-based coloring. It can be done in the shader, but more layers result in slower rendering.
    - Vertex colors - compute a color per-vertex on the CPU. This can use any approach to assign the colors. You pay a bit more memory but get faster rendering. You may need more vertices to capture fine details or steep terrain.

    To make colors more diverse you can use other terrain attributes to affect the color:

    - Elevation
    - Slope (gradient magnitude), evaluated at a certain scale
    - Water depth
    - Climate / biome
    - Fractal noise

    I would have a 1D texture or gradient for each attribute and then blend them in some way. Use fractal noise to “dither” the results and break up banding artifacts.

    You can also combine colored terrain with texture variation. In my terrain system each vertex has a texture index into a texture array. I manually interpolate the textures from the 3 vertices of a triangle in the shader. Per-vertex texturing gives great flexibility, as I can have as many textures as slots in the texture array. To fully use such a system you need a way to assign textures based on material type (rock, dirt, grass, etc.). Slope-based texturing (e.g. slope 0–0.2 is grass, 0.2–0.4 is dirt, >0.4 is rock) is common, but I use a much more complicated material system based on rock layers and erosion. I had a blog here, but all the images got deleted: https://gamedev.net/blogs/entry/2284060-rock-layers-for-real-time-erosion-simulation/
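    A hedged sketch of the lookup-table idea above, in WebGL1-style GLSL: WebGL has no true 1D textures, so the LUT would be an N x 1 2D texture created with LINEAR filtering and CLAMP_TO_EDGE wrapping and sampled at y = 0.5. The texture names, the slope-based blend weights, and the noise dither strength are all illustrative assumptions, not values from the answer.

```glsl
// Illustrative sketch: blend per-attribute gradient LUTs and dither with noise.
precision mediump float;

varying float vElevation01;   // elevation, assumed pre-normalized to [0,1]
varying float vSlope;         // slope (gradient magnitude), 0 = flat, 1 = very steep
varying vec2  vWorldXZ;       // horizontal world position, used to look up noise

uniform sampler2D uElevationLUT; // N x 1 gradient strip: green -> brown -> white
uniform sampler2D uSlopeLUT;     // N x 1 gradient strip: e.g. vegetation -> bare rock
uniform sampler2D uNoise;        // tiling fractal-noise texture used as a dither

void main() {
    // Sample each attribute's gradient; y = 0.5 stays in the middle of the 1-texel-high strip.
    vec3 elevColor  = texture2D(uElevationLUT, vec2(vElevation01, 0.5)).rgb;
    vec3 slopeColor = texture2D(uSlopeLUT, vec2(clamp(vSlope, 0.0, 1.0), 0.5)).rgb;

    // Blend: steeper areas lean toward the slope gradient.
    vec3 color = mix(elevColor, slopeColor, smoothstep(0.3, 0.7, vSlope));

    // Dither with fractal noise to break up banding between gradient stops.
    float n = texture2D(uNoise, vWorldXZ * 0.05).r;  // 0..1
    color += (n - 0.5) * 0.04;                       // small brightness jitter

    gl_FragColor = vec4(clamp(color, 0.0, 1.0), 1.0);
}
```

    On the JavaScript side, each gradient would typically be baked into a small RGBA array (for example 256 x 1) and uploaded with gl.texImage2D, so changing the color scheme only means re-uploading the strip rather than editing shader code.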
  • AI vs. copyright

    Last year, I noted that OpenAI’s view on copyright is that it’s fine and dandy to copy, paste, and steal people’s work. OpenAI is far from alone. Anthropic, Google, and Meta all trot out the same tired old arguments: AI must be free to use copyrighted material under the legal doctrine of fair use so that they can deliver top-notch AI programs.

    Further, they all claim that if the US government doesn’t let them strip-mine the work of writers, artists, and musicians, someone else will do it instead, and won’t that be awful?

    Of course, the AI companies could just, you know, pay people for access to their work instead of stealing it under the cloak of improving AI, but that might slow down their leaders’ frantic dash to catch up with Elon Musk and become the world’s first trillionaire.

    Horrors!

    In the meantime, the median pay for a full-time writer, according to the Authors Guild, is just over $20,000 a year. Artists? $54,000 annually. And musicians? $50,000. Those numbers are all on the high side, by the way. They’re for full-time professionals, and there are far more part-timers in these fields than people who make, or try to make, a living from being a creative.

    What? You think we’re rich? Please. For every Stephen King, Jeff Koons, or Taylor Swift, there are a thousand people whose names you’ll never know. And, as hard as these folks have it now, AI firms are determined that creative professionals will never see a penny from their work being used as the ore from which the companies will refine billions.

    Some people are standing up for their rights. Publishing companies such as the New York Times and Universal Music, as well as nonprofit organizations like the Independent Society of Musicians, are all fighting for creatives to be paid. Publishers, in particular, are not always aligned with writers and musicians, but at least they’re trying to force the AI giants to pay something.

    At least part of the US government is also standing up for copyright rights. “Making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries,” the US Copyright Office declared in a recent report.

    Personally, I’d use a lot stronger language, but it’s something.

    Of course, President Donald Trump immediately fired the head of the Copyright Office. Her days were probably numbered anyway. Earlier, the office had declared that copyright should only be granted to AI-assisted works based on the “centrality of human creativity.”

    “Wait, wait,” I hear you saying, “why would that tick off Trump’s AI allies?” Oh, you see, while the AI giants want to use your work for free, they want their “works” protected.

    Remember the Chinese AI company DeepSeek, which scared the pants off OpenAI for a while? OpenAI claimed DeepSeek had “inappropriately distilled” its models. “We take aggressive, proactive countermeasures to protect our technology and will continue working closely with the US government to protect the most capable models being built here,” the company said.

    In short, OpenAI wants to have it both ways. The company wants to be free to Hoover down your work, but you can’t take its “creations.”

    OpenAI recently spelled out its preferred policy in a fawning letter to Trump’s Office of Science and Technology. In it, OpenAI says, “we must ensure that people have freedom of intelligence, by which we mean the freedom to access and benefit from AGI [artificial general intelligence], protected from both autocratic powers that would take people’s freedoms away, and layers of laws and bureaucracy that would prevent our realizing them.”

    For laws and bureaucracy, read copyright and the right of people to be paid for their intellectual work.

    As with so many things in US government these days, we won’t be able to depend on government agencies to protect writers, artists, and musicians, with Trump firing any and all who disagree with him. Instead, we must rely on court rulings.

    In some cases, such as Thomson Reuters v. ROSS Intelligence, courts applying the actual legal definitions of copyright and fair use have found that wholesale copying of copyrighted material for AI training can constitute infringement, especially when it harms the market for the original works and is not sufficiently transformative. Hopefully, other lawsuits against companies like Meta, OpenAI, and Anthropic will show that their AI outputs are unlawfully competing with original works.

    As lawsuits proceed and new regulations are debated, the relationship between AI and copyright law will continue to evolve. If it comes out the right way, AI can still be useful and profitable, even as the AI companies do their damnedest to avoid paying anyone for the work their large language models (LLMs) run on.

    If the courts can’t hold the wall for true creativity, we may wind up drowning in pale imitations of it, with each successive wave farther from the real thing.

    This potential watering down of creativity is a lot like the erosion of independent thinking that science fiction writer Neal Stephenson noted recently: “I follow conversations among professional educators who all report the same phenomenon, which is that their students use ChatGPT for everything, and in consequence learn nothing. We may end up with at least one generation of people who are like the Eloi in H.G. Wells’s The Time Machine, in that they are mental weaklings utterly dependent on technologies that they don’t understand and that they could never rebuild from scratch were they to break down.”
    #copyright