• How much does your road weigh?

    The way roads are used, with ever larger and heavier vehicles, has dramatic consequences for the environment – and electric cars are not the answer
    Today, there is an average of 37 tonnes of road per inhabitant of the planet. The weight of the road network alone accounts for a third of all construction worldwide, and has grown exponentially in the 20th century. There is 10 times more bitumen, in mass, than there are living animals. Yet growth in the mass of roads does not automatically correspond to population growth, or translate into increased length of road networks. In wealthier countries, the number of metres of road per inhabitant has actually fallen over the last century. In the United States, for instance, between 1905 and 2015 the length of the network increased by a factor of 1.75 and the population by a factor of 3.8, compared with 21 for the mass of roads. Roads have become wider and, above all, much thicker. To understand the evolution of these parameters, and their environmental impact, it is helpful to trace the different stages in the life of the motorway. 
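    A quick check of the US figures cited above shows where the growth sits: divide the 21-fold growth in mass by the 1.75-fold growth in length and the average metre of road has become roughly 12 times heavier; divide it by the 3.8-fold growth in population and the mass of road per inhabitant has still grown more than fivefold. The snippet below simply reruns that arithmetic on the factors quoted in the text; it introduces no data beyond them.

    # Back-of-the-envelope check using only the US growth factors cited above (1905-2015).
    length_growth = 1.75       # growth factor of the network's length
    population_growth = 3.8    # growth factor of the population
    mass_growth = 21.0         # growth factor of the total mass of roads

    # How much heavier the average metre of road became (wider and thicker roads).
    mass_per_metre_growth = mass_growth / length_growth            # ~12x
    # How much road mass each inhabitant 'carries', even as metres per person fell.
    mass_per_capita_growth = mass_growth / population_growth       # ~5.5x
    # Metres of road per inhabitant: confirms the decline mentioned in the text.
    length_per_capita_growth = length_growth / population_growth   # ~0.46x

    print(f"mass per metre of road grew ~{mass_per_metre_growth:.0f}x")
    print(f"mass of road per inhabitant grew ~{mass_per_capita_growth:.1f}x")
    print(f"metres of road per inhabitant fell to ~{length_per_capita_growth:.2f} of the 1905 level")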
    Until the early 20th century, roads were used for various modes of transport, including horses, bicycles, pedestrians and trams; as a result of the construction of railways, road traffic even declined in some European countries in the 19th century. The main novelty brought by the motorway was that it would be reserved for motorised traffic. In several languages, the word itself – autostrada, autobahn, autoroute or motorway – speaks of this exclusivity. 
    Roman roads varied from simple corduroy roads, made by placing logs perpendicular to the direction of the road over a low or swampy area, to paved roads, as this engraving from Jean Rondelet’s 19th‑century Traité Théorique et Pratique de l’Art de Bâtir shows. Using deep roadbeds of tamped rubble as an underlying layer to ensure that they kept dry, major roads were often stone-paved, metalled, cambered for drainage and flanked by footpaths, bridleways and drainage ditches

    Like any major piece of infrastructure, motorways became the subject of ideological discourse, long before any shovel hit the ground; politicians underlined their role in the service of the nation, how they would contribute to progress, development, the economy, modernity and even civilisation. The inauguration ceremony for the construction of the first autostrada took place in March 1923, presided over by Italy’s prime minister Benito Mussolini. The second major motorway programme was announced by the Nazi government in 1933, with a national network planned to be around 7,000 kilometres long. In his 2017 book Driving Modernity: Technology, Experts, Politics, and Fascist Motorways, 1922–1943, historian Massimo Moraglio shows how both programmes were used as propaganda tools by the regimes, most notably at the international road congresses in Milan in 1926 and Munich in 1934. In the European postwar era, the notion of the ‘civilising’ effect of roads persevered. In 1962, Valéry Giscard d’Estaing, then‑secretary of state for finances and later president of France, argued that expanded motorways would bring ‘progress, activity and life’.
    This discourse soon butted up against the realities of how motorways affected individuals and communities. In his 2011 book Fighting Traffic: The Dawn of the Motor Age in the American City, Peter D Norton explores the history of resistance to the imposition of motorised traffic in North American cities. Until the 1920s, there was a perception that cars were dangerous newcomers, and that other street and road uses – especially walking – were more legitimate. Cars were associated with speed and danger; restrictions on motorists, especially speed limits, were routine. 
    Built between 1962 and 1970, the Westway was London’s first urban motorway, elevated above the city to use less land. Construction workers are seen stressing the longitudinal soffit cables inside the box section of the deck units to achieve the bearing capacity necessary to carry the weight of traffic
    Credit: Heritage Image Partnership Ltd / Alamy
    To gain domination over cities, motor vehicles had to win priority over other street uses. Rather than restricting the flow of vehicles to minimise the risk of road accidents, a specific infrastructure was dedicated to them: both inner‑city roads and motorways. Cutting through the landscape, the motorway had, by definition, to be inaccessible by any other means of transport than motorised vehicle. To guarantee the fluidity of traffic, the construction of imposing bridges, tunnels and interchanges is necessary, particularly at junctions with other roads, railways or canals. This prioritisation of one type of user inevitably impacts journeys for others; as space is fragmented, short journeys are lengthened for those trying to navigate space by foot or bicycle. 
    Enabling cars to drive at around 110–140km/h, as modern motorways do, directly impacts their design, with major environmental effects: the gradient has to be gentle (4 per cent), the curves long (1.5km in radius) and the lanes wide, to allow vehicles to overtake each other safely. As much terrain around the world is not naturally suited to these requirements, the earthworks are considerable: in France, the construction of a metre of highway requires moving some 100m³ of earth, and when the soil is soft, full of clay or peat, it is made firmer with hydraulic lime and cement before the highway’s first sub‑layers are laid. This material cost reinforces the criticisms levelled in the 1960s, by the likes of Jane Jacobs and Lewis Mumford, at urban planning that prioritised the personal motor vehicle.
    When roads are widened to accommodate more traffic, buildings are sliced and demolished, as happened in Dhaka’s Bhasantek Road in 2021
    Credit: Dhaka Tribune
    Once built, the motorway is never inert. Motorway projects today generally anticipate future expansion (from 2×2 to 2×3 to 2×4 lanes), and include a large median strip of 12m between the lanes, with a view to adding new ones. Increases in speed and vehicle sizes have also translated into wider lanes, from 2.5m in 1945 to 3.5m today. The average contemporary motorway footprint is therefore 100 square metres per linear metre. Indeed, although the construction of a road is supposed to reduce congestion, it also generates new traffic and, therefore, new congestion. This is the principle of ‘induced traffic’: the provision of extra road capacity results in a greater volume of traffic.
    The Katy Freeway in Texas famously illustrates this dynamic. Built as a regular six‑lane highway in the 1960s, it was called the second worst bottleneck in the nation by 2004, wasting 25 million hours a year of commuter time. In 2011, the state of Texas invested US$2.8 billion to fix this problem, widening the road to a staggering total of 26 lanes. By 2014, morning and afternoon traffic had both increased again. This vicious circle of induced traffic has been empirically demonstrated in most countries: traffic has continued to increase and congestion remains unresolved, leading to ever-increasing emissions. In the EU, transport is the only sector where greenhouse gas emissions have increased in the past three decades, rising 33.5 per cent between 1990 and 2019. Transport accounts for around a fifth of global CO₂ emissions today, with three quarters of this figure linked to road transport.
    Houston’s Katy Freeway is one of the world’s widest motorways, with 26 lanes. Its last expansion, in 2008, was initially hailed as a success, but within five years, peak travel times were longer than before the expansion – a direct illustration of the principle of induced traffic
    Credit: Smiley N Pool / Houston Chronicle / Getty
    Like other large transport infrastructures such as ports and airports, motorways are designed for the largest and heaviest vehicles. Engineers, road administrations and politicians have known since the 1950s that one truck does as much damage as millions of cars: the wear a vehicle inflicts on the roadway rises steeply with its weight (roughly with the fourth power of its axle load, in the commonly cited rule of thumb) – an online ‘road damage calculator’ allows you to compare the damage done to the road by different types of vehicles. Over the years, heavier and heavier trucks have been authorised to operate on roads: from 8‑tonne trucks in 1945 to 44 tonnes nowadays. The European Parliament adopted a revised directive on 12 March 2024 authorising mega‑trucks to travel on European roads; they can measure up to 25 metres and weigh up to 60 tonnes, compared with the previous limits of 18.75 metres and 44 tonnes. This is a political and economic choice with considerable material effects on the thickness of pavements, the rigidity of sub‑bases and the consolidation of soil and subsoil with lime and cement. Altogether, motorways are 10 times thicker than large roads from the late 19th century. In France, it takes an average of 30 tonnes of sand and aggregate to build one linear metre of motorway, 100 times more than cement and bitumen. 
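    The ‘road damage calculator’ mentioned above is not specified; as a minimal sketch of the underlying idea, the snippet below applies the commonly cited fourth-power rule of thumb (relative wear scales with axle load to the fourth power) to compare a passenger car with a fully loaded 44-tonne lorry. The vehicle weights and axle counts are illustrative assumptions, not figures from the article.

    # Rough road-wear comparison using the fourth-power rule of thumb:
    # relative damage ~ (axle load / reference axle load) ** 4, summed over all axles.
    # Vehicle weights and axle counts below are illustrative assumptions.

    REFERENCE_AXLE_TONNES = 10.0   # a commonly used 'standard axle' load

    def relative_damage(total_tonnes: float, axles: int) -> float:
        """Wear relative to one pass of a 10-tonne standard axle,
        assuming the load is spread evenly over the axles."""
        axle_load = total_tonnes / axles
        return axles * (axle_load / REFERENCE_AXLE_TONNES) ** 4

    car = relative_damage(total_tonnes=1.5, axles=2)      # typical passenger car
    lorry = relative_damage(total_tonnes=44.0, axles=5)   # fully loaded articulated lorry

    print(f"car:   {car:.2e} standard-axle passes")
    print(f"lorry: {lorry:.2f} standard-axle passes")
    print(f"one lorry pass wears the road like ~{lorry / car:,.0f} car passes")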
    The material history of road networks is a history of quarrying and environmental damage. The traces of roads can also be seen in rivers emptied of their sediment, the notches of quarries in the hills and the furrows of dredgers extracting sand from the seabed. This material extraction, arguably the most significant in human history, has dramatic ecological consequences for rivers, groundwater tables, the rise of sea levels and saltwater in farmlands, as well as biodiversity. As sand is ubiquitous and very cheap, the history of roads is also the history of a local extractivism and environmental conflicts around the world. 
    Shoving and rutting are the bulging and rippling of the pavement surface. Once built, roads require extensive maintenance – the heavier the vehicles, the quicker the damage. From pothole repair to the full resurfacing of a road, maintenance contributes to keeping road users safe
    Credit: Yakov Oskanov / Alamy
    Once roads are built and extended, they need to be maintained to support the circulation of lorries and, by extension, commodities. This stage is becoming increasingly important as rail freight, which used to be significant in countries such as France and the UK, is declining, now accounting for no more than 10 per cent of the transport of commodities. Engineers might judge that a motorway is designed to last 20 years or so, but heavy traffic significantly shortens that lifespan. The same applies to the thousands of motorway bridges: in the UK, nearly half of the 9,000 highway bridges are in poor condition; in France, 7 per cent of the 12,000 bridges are in danger of collapsing, as Genoa’s Morandi bridge did in 2018. If only light vehicles drove on them, these structures would last much longer.
    This puts into perspective governments’ insistence on ‘greening’ the transport sector by targeting CO₂ emissions alone, typically by promoting the use of electric vehicles (EVs). Public policies prioritising EVs do nothing to change the mass of roads or the issue of their maintenance – even if lorries were to run on clean air, massive quarrying would still be necessary. A similar argument plays out with regard to canals and ports, which have been constantly widened and deepened for decades to accommodate ever-larger oil tankers and container ships. The simple operation of these infrastructures, dimensioned for the circulation of commodities rather than humans, requires permanent dredging of large volumes. The environmental problem of large transport infrastructure goes beyond the type of energy used: it is, at its root, free and globalised trade.
    ‘The material life cycle of motorways is relentless: constructing, maintaining, widening, thickening, repairing’
    As both a material and ideological object, the motorway fixes certain political choices in the landscape. Millions of kilometres of road continue to be asphalted, widened and thickened around the world to favour cars and lorries. In France, more than 80 per cent of today’s sand and aggregate extraction is used for civil engineering works – the rest goes to buildings. Even if no more buildings, roads or other infrastructures were to be built, phenomenal quantities of sand and aggregates would still need to be extracted in order to maintain existing road networks. The material life cycle of motorways is relentless: constructing, maintaining, widening, thickening, repairing, adding new structures such as wildlife crossings, more maintaining. 
    Governments invariably present rising traffic levels as positive for a country’s economy and development. As Christopher Wells shows in his 2014 book Car Country: An Environmental History, car use becomes necessary in an environment where everything has been planned for the car, from the location of public services and supermarkets to residential and office areas. Similarly, when an entire economy is based on globalised trade and just‑in‑time logistics (to the point that many service economies could not produce their own personal protective equipment in the midst of a pandemic), the lorry and the container ship become vital. 
    The final stage in the life of a piece of motorway infrastructure is dismantling. Like the other stages, this one is not a natural outcome but the fruit of political choices – which should be democratic – regarding how we wish to use existing roads. Dismantling, which is essential if we are to put an end to the global extractivism of sand and aggregates, does not mean destruction: if bicycles and pedestrians were to use them instead, maintenance would be minimal. This final stage requires a paradigm shift away from the eternal adaptation to increasing traffic. Replacing cars and lorries with public transport and rail freight would be a first step. But above all, a different political and spatial organisation of economic activities is necessary, and ultimately, an end to globalised, just-in-time trade and logistics.
    In 1978, a row of cars parked at a shopping centre in Connecticut was buried under a thick layer of gooey asphalt. The Ghost Parking Lot, one of the first projects by James Wines’ practice SITE, became a playground for skateboarders until it was removed in 2003. Images of this lumpy landscape serve as allegories of the damage caused by reliance on the automobile
    Credit: Project by SITE

    Lead image: Some road damage is beyond repair, as when a landslide caused a large chunk of the Gothenburg–Oslo motorway to collapse in 2023. Such dramatic events remind us of both the fragility of these seemingly robust infrastructures, and the damage that extensive construction does to the planet. Credit: Hanna Brunlöf Windell / TT / Shutterstock

    2025-06-03
    Reuben J Brown

  • HOLLYWOOD VFX TOOLS FOR SPACE EXPLORATION

    By CHRIS McGOWAN

    This image of Jupiter from NASA’s James Webb Space Telescope’s NIRCam shows stunning details of the majestic planet in infrared light.
    Special effects have been used for decades to depict space exploration, from visits to planets and moons to zero gravity and spaceships – one need only think of the landmark 2001: A Space Odyssey. Since that era, visual effects have increasingly grown in realism and importance. VFX have been used for entertainment and for scientific purposes, outreach to the public and astronaut training in virtual reality. Compelling images and videos can bring data to life. NASA’s Scientific Visualization Studio produces visualizations, animations and images to help scientists tell stories of their research and make science more approachable and engaging.
    A.J. Christensen is a senior visualization designer for the NASA Scientific Visualization Studio at the Goddard Space Flight Center in Greenbelt, Maryland. There, he develops data visualization techniques and designs data-driven imagery for scientific analysis and public outreach using Hollywood visual effects tools, according to NASA. SVS visualizations feature datasets from Earth- and space-based instrumentation, scientific supercomputer models and physical statistical distributions that have been analyzed and processed by computational scientists. Christensen’s specialties include working with 3D volumetric data, using the procedural cinematic software Houdini and science topics in Heliophysics, Geophysics and Astrophysics. He previously worked at the National Center for Supercomputing Applications’ Advanced Visualization Lab, where he worked on more than a dozen science documentary full-dome films as well as the IMAX films Hubble 3D and A Beautiful Planet – and he worked at DNEG on the movie Interstellar, which won the 2015 Best Visual Effects Academy Award.

    This global map of CO₂ was created by NASA’s Scientific Visualization Studio using a model called GEOS, short for the Goddard Earth Observing System. GEOS is a high-resolution weather reanalysis model, powered by supercomputers, that is used to represent what was happening in the atmosphere.
    “The NASA Scientific Visualization Studio operates like a small VFX studio that creates animations of scientific data that has been collected or analyzed at NASA. We are one of several groups at NASA that create imagery for public consumption, but we are also a part of the scientific research process, helping scientists understand and share their data through pictures and video.”
    —A.J. Christensen, Senior Visualization Designer, NASA Scientific Visualization Studio
    About his work at NASA SVS, Christensen comments, “The NASA Scientific Visualization Studio operates like a small VFX studio that creates animations of scientific data that has been collected or analyzed at NASA. We are one of several groups at NASA that create imagery for public consumption, but we are also a part of the scientific research process, helping scientists understand and share their data through pictures and video. This past year we were part of NASA’s total eclipse outreach efforts, we participated in all the major earth science and astronomy conferences, we launched a public exhibition at the Smithsonian Museum of Natural History called the Earth Information Center, and we posted hundreds of new visualizations to our publicly accessible website: svs.gsfc.nasa.gov.”

    This is the ‘beauty shot version’ of Perpetual Ocean 2: Western Boundary Currents. The visualization starts with a rotating globe showing ocean currents. The colors used to color the flow in this version were chosen to provide a pleasing look.
    The Gulf Stream and connected currents.
    Venus, our nearby “sister” planet, beckons today as a compelling target for exploration that may connect the objects in our own solar system to those discovered around nearby stars.
    WORKING WITH DATA
    While Christensen is interpreting the data from active spacecraft and making it usable in different forms, such as for science and outreach, he notes, “It’s not just spacecraft that collect data. NASA maintains or monitors instruments on Earth too – on land, in the oceans and in the air. And to be precise, there are robots wandering around Mars that are collecting data, too.”
    He continues, “Sometimes the data comes to our team as raw telescope imagery, sometimes we get it as a data product that a scientist has already analyzed and extracted meaning from, and sometimes various sensor data is used to drive computational models and we work with the models’ resulting output.”

    Jupiter’s moon Europa may have life in a vast ocean beneath its icy surface.
    HOUDINI AND OTHER TOOLS
    “Data visualization means a lot of different things to different people, but many people on our team interpret it as a form of filmmaking,” Christensen says. “We are very inspired by the approach to visual storytelling that Hollywood uses, and we use tools that are standard for Hollywood VFX. Many professionals in our area – the visualization of 3D scientific data – were previously using other animation tools but have discovered that Houdini is the most capable of understanding and manipulating unusual data, so there has been major movement toward Houdini over the past decade.”

    Satellite imagery from NASA’s Solar Dynamics Observatory shows the Sun in ultraviolet light colorized in light brown. Seen in ultraviolet light, the dark patches on the Sun are known as coronal holes and are regions where fast solar wind gushes out into space.
    Christensen explains, “We have always worked with scientific software as well – sometimes there’s only one software tool in existence to interpret a particular kind of scientific data. More often than not, scientific software does not have a GUI, so we’ve had to become proficient at learning new coding environments very quickly. IDL and Python are the generic data manipulation environments we use when something is too complicated or oversized for Houdini, but there are lots of alternatives out there. Typically, we use these tools to get the data into a format that Houdini can interpret, and then we use Houdini to do our shading, lighting and camera design, and seamlessly blend different datasets together.”
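    As a minimal illustration of that last step – reshaping scientific data into something Houdini reads natively – the sketch below loads one gridded variable from a NetCDF file in Python and writes it out as an OpenVDB volume. The file name, variable name and the availability of the netCDF4 and pyopenvdb packages are assumptions made for the example, not details from the interview, and a real SVS pipeline would be far more involved.

    # Minimal sketch: convert one 3D variable from a NetCDF file into an OpenVDB
    # volume that Houdini can import with a File SOP. Paths, variable names and
    # the packages used are assumptions for illustration only.
    import numpy as np
    from netCDF4 import Dataset     # assumed available
    import pyopenvdb as vdb         # assumed available

    ds = Dataset("model_output.nc")                # hypothetical model output file
    field = np.asarray(ds.variables["q"][0])       # hypothetical 3D variable, first timestep
    field = np.nan_to_num(field).astype(np.float32)

    grid = vdb.FloatGrid()
    grid.name = "density"                          # volume name Houdini will show
    grid.copyFromArray(field, tolerance=0.0)       # voxels at/below tolerance stay inactive
    vdb.write("model_output.vdb", grids=[grid])    # open this .vdb in Houdini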

    While cruising around Saturn in early October 2004, Cassini captured a series of images that have been composed into this large global natural color view of Saturn and its rings. This grand mosaic consists of 126 images acquired in a tile-like fashion, covering one end of Saturn’s rings to the other and the entire planet in between.
    The black hole Gargantua and the surrounding accretion disc from the 2014 movie Interstellar.
    Another visualization of the black hole Gargantua.
    INTERSTELLAR & GARGANTUA
    Christensen recalls working for DNEG on Interstellar. “When I first started at DNEG, they asked me to work on the giant waves on Miller’s ocean planet. About a week in, my manager took me into the hall and said, ‘I was looking at your reel and saw all this astronomy stuff. We’re working on another sequence with an accretion disk around a black hole that I’m wondering if we should put you on.’ And I said, ‘Oh yeah, I’ve done lots of accretion disks.’ So, for the rest of my time on the show, I was working on the black hole team.”
    He adds, “There are a lot of people in my community that would be hesitant to label any big-budget movie sequence as a scientific visualization. The typical assumption is that for a Hollywood movie, no one cares about accuracy as long as it looks good. Guardians of the Galaxy makes it seem like space is positively littered with nebulae, and Star Wars makes it seem like asteroids travel in herds. But the black hole Gargantua in Interstellar is a good case for being called a visualization. The imagery you see in the movie is the direct result of a collaboration with an expert scientist, Dr. Kip Thorne, working with the DNEG research team using the actual Einstein equations that describe the gravity around a black hole.”

    Thorne is a Nobel Prize-winning theoretical physicist who taught at Caltech for many years. He has reached wide audiences with his books and presentations on black holes, time travel and wormholes on PBS and BBC shows. Christensen comments, “You can make the argument that some of the complexity around what a black hole actually looks like was discarded for the film, and they admit as much in the research paper that was published after the movie came out. But our team at NASA does that same thing. There is no such thing as an objectively ‘true’ scientific image – you always have to make aesthetic decisions around whether the image tells the science story, and often it makes more sense to omit information to clarify what’s important. Ultimately, Gargantua taught a whole lot of people something new about science, and that’s what a good scientific visualization aims to do.”
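    The film’s renderer traced light along full relativistic paths through curved spacetime; as a loose back-of-the-envelope illustration of the kind of quantity involved – emphatically not the production’s method – the sketch below evaluates two textbook results for a non-spinning black hole: the photon-sphere radius and the weak-field light-deflection angle. The mass figure is the one commonly quoted for Gargantua and is treated here as an assumption.

    # Back-of-the-envelope relativity, not the film's renderer: photon-sphere radius
    # and weak-field light deflection for a non-spinning (Schwarzschild) black hole.
    # The 100-million-solar-mass figure is an assumption for illustration.
    import math

    G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8             # speed of light, m/s
    M_SUN = 1.989e30        # solar mass, kg
    M = 1e8 * M_SUN         # assumed black-hole mass

    r_s = 2 * G * M / c**2               # Schwarzschild radius
    r_photon = 1.5 * r_s                 # photon sphere: radius where light can orbit
    b = 50 * r_s                         # impact parameter of a passing light ray
    deflection = 4 * G * M / (c**2 * b)  # weak-field deflection angle, radians

    print(f"Schwarzschild radius: {r_s / 1e9:.0f} million km")
    print(f"Photon sphere radius: {r_photon / 1e9:.0f} million km")
    print(f"Ray passing at 50 r_s is bent by ~{math.degrees(deflection):.1f} degrees")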

    The SVS produces an annual visualization of the Moon’s phase and libration comprising 8,760 hourly renderings of its precise size, orientation and illumination.
    FURTHER CHALLENGES
    The sheer size of the data often encountered by Christensen and his peers is a challenge. “I’m currently working with a dataset that is 400GB per timestep. It’s so big that I don’t even want to move it from one file server to another. So, then I have to make decisions about which data attributes to keep and which to discard, whether there’s a region of the data that I can cull or downsample, and I have to experiment with data compression schemes that might require me to entirely re-design the pipeline I’m using for Houdini. Of course, if I get rid of too much information, it becomes very resource-intensive to recompute everything, but if I don’t get rid of enough, then my design process becomes agonizingly slow.”
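    A minimal sketch of the kind of triage he describes: keep only the attribute you need, downsample the grid by striding, and store it at lower precision before bringing it into Houdini. The array shape, file names and the factor-of-4 stride are illustrative assumptions rather than details from the interview.

    # Minimal sketch of pre-Houdini data triage: stride-sample one volume attribute
    # and store it at reduced precision. Shapes, names and the stride factor are
    # illustrative assumptions.
    import numpy as np

    # Hypothetical single-precision volume: a 1024^3 grid is ~4 GiB on disk.
    volume = np.memmap("timestep_0042.raw", dtype=np.float32,
                       mode="r", shape=(1024, 1024, 1024))

    stride = 4                                    # keep every 4th voxel along each axis
    small = np.array(volume[::stride, ::stride, ::stride], dtype=np.float16)

    np.save("timestep_0042_small.npy", small)     # ~1/128 of the original bytes

    reduction = volume.nbytes / small.nbytes
    print(f"{volume.nbytes / 2**30:.1f} GiB -> {small.nbytes / 2**30:.3f} GiB "
          f"(~{reduction:.0f}x smaller)")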
    SVS also works closely with its NASA partner groups Conceptual Image Lab and Goddard Media Studios to publish a diverse array of content. Conceptual Image Lab focuses more on the artistic side of things – producing high-fidelity renders using film animation and visual design techniques, according to NASA. Where the SVS primarily focuses on making data-based visualizations, CIL puts more emphasis on conceptual visualizations – producing animations featuring NASA spacecraft, planetary observations and simulations, according to NASA. Goddard Media Studios, on the other hand, is more focused towards public outreach – producing interviews, TV programs and documentaries. GMS continues to be the main producers behind NASA TV, and as such, much of their content is aimed towards the general public.

    An impact crater on the moon.
    Image of Mars showing a partly shadowed Olympus Mons toward the upper left of the image.
    Mars. Hellas Basin can be seen in the lower right portion of the image.
    Mars slightly tilted to show the Martian North Pole.
    Christensen notes, “One of the more unique challenges in this field is one of bringing people from very different backgrounds to agree on a common outcome. I work on teams with scientists, communicators and technologists, and we all have different communities we’re trying to satisfy. For instance, communicators are generally trying to simplify animations so their learning goal is clear, but scientists will insist that we add text and annotations on top of the video to eliminate ambiguity and avoid misinterpretations. Often, the technologist will have to say we can’t zoom in or look at the data in a certain way because it will show the data boundaries or data resolution limits. Every shot is a negotiation, but in trying to compromise, we often push the boundaries of what has been done before, which is exciting.”
    HOLLYWOOD VFX TOOLS FOR SPACE EXPLORATION
    By CHRIS McGOWAN

    This image of Jupiter from NASA’s James Webb Space Telescope’s NIRCam (Near-Infrared Camera) shows stunning details of the majestic planet in infrared light. (Image courtesy of NASA, ESA and CSA)

    Special effects have been used for decades to depict space exploration, from visits to planets and moons to zero gravity and spaceships – one need only think of the landmark 2001: A Space Odyssey (1968). Since that era, visual effects have increasingly grown in realism and importance. VFX have been used for entertainment and for scientific purposes, outreach to the public and astronaut training in virtual reality. Compelling images and videos can bring data to life. NASA’s Scientific Visualization Studio (SVS) produces visualizations, animations and images to help scientists tell stories of their research and make science more approachable and engaging.

    A.J. Christensen is a senior visualization designer for the NASA Scientific Visualization Studio at the Goddard Space Flight Center in Greenbelt, Maryland. There, he develops data visualization techniques and designs data-driven imagery for scientific analysis and public outreach using Hollywood visual effects tools, according to NASA. SVS visualizations feature datasets from Earth- and space-based instrumentation, scientific supercomputer models and physical statistical distributions that have been analyzed and processed by computational scientists. Christensen’s specialties include working with 3D volumetric data, using the procedural cinematic software Houdini and science topics in Heliophysics, Geophysics and Astrophysics. He previously worked at the National Center for Supercomputing Applications’ Advanced Visualization Lab, where he worked on more than a dozen science documentary full-dome films as well as the IMAX films Hubble 3D and A Beautiful Planet – and he worked at DNEG on the movie Interstellar, which won the 2015 Best Visual Effects Academy Award.

    This global map of CO2 was created by NASA’s Scientific Visualization Studio using a model called GEOS, short for the Goddard Earth Observing System. GEOS is a high-resolution weather reanalysis model, powered by supercomputers, that is used to represent what was happening in the atmosphere. (Image courtesy of NASA/Goddard Space Flight Center Scientific Visualization Studio)

    About his work at NASA SVS, Christensen comments, “The NASA Scientific Visualization Studio operates like a small VFX studio that creates animations of scientific data that has been collected or analyzed at NASA. We are one of several groups at NASA that create imagery for public consumption, but we are also a part of the scientific research process, helping scientists understand and share their data through pictures and video.
    This past year we were part of NASA’s total eclipse outreach efforts, we participated in all the major earth science and astronomy conferences, we launched a public exhibition at the Smithsonian Museum of Natural History called the Earth Information Center, and we posted hundreds of new visualizations to our publicly accessible website: svs.gsfc.nasa.gov.”

    This is the ‘beauty shot version’ of Perpetual Ocean 2: Western Boundary Currents. The visualization starts with a rotating globe showing ocean currents. The colors used to color the flow in this version were chosen to provide a pleasing look. (Image courtesy of NASA/Goddard Space Flight Center Scientific Visualization Studio)

    The Gulf Stream and connected currents. (Image courtesy of NASA/Goddard Space Flight Center Scientific Visualization Studio)

    Venus, our nearby “sister” planet, beckons today as a compelling target for exploration that may connect the objects in our own solar system to those discovered around nearby stars. (Image courtesy of NASA’s Goddard Space Flight Center)

    WORKING WITH DATA

    While Christensen is interpreting the data from active spacecraft and making it usable in different forms, such as for science and outreach, he notes, “It’s not just spacecraft that collect data. NASA maintains or monitors instruments on Earth too – on land, in the oceans and in the air. And to be precise, there are robots wandering around Mars that are collecting data, too.” He continues, “Sometimes the data comes to our team as raw telescope imagery, sometimes we get it as a data product that a scientist has already analyzed and extracted meaning from, and sometimes various sensor data is used to drive computational models and we work with the models’ resulting output.”

    Jupiter’s moon Europa may have life in a vast ocean beneath its icy surface. (Image courtesy of NASA/Goddard Space Flight Center Scientific Visualization Studio)

    HOUDINI AND OTHER TOOLS

    “Data visualization means a lot of different things to different people, but many people on our team interpret it as a form of filmmaking,” Christensen says. “We are very inspired by the approach to visual storytelling that Hollywood uses, and we use tools that are standard for Hollywood VFX. Many professionals in our area – the visualization of 3D scientific data – were previously using other animation tools but have discovered that Houdini is the most capable of understanding and manipulating unusual data, so there has been major movement toward Houdini over the past decade.”

    Satellite imagery from NASA’s Solar Dynamics Observatory (SDO) shows the Sun in ultraviolet light colorized in light brown. Seen in ultraviolet light, the dark patches on the Sun are known as coronal holes and are regions where fast solar wind gushes out into space. (Image courtesy of NASA/Goddard Space Flight Center Scientific Visualization Studio)

    Christensen explains, “We have always worked with scientific software as well – sometimes there’s only one software tool in existence to interpret a particular kind of scientific data. More often than not, scientific software does not have a GUI, so we’ve had to become proficient at learning new coding environments very quickly. IDL and Python are the generic data manipulation environments we use when something is too complicated or oversized for Houdini, but there are lots of alternatives out there. Typically, we use these tools to get the data into a format that Houdini can interpret, and then we use Houdini to do our shading, lighting and camera design, and seamlessly blend different datasets together.”
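    As a concrete illustration of that last step, here is a minimal, hypothetical sketch of the kind of conversion Christensen describes: it assumes a gridded scalar field already saved as a NumPy array (the file name and variable are invented for the example) and uses the open-source OpenVDB Python bindings to write a volume file that Houdini can load. It is not SVS production code, just an outline of the general approach.

```python
# Hypothetical example: convert a gridded scalar dataset (e.g. a density or
# CO2 concentration field) into an OpenVDB volume that Houdini can import.
# Assumes numpy and the pyopenvdb bindings are installed.
import numpy as np
import pyopenvdb as vdb

# Load a 3D scalar field; "density.npy" and its layout are illustrative only.
density = np.load("density.npy").astype(np.float32)   # shape (nx, ny, nz)

# Normalize so shading ranges are predictable once the volume is in Houdini.
density = (density - density.min()) / (np.ptp(density) + 1e-12)

# Copy the dense array into a sparse VDB grid, skipping near-zero voxels.
grid = vdb.FloatGrid()
grid.copyFromArray(density, tolerance=1e-6)
grid.name = "density"
grid.transform = vdb.createLinearTransform(voxelSize=0.5)  # world units per voxel

# Write the .vdb file; inside Houdini it can be read with a File SOP, then
# shaded, lit and framed like any other volume.
vdb.write("density.vdb", grids=[grid])
```

    The same pattern generalizes: whatever scientific format the data arrives in, the Python (or IDL) stage reduces it to arrays, and the VDB or another Houdini-readable container carries it into the cinematic toolchain.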
    While cruising around Saturn in early October 2004, Cassini captured a series of images that have been composed into this large global natural color view of Saturn and its rings. This grand mosaic consists of 126 images acquired in a tile-like fashion, covering one end of Saturn’s rings to the other and the entire planet in between. (Image courtesy of NASA/JPL/Space Science Institute)

    The black hole Gargantua and the surrounding accretion disc from the 2014 movie Interstellar. (Image courtesy of DNEG and Paramount Pictures)

    Another visualization of the black hole Gargantua. (Image courtesy of DNEG and Paramount Pictures)

    INTERSTELLAR & GARGANTUA

    Christensen recalls working for DNEG on Interstellar (2014). “When I first started at DNEG, they asked me to work on the giant waves on Miller’s ocean planet [in the film]. About a week in, my manager took me into the hall and said, ‘I was looking at your reel and saw all this astronomy stuff. We’re working on another sequence with an accretion disk around a black hole that I’m wondering if we should put you on.’ And I said, ‘Oh yeah, I’ve done lots of accretion disks.’ So, for the rest of my time on the show, I was working on the black hole team.”

    He adds, “There are a lot of people in my community that would be hesitant to label any big-budget movie sequence as a scientific visualization. The typical assumption is that for a Hollywood movie, no one cares about accuracy as long as it looks good. Guardians of the Galaxy makes it seem like space is positively littered with nebulae, and Star Wars makes it seem like asteroids travel in herds. But the black hole Gargantua in Interstellar is a good case for being called a visualization. The imagery you see in the movie is the direct result of a collaboration with an expert scientist, Dr. Kip Thorne, working with the DNEG research team using the actual Einstein equations that describe the gravity around a black hole.”

    Thorne is a Nobel Prize-winning theoretical physicist who taught at Caltech for many years. He has reached wide audiences with his books and presentations on black holes, time travel and wormholes on PBS and BBC shows.

    Christensen comments, “You can make the argument that some of the complexity around what a black hole actually looks like was discarded for the film, and they admit as much in the research paper that was published after the movie came out. But our team at NASA does that same thing. There is no such thing as an objectively ‘true’ scientific image – you always have to make aesthetic decisions around whether the image tells the science story, and often it makes more sense to omit information to clarify what’s important. Ultimately, Gargantua taught a whole lot of people something new about science, and that’s what a good scientific visualization aims to do.”

    The SVS produces an annual visualization of the Moon’s phase and libration comprising 8,760 hourly renderings of its precise size, orientation and illumination. (Image courtesy of NASA/Goddard Space Flight Center Scientific Visualization Studio)

    FURTHER CHALLENGES

    The sheer size of the data often encountered by Christensen and his peers is a challenge. “I’m currently working with a dataset that is 400GB per timestep. It’s so big that I don’t even want to move it from one file server to another. So, then I have to make decisions about which data attributes to keep and which to discard, whether there’s a region of the data that I can cull or downsample, and I have to experiment with data compression schemes that might require me to entirely re-design the pipeline I’m using for Houdini. Of course, if I get rid of too much information, it becomes very resource-intensive to recompute everything, but if I don’t get rid of enough, then my design process becomes agonizingly slow.”
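    To make that trade-off concrete, here is a small, hypothetical sketch of the kind of reduction pass described above: it assumes one oversized timestep already stored as named NumPy arrays and shows attribute selection, spatial cropping and block-averaged downsampling. The attribute names, shapes and factors are illustrative, not the SVS pipeline.

```python
# Hypothetical data-reduction pass for one oversized simulation timestep.
# Assumes the timestep is a set of named NumPy arrays on disk; names are invented.
import numpy as np

def downsample(field: np.ndarray, factor: int) -> np.ndarray:
    """Block-average a 3D array by an integer factor along each axis."""
    nx, ny, nz = (s - s % factor for s in field.shape)      # trim ragged edges
    trimmed = field[:nx, :ny, :nz]
    blocks = trimmed.reshape(nx // factor, factor,
                             ny // factor, factor,
                             nz // factor, factor)
    return blocks.mean(axis=(1, 3, 5))

timestep = {name: np.load(f"{name}.npy")
            for name in ("density", "temperature", "velocity_x")}

# 1. Keep only the attributes the shot actually needs.
kept = {k: v for k, v in timestep.items() if k in ("density", "temperature")}

# 2. Crop to the region of interest, then downsample 4x per axis (~64x fewer voxels).
reduced = {k: downsample(v[100:900, 100:900, :512], factor=4).astype(np.float32)
           for k, v in kept.items()}

# 3. Store compressed; this smaller product is what gets handed on to Houdini.
np.savez_compressed("timestep_reduced.npz", **reduced)
```

    The tension Christensen describes lives in steps 1 and 2: cut too aggressively and the original 400GB timestep has to be reprocessed whenever a shot needs something back; cut too timidly and every iteration in Houdini stays painfully slow.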
    SVS also works closely with its NASA partner groups Conceptual Image Lab (CIL) and Goddard Media Studios (GMS) to publish a diverse array of content. Conceptual Image Lab focuses more on the artistic side of things – producing high-fidelity renders using film animation and visual design techniques, according to NASA. Where the SVS primarily focuses on making data-based visualizations, CIL puts more emphasis on conceptual visualizations – producing animations featuring NASA spacecraft, planetary observations and simulations, according to NASA. Goddard Media Studios, on the other hand, is more focused towards public outreach – producing interviews, TV programs and documentaries. GMS continues to be the main producer behind NASA TV, and as such, much of its content is aimed towards the general public.

    An impact crater on the moon. (Image courtesy of NASA/Goddard Space Flight Center Scientific Visualization Studio)

    Image of Mars showing a partly shadowed Olympus Mons toward the upper left of the image. (Image courtesy of NASA/Goddard Space Flight Center Scientific Visualization Studio)

    Mars. Hellas Basin can be seen in the lower right portion of the image. (Image courtesy of NASA/Goddard Space Flight Center Scientific Visualization Studio)

    Mars slightly tilted to show the Martian North Pole. (Image courtesy of NASA/Goddard Space Flight Center Scientific Visualization Studio)

    Christensen notes, “One of the more unique challenges in this field is bringing people from very different backgrounds to agree on a common outcome. I work on teams with scientists, communicators and technologists, and we all have different communities we’re trying to satisfy. For instance, communicators are generally trying to simplify animations so their learning goal is clear, but scientists will insist that we add text and annotations on top of the video to eliminate ambiguity and avoid misinterpretations. Often, the technologist will have to say we can’t zoom in or look at the data in a certain way because it will show the data boundaries or data resolution limits. Every shot is a negotiation, but in trying to compromise, we often push the boundaries of what has been done before, which is exciting.”
  • There may be a surprising upside to losing coral reefs as oceans warm

    Satellite view of coral reefs in New Caledonia (Shutterstock/BEST-BACKGROUNDS)
    There might be an upside to the loss of coral reefs. Their decline would mean oceans can absorb up to 5 per cent more carbon dioxide by 2100, researchers estimate, slowing the build up of this greenhouse gas in Earth’s atmosphere.
    “It is a beneficial effect if you only care about the concentration of CO2 in the atmosphere,” says Lester Kwiatkowski at Sorbonne University in Paris, France. But the decline of corals will also reduce biodiversity, harm fisheries and leave many coasts more exposed to rising seas, he says.
    How much the world will warm depends mainly on the level of CO2 in the atmosphere. So far the land and oceans have been soaking up around half of the extra CO2 we have emitted. Any factors that increase or decrease these so-called land or ocean carbon sinks could therefore have a significant impact on future warming.
    It is often assumed that corals remove CO2 from seawater as they grow their calcium carbonate skeletons. In fact, the process, also known as calcification, is a net source of CO2.
    “You’re taking inorganic carbon in the ocean, generally in the form of carbonate and bicarbonate ions, turning it into calcium carbonate and that process releases CO2 into the seawater, some of which will be lost to the atmosphere,” says Kwiatkowski.
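    In chemical terms, the calcification reaction Kwiatkowski describes can be written in its standard textbook form (this equation is background, not taken from the paper itself):

    $$ \mathrm{Ca^{2+} + 2\,HCO_3^{-} \;\rightarrow\; CaCO_3 + CO_2 + H_2O} $$

    Two bicarbonate ions are consumed for every unit of calcium carbonate laid down, and one molecule of CO2 is returned to the seawater. That is why slower reef growth, all else being equal, leaves the surface ocean able to take up more carbon from the atmosphere.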

    This means that if reef formation around the world slows or even reverses, less CO2 will be released by reefs and the oceans will be able to absorb more of this greenhouse gas from the atmosphere – a factor not currently included in climate models.
    Observations suggest coral reef calcification is already declining as rising seawater temperatures cause mass coral bleaching and die-offs. The higher level of CO2 is also making oceans more acidic, which can make it harder to build carbonate skeletons and even lead to their dissolution.
    Kwiatkowski and his team took published estimates of how corals will be affected by warming and ocean acidification and used a computer model to work out how this might change the ocean sink in various emission scenarios. They conclude that the oceans could take up between 1 and 5 per cent more carbon by 2100, and up to 13 per cent more by 2300.
    This doesn’t take account of other factors that can cause reef decline such as overfishing and the spread of coral diseases, says Kwiatkowski, so might even be an underestimate.

    On the other hand, the work assumes that corals aren’t able to adapt or acclimatise, says Chris Jury at the University of Hawai’i at Manoa, who wasn’t involved in the study.
    “If the worst-case or even medium-case scenario in this study comes to pass, it means the near-total destruction of coral reefs globally,” says Jury. “I think that with consideration of realistic levels of adaptation and acclimatisation by corals and other reef organisms, the authors might come to different conclusions under a low to moderate level of climate change.”
    If Kwiatkowski’s team is correct, it means that the amount of emitted CO2 that will lead to a given level of warming – the so-called carbon budget – is a little larger than currently thought.
    “I think we would like our budgets to be as accurate as possible, even if we’re blowing through them,” says Kwiatkowski.
    Journal reference: PNAS, DOI: 10.1073/pnas.2501562122
  • Your Smart Home Got a New CEO and It’s Called the SwitchBot Hub 3

    SwitchBot has a knack for crafting ingenious IoT devices, those little problem-solvers like robotic curtain openers and automated button pressers that add a touch of futuristic convenience. Yet, the true linchpin, the secret sauce that elevates their entire ecosystem, is undoubtedly their Hub. It’s the central nervous system that takes individual smart products and weaves them into a cohesive, intelligent tapestry, turning the abstract concept of a ‘smart home’ into a tangible, daily experience.
    This unification through the Hub is what brings us closer to that almost mythical dream: a home where technology works in concert, where devices understand each other’s capabilities and, critically, anticipate your needs. It’s about creating an environment that doesn’t just react to commands, but proactively adapts, making your living space more intuitive, responsive, and, ultimately, more attuned to you. The new Hub 3 aims to refine this very connection.
    Designer: SwitchBot
    Click Here to Buy Now: $119.99
    The predecessor, the Hub 2, already laid a strong foundation. It brought Matter support into the SwitchBot ecosystem, along with reliable infrared controls, making it a versatile little box. It understood the assignment: bridge the old with the new. The Hub 3 takes that solid base and builds upon it, addressing not just functionality but also the nuanced interactions that make a device truly intuitive and, dare I say, enjoyable to use daily.

    Matter support, the industry’s push for interoperability, remains a cornerstone. The Hub 3 acts as a Matter bridge, capable of bringing up to 30 SwitchBot devices into the Matter fold, allowing them to play nice with platforms like Apple Home. Furthermore, it can send up to 30 distinct commands to other Matter-certified products already integrated into your Apple Home setup, with Home Assistant support on the horizon. This makes it a powerful orchestrator.

    One of the most striking additions is the new rotary dial, something SwitchBot calls its “Dial Master” technology. Giving users an intuitive tactile control that feels very familiar, it makes the Hub 3 even more user-friendly. Imagine adjusting your thermostat not by tapping an arrow repeatedly, but by smoothly turning a dial for that exact ±1°C change. The same applies to volume control or any other granular adjustment. This tactile feedback offers a level of hyper-controlled interaction that screen taps often lack, feeling more connected and satisfying.

    Beyond physical interaction, the Hub 3 gets smarter senses. While the trusty thermo-hygro sensor (cleverly integrated into its cable) makes a return for indoor temperature and humidity, it’s now joined by a built-in light sensor. This seemingly small addition unlocks a new layer of intuitive automation. Your home can now react to ambient brightness, perhaps cueing your SwitchBot Curtain 3 to draw open gently as the sun rises, or dimming lights as natural light fades.

    Aesthetically, SwitchBot made a subtle but impactful shift from the Hub 2’s white casing to a sleek black for the Hub 3. This change makes the integrated display stand out significantly, improving readability at a glance. And that display now does more heavy lifting. It still shows essential indoor temperature and humidity, but can also pull in local outdoor weather data, giving you a quick forecast without reaching for your phone. Pair it with a SwitchBot Meter Pro, and it’ll even show CO2 levels.

    The Hub 2 featured two handy customizable buttons. The Hub 3 doubles down, offering four such buttons. This means more of your favorite automation scenes, like “Movie Night,” “Good Morning,” and “Away Mode,” are just a single press away. This reduces friction, making your smart home react faster to your needs without diving into an app for every little thing. It’s these quality-of-life improvements that often make the biggest difference in daily use.

    Crucially, the Hub 3 retains everything that made its predecessor a strong contender. The infrared control capabilities are still robust, supporting over 100,000 IR codes for your legacy AV gear and air conditioners, now with a signal that’s reportedly 150% stronger than the Hub Mini. Its deep integration with the existing SwitchBot ecosystem means your Bots, Curtain movers, and vacuums will feel right at home, working in concert.

    Of course, you still have your choice of control methods. Beyond the new dial and physical buttons, there’s comprehensive app control for setting up complex automations and remote access. Voice control via the usual assistants like Alexa and Google Assistant is present and accounted for, ensuring hands-free operation whenever you need it. This flexibility means the Hub 3 adapts to your preferences, not the other way around.

    The true power, as always, lies in the DIY automation scenes. Imagine your AC, humidifier, and dehumidifier working together, orchestrated by the Hub 3 to maintain your perfect 23°C and 58% humidity. Or picture an energy-saving scene where the built-in motion sensor, coupled with geofencing, detects an empty house and powers down non-essential appliances. It’s these intelligent, personalized routines that transform a collection of smart devices into a truly smart home.
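    As a rough illustration of the comfort routine described above, here is a hypothetical Python sketch of the decision logic only. The device objects, their methods and the thresholds are invented for the example; a real setup would be built in the SwitchBot app’s scene editor or through a bridged platform, not in code like this.

```python
# Hypothetical comfort-scene logic: hold a room near 23 °C and 58 % humidity.
# The hub reading and device handles are invented stand-ins, not a real SwitchBot API.
from dataclasses import dataclass

@dataclass
class Reading:
    temperature_c: float
    humidity_pct: float

TARGET_TEMP, TEMP_BAND = 23.0, 0.5    # degrees Celsius, with a small dead band
TARGET_HUM, HUM_BAND = 58.0, 3.0      # percent relative humidity

def comfort_scene(reading: Reading, ac, humidifier, dehumidifier) -> None:
    """Pick which appliance to run based on the hub's latest sensor reading."""
    # Temperature: only cool when clearly above the comfort band.
    if reading.temperature_c > TARGET_TEMP + TEMP_BAND:
        ac.turn_on(mode="cool", setpoint=TARGET_TEMP)
    elif reading.temperature_c < TARGET_TEMP - TEMP_BAND:
        ac.turn_off()

    # Humidity: run at most one of the two humidity appliances at a time.
    if reading.humidity_pct > TARGET_HUM + HUM_BAND:
        humidifier.turn_off()
        dehumidifier.turn_on()
    elif reading.humidity_pct < TARGET_HUM - HUM_BAND:
        dehumidifier.turn_off()
        humidifier.turn_on()
    else:
        humidifier.turn_off()
        dehumidifier.turn_off()
```

    The energy-saving scene works the same way, just with a different trigger and action list: the motion sensor plus geofencing reporting an empty house on one side, and a batch of turn-off commands on the other.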

    The SwitchBot Hub 3 feels like the most potent iteration of that “secret sauce” yet. It takes the individual brilliance of SwitchBot’s gadgets and, through enhanced sensory input and more tactile controls, truly deepens that crucial understanding between device, environment, and user. The best part? It plugs right into your smart home’s existing setup, communicating with your slew of IoT devices – even more efficiently if you’ve got a Hub 2 or Hub Mini and you’re looking to upgrade.
    Click Here to Buy Now: $119.99
  • The Carbon Removal Industry Is Already Lagging Behind Where It Needs to Be

    It may be time to suck it up — and we don't just mean the carbon in the atmosphere. No, we're talking about reckoning with the possibility that our attempts at capturing the greenhouse gas to stave off climate disaster are already hopelessly behind schedule, New Scientist reports, if they're not in vain entirely.
    To illustrate, here are some simple numbers. The CO2 removal industry expects to hit a milestone of removing one million metric tons of CO2 this year. And companies across the globe have bought carbon credits to remove 27 million more, according to data from CDR.fyi cited in the reporting (more on these carbon credit schemes in a moment).
    That sounds like a lot, but it really isn't. As New Scientist notes, the Intergovernmental Panel on Climate Change — the leading authority on these issues — concluded in a 2022 report that we need to be removing up to 16 billion tons of carbon, not millions, each year to keep the rise in global temperature from exceeding 1.5 degrees Celsius (2.7 degrees Fahrenheit) of warming by the middle of the century, past which the most drastic effects of climate change are believed to be irreversible.
    "It's not scaling up as fast as it would need to if we are going to reach multiple gigatons by 2050," Robert Höglund at Marginal Carbon, a climate consultancy based in Sweden, told the magazine.
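    To put the article's own figures side by side, here is a back-of-the-envelope comparison. It uses only the numbers quoted above, with the 16-billion-ton figure taken as the upper end of the cited IPCC range.

```python
# Scale gap implied by the figures quoted in this article.
removed_this_year = 1e6    # tonnes of CO2 the industry expects to remove this year
credits_purchased = 27e6   # tonnes covered by carbon credits bought so far
needed_per_year = 16e9     # tonnes per year, upper end of the cited IPCC 2022 figure

print(f"Share of annual need actually removed: {removed_this_year / needed_per_year:.4%}")
print(f"Share covered if every purchased credit delivers: {credits_purchased / needed_per_year:.2%}")
print(f"Scale-up factor still required: about {needed_per_year / removed_this_year:,.0f}x")
```

    Even if every credit sold so far is delivered, that covers well under one percent of the upper-end annual figure, which is the gap Höglund is describing.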
"If we're just waiting for the waves of free philanthropic money from corporations to fill a hole on their sustainability report, we're not really going to solve the problem."More on climate change: Scientists Just Found Who's Causing Global WarmingShare This Article