• WWW.SCIENTIFICAMERICAN.COM
    This Butterfly’s Epic Migration Is Written into Its Chemistry
    April 13, 2025 | 2 min read
    Painted ladies travel the globe every year on massive journeys—including across the Sahara
    By Jesse Greenspan, edited by Sarah Lewin Frasier

Painted ladies are the ultramarathoners of the butterfly world—even more so than monarchs. Scientists have long known about their globetrotting tendencies, but only recently have their exact migratory routes come into focus.

Over several generations the butterflies can fly up to 9,300 miles annually from Scandinavia to equatorial Africa and back. Although not every painted lady travels widely, researchers recently detailed in PNAS Nexus that certain individuals fly up to 2,500 miles from Europe to overwintering grounds in the African Sahel, journeying over the Mediterranean Sea and the Sahara Desert on the way. A few even inadvertently cross the Atlantic Ocean to South America, other researchers found. In North America, meanwhile, painted ladies flutter between Mexico and Canada. In Asia, they've even been spotted cutting through the Himalayas.

"They're not passive riders on the wind," says Arthur M. Shapiro, an emeritus lepidopterist at the University of California, Davis. "They're directing themselves." In ideal breeding conditions, "the air is just completely full of them," he adds.

Weighing less than a gram, painted ladies are too light to carry traditional tracking devices. For the new study, University of Ottawa ecologist Megan S. Reich and her colleagues captured 40 butterflies and discerned their far-off birthplaces from variants, or isotopes, of the chemical elements hydrogen and strontium in their wings—thereby identifying the true long-haulers.

"Sometimes people think of butterflies as really fragile, ephemeral creatures," Reich says. "But they can be quite hardy." Painted ladies are particularly well suited to long-distance travel. No matter the location, innumerable host plants provide them with food. When it gets cold, they shiver to generate body heat. Their triangular forewings propel them at up to 30 miles per hour. Powered by yellow fat reserves, they can fly so high that, until the 2000s, people in the U.K. virtually never observed them leaving the country and therefore thought they might be dying off each winter.

Yet painted ladies are not unique. Hundreds, if not thousands, of insect species most likely migrate, including dragonflies that cross the Indian Ocean, moths that traverse Australia and planthoppers that windsurf through East Asia. "There are some incredible insect migrations," Reich says, most of which are never recorded.
  • WWW.EUROGAMER.NET
    Stress-testing DLSS 4's super resolution transformer technology
    Can Nvidia's superb new upscaler overcome all of our historical issues? Feature by Alex Battaglia, Video Producer, Digital Foundry. Published on April 13, 2025

Debuting alongside the new Blackwell GPU architecture, Nvidia gifted a remarkable new technology to owners of all existing RTX GPUs - the DLSS 4 transformer model. We've already talked about how the new ray reconstruction produces some outstanding results, but what about upscaling, or super resolution? Outlets like Hardware Unboxed have already put out excellent analysis of the new DLSS, so we went for a slightly different approach. Based on our years of testing games and isolating specific DLSS issues within those titles, we decided to go back and re-test known DLSS pain points, swapping out the old convolutional neural network (CNN) model for the brand-new transformer alternative to see how well the new technology copes in known trouble spots.

The concept of being able to improve existing games with brand-new DLSS technology is wonderful - but how do you do it? Well, some new games offer the choice between CNN and transformer models within their menu systems, while others have been patched to do so. Beyond that, Nvidia has added a function to its new app that does the same job. This is handy when it works, but sometimes it does not - as in Assassin's Creed Shadows, for example. Thankfully, third-party apps have been injecting new DLSS versions into older games for some time now, and DLSS Tweaker is a good alternative. With this, you merely grab the latest DLSS super resolution DLL from a site like TechPowerUp, drop it into the targeted game's folder, run DLSS Tweaker's configuration executable and change the DLSS preset to model "K" - the latest transformer model. It's slightly more convoluted but easy enough to get to grips with - but ultimately, Nvidia really needs to address the issue properly within its app.
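The manual swap boils down to a back-up-then-copy file operation. The sketch below walks through it with mock stand-in paths and file contents (a real game keeps nvngx_dlss.dll somewhere in its install folder, and the preset change to "K" still happens through DLSS Tweaker afterwards); it's purely illustrative, not a tool:

```python
# A mock walk-through of the manual DLL swap described above. Real install
# paths vary per game, so tempfile stand-ins keep this sketch runnable.
import pathlib
import shutil
import tempfile

# Stand-in for the targeted game's install folder, with its shipped DLL
game_dir = pathlib.Path(tempfile.mkdtemp()) / "SomeGame"
game_dir.mkdir(parents=True)
target = game_dir / "nvngx_dlss.dll"
target.write_text("old CNN model dll")

# Stand-in for the newer DLL grabbed from a site like TechPowerUp
new_dll = pathlib.Path(tempfile.mkdtemp()) / "nvngx_dlss.dll"
new_dll.write_text("transformer model dll")

# Back up the shipped DLL, then drop the new one in its place
shutil.copy(target, target.with_name(target.name + ".bak"))
shutil.copy(new_dll, target)
print(target.read_text())
```

Keeping the `.bak` copy means the original CNN DLL can be restored if a game misbehaves with the newer model.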
In terms of our test suite - well, this is definitely a situation where watching is more illuminating than reading, so do check out the video above for actual head-to-head comparisons. Before we go deep into the weeds, let's not forget that the transformer model has been proven to be more effective in terms of quality, albeit with a larger hit to resources than the CNN alternative. Those fewer frames are traded for higher image quality, to the point where in many cases you can comfortably lower the DLSS setting while still achieving the same - or better - image quality. They say a picture tells a thousand words - and in the case of image quality comparisons, video says even more.

However, the focus of our testing is to see just how well the transformer model improves on known DLSS drawbacks, and we start with Death Stranding - or more specifically the Director's Cut - one of the earliest DLSS 2 games. We tested at 1440p resolution with the balanced mode, and improved detail with a lower level of softness is obvious, without any introduced aliasing. It's a big improvement. However, there was a weakness: rain around Sam Porter, with DLSS virtually eliminating the rain around his backpack - rain that is there in super-sampled 'ground truth' comparisons. DLSS was taking those singular reflective points where raindrops hit and most likely interpreting them as image noise or flicker, cleaning them away even though they are part of the game's art. The transformer model actually seems to make the situation worse: the raindrops are still not as bright as they should be and, in fact, are almost harder to see than they were before. It requires much more input resolution for the effect to resolve - as seen with DLAA. There's a net boost to quality here but clearly, there are still limits.
In God of War Ragnarok, I originally noted how you can see trails coming from thin objects flush against the sky - in motion, it almost looks like those objects are 'smoking' a little. Here, the artefacting is still present but significantly reduced with the transformer model. I saw a similar issue with Red Dead Redemption, and there's a similar - if not markedly better - improvement with the transformer model there too. In the same game, I also noted dithering problems with hair - an issue that's common to many titles, actually. The good news is that, once again, the transformer model offers a big improvement.

Coming over to Forza Horizon 5: this game was updated with DLSS post-release, and while it was a welcome addition, it was not without issues. The first concerned the telegraph wires above the tracks in the world. With DLSS on and using the CNN model, you would often see a break-up issue similar to God of War or Red Dead Redemption 2, but more intense. The wire in these games appears to be made of real geometry, and it is often sub-pixel for nearly the entire time it is on-screen, especially at 1440p or lower. So DLSS would typically make a mess of it - flickering and break-up, a stark reminder of the base resolution. Flipping on the transformer model, this is still clearly a problem. Either CNN or transformer can look better depending on the content, and neither passes for native resolution rendering. This type of detail is extremely difficult to reconstruct, and only by using MSAA at native resolution is the break-up taken care of - a stark reminder that Forza Horizon 5 was built with that kind of anti-aliasing in mind. By extension, super-sampling - a brute-force form of AA - also works. However, Forza Horizon 5 does illustrate a strength of the transformer model: a reduction in the softening of detail in motion that is inherent to pretty much all forms of temporal anti-aliasing.
This reduction in detail softening is one of the key advantages of switching to the transformer model, though it will be more or less visible depending on your display type - something like an OLED or strobing display will make the transformer model's enhanced clarity during camera movement more obvious than an LCD or another display with larger image persistence issues.

Next up, I was curious to check out how ray-traced reflections resolve in the Nixxes ports of Insomniac games, like Ratchet and Clank. There, when the ray-traced reflections were set to high quality, they would be checkerboarded to save performance, just like on the console version. The problem is that these reflections would resolve with big, chunky pixels on any more mirror-like surface. Curiously, this only happens with DLSS and not with any of the other upscalers. With that in mind, this sounds like a game-specific problem as opposed to something off with DLSS - borne out by the fact that swapping over to the transformer model does not help.

The last game-specific issue I want to look at concerns Dragon's Dogma 2. At launch, I noticed how heavily grass ghosted in movement when the wind picked up. In this scenario, the grass tends to ghost into itself, looking smeary and unfocused, with all semblance of individual blades of grass completely disappearing. Flipping on the transformer model, there is a difference, and I would say it is largely positive. Grass blades in stronger wind movement now keep their form much better, and ghosting is eliminated. That's great, but as a negative side effect, the grass now looks more strongly aliased when it bends rapidly in the wind - which in aggregate looks fizzly. Ghosting is not good, of course, but it did mask aliasing thanks to the blur. Overall though, I would say this is a win for the transformer model.
Altogether, going back and applying the transformer model over the legacy DLSS CNN model yields a lot of positive results, in some cases removing prior issues completely. However, it's not a silver bullet or a complete cure-all for prior DLSS issues. The raindrops in Death Stranding, for example, suggest that sometimes the base resolution is still insufficient, or - in the case of Ratchet and Clank - that DLSS is not being fed correct inputs.

Another thing I noticed when applying the transformer model to older games is that there can be image quality regressions. For example, in Control, I found that the transformer model has issues with the game's ray tracing in combination with Jesse's hair, adding extra noise over area lights. Presumably, the transformer model does not like the diffuse ray tracing here. Another issue I saw in many games was an increase in disocclusion artefacts over the previous model. In Dragon's Dogma 2, for example, the game may look better overall with the new DLSS, but the area trailing the character's head fizzles while running - a problem that the older CNN model, in spite of its many faults, does not share. The last issue is most easily seen in a title like Assassin's Creed Shadows. The transformer model seems to have some issues with volumetric fog in general at times, making objects transitioning into the fog ghost heavily, while the fog itself shows off a stippled, ordered-grid look.

So, the transformer model is excellent and offers profound improvements in many categories, but it also has issues at the moment which prevent it from being appropriate for all games. That said, this is the very first iteration of the new DLSS transformer model, so I would expect improvements.
Nvidia says that the older CNN technology has effectively run its course, with only iterative improvements since its 2020 debut - while the sky's the limit for the transformer technology. Alongside further improvements to ray reconstruction and frame generation, we'll be following super resolution's continued progress with much interest.
  • WWW.ARCHITECTURALDIGEST.COM
    The Righteous Gemstones’ Ungodly McMansions Will Redeem You From Luxury TV Overload
    Patriarch preacher Eli Gemstone (John Goodman) is of relatively new money, having established a Joel Osteen–esque empire with his late wife Aimee-Leigh over the past several decades. His manse leans more traditional, though it comes with its fair share of grand gauche elements, like a gratuitous memorial garden and a megawatt gold front door. In tracing the roots of the Gemstone look, "you could google 'new money decoration' to be as baseline as that, but then also there's a little pantheon of people that we referred to," set decorator Patrick Cassidy says, citing famous megachurch power couples like Jim and Tammy Faye Bakker and Jimmy and Frances Swaggart as some real-life figures in whose image the Gemstones were made.

Jesse Gemstone's house

Eli's three children maintain their own estates on the family's South Carolina compound, and their respective dwellings are where the show's delightfully tacky design truly shines through. While neither of the megachurch founder's two adult failsons seems a fitting choice for Eli's successor, Jesse Gemstone (played by show creator Danny McBride) is the eldest and sees himself stepping into his father's shoes as head of the congregation - an ambition illustrated in his home's design. Jesse's desire to be regarded as a powerful alpha male manifests in features meant to shout his wealth and importance from the pad's rooftop, literally.

"He wants to be respected. He's always just like, 'Wait, why am I not in charge?' Obviously he's ready to take the throne," production designer Richard Wright tells AD, noting that Jesse's home nods to his father's aesthetic but with a more contemporary bent. "Jesse's house is one of my favorite exteriors because it has so many weird features that are unclear why they're there, all sorts of cupolas and points. It's a pretty detailed McMansion design."

The cavernous residence's ornate interior flourishes are meant to signify well-to-do status, but not refined taste.
Photo: Courtesy HBO

Judy Gemstone's house

For the Gemstone brood's only daughter, Judy (Edi Patterson), a more coherently inelegant vision takes shape: "Judy for me was the angriest little rich girl who made the most money in the world: ultrafeminine, a lot of pink and icy blues," Cassidy says. It's tough to pick a favorite feature between the throw pillows emblazoned with Magic Photo glamour shots of herself and husband BJ (Tim Baltz) and the taxidermy arranged in action poses, but the explicit painting of Judy and BJ as Adam and Eve in the Garden of Eden—nude and pictured with the forbidden fruit, no less—makes a bold, if blasphemous, decorative statement from its station above the fireplace.
  • WWW.NINTENDOLIFE.COM
    Round Up: The First Impressions Of 'Drag x Drive' For Switch 2 Are In
    "A showcase for dual-mouse mode."

One of the many games revealed alongside the Switch 2 was Drag x Drive. As Nintendo describes it, this is the "next-generation of 3-on-3 sports" utilising the new Mouse Mode feature on the Joy-Con 2. Multiple outlets have now had the chance to go 'hands on' with this upcoming release - so first up are our thoughts, including our impressions of the new control method.

Read the full article on nintendolife.com
  • TECHCRUNCH.COM
    Jim Zemlin on taking a ‘portfolio approach’ to Linux Foundation projects
    The Linux Foundation has become something of a misnomer through the years. It has extended far beyond its roots as the steward of the Linux kernel, emerging as a sprawling umbrella outfit for a thousand open source projects spanning cloud infrastructure, security, digital wallets, enterprise search, fintech, maps, and more. Last month, the OpenInfra Foundation — best known for OpenStack — became the latest addition to its stable, further cementing the Linux Foundation’s status as a “foundation of foundations.”

The Linux Foundation emerged in 2007 from the amalgamation of two Linux-focused not-for-profits: the Open Source Development Labs (OSDL) and the Free Standards Group (FSG). With founding members such as IBM, Intel, and Oracle, the Foundation’s raison d’être was challenging the “closed” platforms of that time — which basically meant doubling down on Linux in response to Windows’ domination. “Computing is entering a world dominated by two platforms: Linux and Windows,” the Linux Foundation’s executive director, Jim Zemlin, said at the time. “While being managed under one roof has given Windows some consistency, Linux offers freedom of choice, customization and flexibility without forcing customers into vendor lock-in.”

A “portfolio approach”

Zemlin has led the charge at the Linux Foundation for some two decades, overseeing its transition through technological waves such as mobile, cloud, and — more recently — artificial intelligence. Its evolution from Linux-centricity to covering just about every technological nook is reflective of how technology itself doesn’t stand still — it evolves and, more importantly, it intersects. “Technology goes up and down — we’re not using iPods or floppy disks anymore,” Zemlin explained to TechCrunch in an interview during KubeCon in London last week.
“What I realized early on was that if the Linux Foundation were to become an enduring body for collective software development, we needed to be able to bet on many different forms of technology.” This is what Zemlin refers to as a “portfolio approach,” similar to how a company diversifies so it’s not dependent on the success of a single product. Combining multiple critical projects under a single organization enables the Foundation to benefit from vertical-specific expertise in networking or automotive-grade Linux, for example, while tapping broader expertise in copyright, patents, data privacy, cybersecurity, marketing, and event organization. Being able to pool such resources across projects is more important than ever, as businesses contend with a growing array of regulations such as the EU AI Act and Cyber Resilience Act. Rather than each individual project having to fight the good fight alone, they have the support of a corporate-like foundation backed by some of the world’s biggest companies.  “At the Linux Foundation, we have specialists who work in vertical industry efforts, but they’re not lawyers or copyright experts or patent experts. They’re also not experts in running large-scale events, or in developer training,” Zemlin said. “And so that’s why the collective investment is important. We can create technology in an agile way through technical leadership at the project level, but then across all the projects have a set of tools that create long-term sustainability for all of them collectively.” The coming together of the Linux Foundation and OpenInfra Foundation last month underscored this very point. OpenStack, for the uninitiated, is an open source, open standards-based cloud computing platform that emerged from a joint project between Rackspace and NASA in 2010. It transitioned to an eponymous foundation in 2012, before rebranding as the OpenInfra Foundation after outgrowing its initial focus on OpenStack. 
Zemlin had known Jonathan Bryce, OpenInfra Foundation CEO and one of the original OpenStack creators, for years. The two foundations had already collaborated on shared initiatives, such as the Open Infrastructure Blueprint whitepaper. “We realized that together we could deal with some of the challenges that we’re seeing now around regulatory compliance, cybersecurity risk, legal challenges around open source — because it [open source] has become so pervasive,” Zemlin said. For the Linux Foundation, the merger also brought an experienced technical lead into the fold, someone who had worked in industry and built a product used by some of the world’s biggest organizations. “It is very hard to hire people to lead technical collaboration efforts, who have technical knowledge and understanding, who understand how to grow an ecosystem, who know how to run a business, and possess a level of humility that allows them to manage a super broad base of people without inserting their own ego in,” Zemlin said. “That ability to lead through influence — there’s not a lot of people who have that skill.” This portfolio approach extends beyond individual projects and foundations, and into a growing array of stand-alone regional entities. The most recent offshoot was LF India, which launched just a few months ago, but the Linux Foundation introduced a Japanese entity some years ago, while in 2022 it launched a European branch to support a growing regulatory and digital sovereignty agenda across the bloc. The Linux Foundation Europe, which houses a handful of projects such as The Open Wallet Foundation, allows European members to collaborate with one another in isolation, while also gaining reciprocal membership for the broader Linux Foundation global outfit. 
“There are times where, in the name of digital sovereignty, people want to collaborate with other EU organizations, or a government wants to sponsor or endow a particular effort, and you need to have only EU organizations participate in that,” Zemlin said. “This [Linux Foundation Europe] allows us to thread the needle on two things — they can work locally and have digital sovereignty, but they’re not throwing out the global participation that makes open source so good.”

The open source AI factor

While AI is inarguably a major step-change both for the technology realm and society, it has also pushed the concept of “open source” into the mainstream arena in ways that traditional software hasn’t — with controversy in hot pursuit. Meta, for instance, has positioned its Llama brand of AI models as open source, even though they decidedly are not by most estimations. This has also highlighted some of the challenges of creating a definition of open source AI that everyone is happy with, and we’re now seeing AI models with a spectrum of “openness” in terms of access to code, datasets, and commercial restrictions. The Linux Foundation, already home to the LF AI & Data Foundation, which houses some 75 projects, last year published the Model Openness Framework (MOF), designed to bring a more nuanced approach to the definition of open source AI. The Open Source Initiative (OSI), stewards of the “open source definition,” used this framework in its own open source AI definition. “Most models lack the necessary components for full understanding, auditing, and reproducibility, and some model producers use restrictive licenses whilst claiming that their models are ‘open source,’” the MOF paper authors wrote at the time. And so the MOF provides a three-tiered classification system that rates models on their “completeness and openness” with regard to code, data, model parameters, and documentation.
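As a rough illustration of how a tiered openness rating of this kind can work, here is a minimal sketch. The tier logic and component names below are simplified assumptions for illustration only, not the official MOF schema:

```python
# Illustrative sketch of a three-tier "openness" rating in the spirit of
# the Model Openness Framework. The component checklist and tier rules
# are simplified assumptions, not the official MOF classification.

OPEN_COMPONENTS = ("code", "data", "parameters", "documentation")

def rate_openness(released):
    """Return a tier (1 = most open) based on which components are public."""
    released = set(released)
    if released >= set(OPEN_COMPONENTS):
        return 1  # everything needed to audit and reproduce the model
    if {"code", "parameters"} <= released:
        return 2  # runnable and inspectable, but not fully reproducible
    return 3      # open weights at best

print(rate_openness(["code", "data", "parameters", "documentation"]))  # 1
print(rate_openness(["code", "parameters"]))                           # 2
print(rate_openness(["parameters"]))                                   # 3
```

The point of such a scheme is exactly the nuance Zemlin describes: a model that lands in tier 2 or 3 is not "open source" by the strictest definition, but the rating still tells you precisely what you are getting.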
Model Openness Framework classifications (Image: Linux Foundation)

It’s basically a handy way to establish how “open” a model really is by assessing which components are public, and under what licenses. Just because a model isn’t strictly “open source” by one definition doesn’t mean that it isn’t open enough to help develop safety tools that reduce hallucinations, for example — and Zemlin says it’s important to address these distinctions. “I talk to a lot of people in the AI community, and it’s a much broader set of technology practitioners [compared to traditional software engineering],” Zemlin said. “What they tell me is that they understand the importance of open source meaning ‘something’ and the importance of open source as a definition. Where they get frustrated is being a little too pedantic at every layer. What they want is predictability and transparency and understanding of what they’re actually getting and using.”

Chinese AI darling DeepSeek has also played a big part in the open source AI conversation, emerging with performant, efficient open source models that upended how the incumbent proprietary players such as OpenAI plan to release their own models in the future. But all this, according to Zemlin, is just another “moment” for open source. “I think it’s good that people recognize just how valuable open source is in developing any modern technology,” he said. “But open source has these moments — Linux was a moment for open source, where the open source community could produce a better operating system for cloud computing and enterprise computing and telecommunications than the biggest proprietary software company in the world.
AI is having that moment right now, and DeepSeek is a big part of that.”

VC in reverse

A quick peek across the Linux Foundation’s array of projects reveals two broad categories: those it has acquired, as with the OpenInfra Foundation, and those it has created from within, as it has done with the likes of the Open Source Security Foundation (OpenSSF). While acquiring an existing project or foundation might be easier, starting a new project from scratch is arguably more important, as it’s striving to fulfill a need that is at least partially unmet. And this is where Zemlin says there is an “art and science” to succeeding.

“The science is that you have to create value for the developers in these communities that are creating the artifact, the open source code that everybody wants — that’s where all the value comes from,” Zemlin said. “The art is trying to figure out where there’s a new opportunity for open source to have a big impact on an industry.” This is why Zemlin refers to what the Linux Foundation is doing as something akin to a “reverse venture capitalist” approach. A VC looks for product-market fit, and entrepreneurs they want to work with — all in the name of making money. “Instead, we look for ‘project-market’ fit — is this technology going to have a big impact on a specific industry? Can we bring the right team of developers and leaders together to make it happen? Is that market big enough? Is the technology impactful?” Zemlin said. “But instead of making a ton of money like a VC, we give it all away.”

But however its vast array of projects came to fruition, there’s no ignoring the elephant in the room: The Linux Foundation is no longer all about Linux, and it hasn’t been for a long time. So should we ever expect a rebrand into something a little more prosaic, but encompassing — like the Open Technology Foundation? Don’t hold your breath.
“When I wear Linux Foundation swag into a coffee shop, somebody will often say, ‘I love Linux’ or ‘I used Linux in college,’” Zemlin said. “It’s a powerful household brand, and it’s pretty hard to move away from that. Linux itself is such a positive idea, it’s so emblematic of truly impactful and successful ‘open source.’”
  • ARCHEYES.COM
    Ca’ delle Alzaie by Stefano Boeri Architetti: A Green Residential Complex in Treviso
    Ca’ delle Alzaie | © Andrea Sottana

In Treviso, Italy, Ca’ delle Alzaie by Stefano Boeri Architetti proposes a nuanced model for urban residential living that interrogates the boundaries between architecture, landscape, and public infrastructure. Built between 2016 and 2021 on a former industrial plot along the river Sile, the project consists of three mid-rise residential buildings immersed in vegetation. The scheme addresses a complex array of urban and ecological concerns through an architectural language rooted in both environmental responsibility and spatial fluidity.

Ca’ delle Alzaie Residential Complex Technical Information
Architects: Stefano Boeri Architetti
Location: Treviso, Italy
Gross Floor Area: 9,000 m² | 96,875 sq ft
Project Year: 2016–2021
Photographs: © Andrea Sottana

“In Treviso we proposed an unprecedented variation of our concept of Vertical Forest: not a tower, but three buildings surrounded by vegetation, with very different views of the surrounding landscape. In the next few years Treviso, with Milan, Utrecht, Brussels, Eindhoven, Munich, Cairo, Nanjing and many other cities in the world, will become the site of an innovative experiment to demonstrate that the architecture of the future will be able to host the cohabitation of more living species, becoming a fulcrum of biodiversity – as well as environmental sustainability.” – Stefano Boeri Architetti

Ca’ delle Alzaie Residential Complex Photographs: aerial and night views, facades, balcony, and stairs | © Andrea Sottana

Contextual Framework and Urban Reconnection

The site of Ca’ delle Alzaie occupies a residual industrial zone measuring approximately 11,000 square meters, positioned just outside Treviso’s historic core.
Rather than simply introducing a residential development, the architects responded to a broader urban opportunity—the reweaving of a fractured edge condition between the city and its riverine landscape. At the heart of this reconnection lies the Restera pedestrian and cycling path, a linear public space parallel to the river Sile, which is reactivated and expanded through the project’s southern boundary. Pedestrian routes and embankments were not treated as peripheral or secondary but were designed as integral components of the project’s urban structure. The northern and southern slopes of the site are shaped into green infrastructure: flower meadows, tree-lined paths, and tiered gardens work collectively to soften the transition from public space to private domain. The southern retaining wall steps back in multiple locations to create moments of civic generosity—welcoming seating, bike stalls, and outdoor fitness elements that return spatial value to the community. This duality between public activation and private living is where the project situates its architectural relevance—not as an isolated object but as a porous system that negotiates the city’s thresholds.

Ca’ delle Alzaie Spatial Organization

Formally, the architecture avoids the temptation of monolithic repetition. The three residential buildings, while similar in height—each reaching seven storeys (27 meters)—are deliberately offset and rotated. This gesture disrupts the expectation of a linear riverfront wall. Instead, it produces a staggered composition that opens up framed views of the river while preserving visual permeability across the site. Each residential unit benefits from these spatial dynamics. Internally, the plan distinguishes between south-facing living areas and north-facing sleeping zones, ensuring natural light and river views to the spaces most frequently occupied during daylight hours.
This sectional logic is mirrored in the facades, where orientation and program define the elevation strategy. With approximately three apartments per floor and a total of 60 units, the density remains moderate. Yet the spatial configuration offers a richness often absent in typical mid-rise housing. Large apertures, deep terraces, and the calibrated orientation of each building volume yield a diversity of micro-environments across units, resisting a one-size-fits-all typology. Beneath the buildings, an underground garage spans the full footprint of the complex. Rather than relegating this element to an infrastructural afterthought, the garage roof is transformed into a continuous green carpet. This elevated landscape supports communal gardens, vegetable plots, and the private gardens of ground-floor units. Here, program, structure, and landscape converge into a hybridized condition.

Material Ecology and Vegetation as Architecture

The architectural ambition of Ca’ delle Alzaie finds its most distinctive expression in its treatment of vegetation. Rather than serving as decorative greenwashing, plant life becomes both spatial material and ecological infrastructure. Over 50% of the total project surface—approximately 2 hectares—is dedicated to greenery. This includes 400 low-trunk trees, 170 full-scale trees, and a further 120 integrated directly onto building facades. This vertical forest strategy, developed in collaboration with agronomist Laura Gatti, draws from the ecological logic of the surrounding Sile Park. The palette of native plant species anchors the project within its environmental context and positions the building as an active contributor to local biodiversity. The architecture thus performs as a micro-ecosystem—a constructed habitat embedded within a larger urban-natural continuum. The facades, in particular, operate as living surfaces.
The south-facing riverfront elevation is articulated through generous terraces, each three meters deep, designed to accommodate tree growth over time. Vertical planters punctuate the rhythmic horizontality, creating a facade of alternating solid and porous vegetated bands. The north facade is more restrained, with projecting linear containers and vertical elements that house shrubs and smaller trees. The differentiation in planting strategy between the two orientations reflects both solar exposure and interior programmatic zoning. In this way, vegetation is not merely applied to the building; it is embedded in its architectural DNA, shaping its massing, regulating microclimates, and redefining the experience of inhabitation.

Environmental Performance and Material Strategies

In parallel with its biophilic agenda, Ca' delle Alzaie engages with a range of sustainable design principles. Passive strategies such as solar orientation, natural cross-ventilation, and the strategic placement of vegetation collectively enhance thermal performance and acoustic insulation. The use of trees and green surfaces helps mitigate urban heat island effects and airborne particulate matter while offering residents privacy and psychological comfort. Material selection further underscores the project's ecological ethos. Anti-pollution paints and finishes were employed alongside renewable energy systems and durable materials selected for longevity and low environmental impact. Notably, these choices are not showcased as technological spectacle but are embedded quietly within the architecture's systemic performance. The embankment, engineered as a flower meadow, functions as both structural infrastructure and green topography. It simultaneously conceals the subterranean garage, provides habitat, mediates between public and private realms, and creates continuity with the river's edge. Thus, the landscape is not simply adjacent to the building; it is an active architectural medium.
About Stefano Boeri Architetti

Founded in 1993 by Italian architect Stefano Boeri, Stefano Boeri Architetti is an international architectural practice based in Milan, with additional offices in Shanghai and Tirana. The firm specializes in sustainable architecture, urban planning, and strategic urban development, focusing on integrating living nature into architectural design. Its notable projects include the Bosco Verticale (Vertical Forest) in Milan, a pioneering example of residential towers incorporating extensive vegetation to promote urban biodiversity. The firm's work emphasizes environmental sustainability and has received numerous international accolades for its innovative approach to blending architecture and ecology.

Credits and Additional Notes
Client: Cazzaro Costruzioni S.r.l.
Landscape and Vegetation Consultant: Laura Gatti (Agronomist)
Site Area: 11,000 sqm (approx.)
Plot Area: 10,750 sqm
  • BUILDINGSOFNEWENGLAND.COM
    William Moore House // 1803-2019
    Formerly located at the intersection of two historic turnpikes in Canterbury, Connecticut, the William Moore House was a historic and architecturally significant residence that stood for more than 200 years until its demolition in 2019. The large, Federal-style house was built in 1803 for William Moore, a merchant who operated a store and also served as the town postmaster. The upper floor of the house at one time accommodated a ballroom where the local Masonic organization met. Later in the 19th century, the house became the home of prominent merchant, banker, and politician Marvin H. Sanger, Connecticut Secretary of State from 1873 to 1876. In 1921, it was the home of Lillian Frink when she became one of the first women ever elected to the Connecticut General Assembly, along with four other women elected that same year. The house, with its projecting pedimented center bay, elaborate corner pilasters on pedestals, and elegant Palladian window, represented the height of country Federal-period architecture until it was gutted by a fire in 2018; the town razed the building the following year. The lot remains vacant as of 2025.
  • WWW.FOXNEWS.COM
    5 mobile privacy terms you need to know to protect yourself
    Your smartphone might be your closest companion, tracking your steps, saving your passwords and remembering your favorite takeout. But how much do you know about how it protects (or exposes) your privacy? We're breaking down five key mobile privacy terms that could make all the difference when it comes to keeping your personal info safe. Whether you're team iPhone or Android, understanding these concepts can help you take control of your digital footprint, right from the palm of your hand.

1. Location tracking
Your phone's GPS isn't just for directions. Every time you check the weather, tag your location on Instagram or ask Google Maps for the quickest route, you're sharing your whereabouts. That's thanks to location tracking, a feature built into most apps and devices that uses GPS, Wi-Fi, Bluetooth or cell towers to pinpoint your location. Here's the catch: many apps track you even when you're not using them. Some use this data to serve local content or ads, while others collect and sell it to third parties.
How to protect yourself:
- Check which apps have location access in your settings
- Switch from "Always" to "While Using the App"
- Consider turning off location services entirely when you don't need them
Knowing when and how you're being tracked is the first step to stopping it.

2. App permissions
What your apps know about you (and maybe shouldn't). Before you can use that new photo editor or budgeting tool, it probably asked for a few things: access to your camera, contacts, microphone, maybe even your calendar. These are called app permissions, and they determine what parts of your phone an app can interact with. While some requests are necessary (a video app needs camera access), others can be excessive or even suspicious. For example, why does a flashlight app need your location or call logs?
Tips for staying in control:
- Review permissions when installing apps
- Regularly audit your app settings
- Delete apps you no longer use
Your data shouldn't be the price of convenience. Set boundaries.

3. Two-factor authentication (2FA)
A second lock on your digital front door. Passwords aren't perfect. That's where two-factor authentication (2FA) comes in. It adds an extra layer of protection by requiring two forms of identification before granting access to your account, typically something you know (a password) and something you have (a text code or authentication app). Many major apps and platforms now support 2FA, and enabling it can help block hackers even if they steal your password.
Most common types of 2FA:
- Text or email codes
- Authenticator apps like Google Authenticator or Authy
- Biometric verification (fingerprint or face ID)
Activate 2FA where you can. It's one of the simplest ways to level up your mobile security.

4. Mobile ad ID
The invisible label that tracks your habits. Behind the scenes, your phone is assigned a unique string of numbers and letters called a mobile advertising identifier (mobile ad ID). It helps advertisers track your behavior across apps and websites to build a profile of your interests. While it doesn't include your name, it can be linked to your device and used to serve targeted ads. Think of it as a digital name tag for marketing purposes.
Want to opt out?
- iPhone: Go to Settings > Privacy & Security > Tracking
- Android: Go to Settings > Privacy > Ads, and reset or delete your ad ID
You're not obligated to let your phone advertise you.

5. VPN (virtual private network)
Your personal privacy tunnel. When you connect to public Wi-Fi at a coffee shop or airport, your data can be exposed to hackers and snoops. That's where a VPN (virtual private network) comes in. It encrypts your internet traffic and routes it through a secure server, hiding your IP address and protecting your activity. Because you connect through a server that may be in another part of the world, a VPN also masks your real location, which is useful both for privacy and for accessing content that might be restricted in certain areas. Think of it as a private tunnel for your internet usage, shielding your data from prying eyes.
What VPNs are great for:
- Protecting your connection on public Wi-Fi
- Accessing region-locked content
- Hiding your online activity from advertisers or your internet provider
Just make sure to choose a trustworthy VPN. Some free VPNs may log your data or slow your phone down.
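The authenticator apps mentioned under 2FA typically implement the TOTP standard (RFC 6238): the app and the server share a secret key, and each derives the same short-lived code from that key plus the current 30-second time window, which is why the codes work with no network connection. As a rough illustration (not any real app's implementation; the Base32 secret below is the published RFC test key, not a real account's), here is a minimal sketch using only Python's standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    # Authenticator apps store the shared secret Base32-encoded;
    # re-pad it to a multiple of 8 characters before decoding.
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    # The moving factor is the number of `step`-second intervals since the epoch.
    counter = int(time.time() if at is None else at) // step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): the last nibble picks 4 bytes of the digest.
    offset = digest[-1] & 0x0F
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % (10 ** digits)).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890"
# (Base32 below), time = 59 seconds, 8 digits.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59, digits=8))  # -> 94287082
```

Real servers also accept codes from the adjacent time windows to tolerate clock drift. The practical upshot for users: a stolen code expires within seconds, which is exactly the protection 2FA adds on top of a password.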
Kurt's key takeaways
Your mobile device is powerful, but so are the privacy risks associated with it. By understanding these five mobile privacy terms, you can take simple yet impactful steps to protect your digital life. From turning off unnecessary app permissions to enabling two-factor authentication, these small tweaks can help you stay in control and keep your information safe. For more tech tips and security alerts, subscribe to the free CyberGuy Report Newsletter at Cyberguy.com/Newsletter. Copyright 2025 CyberGuy.com. All rights reserved. Kurt "CyberGuy" Knutsson is an award-winning tech journalist who has a deep love of technology, gear and gadgets that make life better, with contributions for Fox News and FOX Business beginning mornings on "FOX & Friends."
  • WWW.ZDNET.COM
    This is the most customizable smart home accessory I didn't know I needed
    Govee's new Neon Rope Light 2 makes it easy to decorate your home and has quickly become a staple in my household.
  • WWW.FORBES.COM
    Renewing The Mandate To Safeguard The Energy Grid
Protection of the energy grid, or "the Grid," will be an issue of significant interest during the second Trump Administration. President Trump has said he intends to prioritize national security concerns while investing in modernizing and enhancing America's energy infrastructure, and during his first term in office he recognized the significance of safeguarding vital infrastructure against electromagnetic pulses (EMPs) and other dangers. Three main regions constitute the U.S. energy grid: the Texas Interconnected System; the Western Interconnection, which spans from the Pacific Coast to the Rocky Mountain states; and the Eastern Interconnection, which serves states east of the mountains. The Grid, an essential piece of infrastructure, consists of a network of over 7,000 power plants connected by hundreds of thousands of miles of high-voltage transmission lines. According to estimates, there are thousands of power-generating units and 70,000 transformer power substations. Even with the recent addition of automation and emerging technologies, the grid still relies heavily on older equipment: 60% of circuit breakers are older than 30 years, and 70% of transmission lines are at least 30 years old, meaning they are nearing the end of their useful lifespans. As a result of the aging infrastructure and rising power consumption, the Grid is now more vulnerable to cascading failures, in which the failure of one component triggers a chain reaction of failures. The growth and expansion of data centers is in itself straining the Grid. Research by the Lawrence Berkeley National Laboratory for the Department of Energy shows that data centers' power consumption has tripled in the last ten years and may triple again by 2028.
John Moura, Director of Reliability Assessment and System Analysis for the North American Electric Reliability Corporation (NERC), told Reuters that the grid is not built to handle the loss of 1,500-megawatt data centers as they get larger and use more electricity. "Unless we add additional grid resources, it will eventually grow too big to handle." The fundamental truth is that the infrastructure of the U.S. power grid is too outdated to handle the new era of data and growing computational needs. It is also highly susceptible to cyberattacks, EMPs, natural disasters, and physical threats, all of which could have disastrous results. The Grid is essential for medical care, food and agriculture, water, data centers, telecommunications, stock exchanges, satellite ground systems, and other important infrastructure.

RISKS TO THE GRID

The power grid faces a wide range of risks. EMPs from geomagnetic solar flares, short-range missiles fired by terrorists or nation-states, cyberattacks, and physical attacks on utilities or power facilities are all part of the risk landscape. Solar flares, which originate from storms on the Sun, constitute a persistent menace. Earth is believed to have experienced more than 100 solar storms in the last 150 years. Strong flares release electromagnetic radiation and charged particles aimed toward Earth and the other planets of the solar system. The size of the flare, the scale of the coronal mass ejection, and the speed at which it travels from the Sun to Earth all affect how severe a solar storm is. The electrical grid can sustain serious damage from a type of flare known as an X-class flare. The risk is impossible to overlook. An EMP attack could also be deliberate: a terrorist organization or rogue state could detonate a nuclear bomb far above the atmosphere, destroying electronics and the electrical grid.
Former CIA Director James Woolsey testified before a House committee that if the U.S. suffered an EMP attack, "two-thirds of the US population would likely perish from starvation, disease, and societal breakdown." "Natural EMP from a geomagnetic superstorm, like the 1859 Carrington Event or 1921 Railroad Storm, and nuclear EMP attack from terrorists or rogue states, as practiced by North Korea during the nuclear crisis of 2013, are both existential threats that could kill 9 of 10 Americans through starvation, disease, and societal collapse," said the late Dr. Peter Pry, executive director of the Task Force on National and Homeland Security and a member of the Congressional EMP Commission. The Grid is only partially protected by the Department of Homeland Security (DHS), which acknowledges that hackers have targeted the control systems of U.S. public utilities. Many of the Supervisory Control and Data Acquisition (SCADA) networks used by power companies to manage their industrial systems need to be updated and hardened to withstand growing cybersecurity dangers. The Russian cyberattack against Ukraine's power grid, which left 700,000 people without power, served as a reminder of the vulnerabilities in the electric grid. Countries need to step up their efforts to prevent cyberattacks on nuclear and other energy systems, according to the World Energy Council, which observes that the frequency, complexity, and costs of data breaches are rising. The whole U.S. power system and other vital infrastructure could be taken down by a cyberattack launched by any of several nations, according to retired Admiral Mike Rogers, a former director of the National Security Agency (NSA) and commander of U.S. Cyber Command. A successful ransomware attack on the Colonial Pipeline in 2021 offered insight into that vulnerability and the numerous attack points.
In addition to disrupting the oil supply of the U.S. East Coast, the attackers showed that there was no cybersecurity structure in place for incident response and preparation. The majority of the vital infrastructure components of the U.S. energy grid now operate in an internet-accessible digital environment. The trends of hardware and software integration, along with the expansion of networked sensors, are expanding hackers' attack surface. Both industry and government have identified the vulnerabilities. The U.S. energy grid is susceptible to cyberattacks, according to the Government Accountability Office (GAO). The grid's distribution systems, which transport power from transmission systems to customers, have become increasingly vulnerable, the GAO finds, partly because of the growing capabilities of technologies that enable remote access and links to business networks. Threat actors might be able to access those systems and interfere with operations. The reality is that artificial intelligence tools are enabling increasingly sophisticated cyberattacks, and criminal groups, state actors, and other entities are targeting energy-critical infrastructure. The use of operational technology and the industrial internet of things has increased the attack surface. To combat cyber risks, energy infrastructure operators should adopt "security by design": building agile systems with operational cyber-fusion to monitor, identify, and react to new threats. Ultimately, we need to enhance the cybersecurity of the U.S. energy grid. Another worry is the physical threat posed to the Grid by malevolent acts, particularly by terrorists. A decade ago, a power facility in Nogales, Arizona, was attacked with a bomb and an incendiary device placed atop a 50,000-gallon fuel tank. Fortunately, the attempt was unsuccessful.
Recently, other terrorist acts by extremist groups have targeted utilities with gunfire and bomb threats.

Strategies To Help Protect the Grid

There are various ways to lessen threats to the energy infrastructure from physical, existential, and cyber sources. These include spreading out energy sources and using smaller, independent networks; systems to stabilize voltage and devices to manage energy flow; better security rules, training, and emergency plans; protecting the grid from power surges and voltage issues; and creating ways to share information about weaknesses and threats. Systematic resilience planning is also essential to restore power in various emergencies. For instance, we should upgrade and replace outdated infrastructure with technology like automation systems, smart meters, and sensors to improve grid efficiency and dependability. Additionally, we should set up smaller-scale independent microgrids, which can function on their own or in tandem with the main grid, to supply localized electricity during emergencies or outages. The threat of an EMP is existential and will require more planning and resilience. President Trump signed Executive Order (E.O.) 13865, "Coordinating National Resilience to Electromagnetic Pulses," on March 26, 2019, making it a national priority to set resilience and security standards for vital infrastructure in the United States. E.O. 13865 states, "An electromagnetic pulse (EMP) has the potential to disrupt, degrade, and damage technology and critical infrastructure systems. Human-made or naturally occurring EMPs can affect large geographic areas, disrupting elements critical to the nation's security and economic prosperity, and could adversely affect global commerce and stability. The federal government must foster sustainable, efficient, and cost-effective approaches to improving the nation's resilience to the effects of EMPs." In the article "Cost Analysis: Protecting The Grid and Electronics from EMP," the authors proposed that a National Resilience Task Force, supported by the U.S. Department of Defense (Northern Command and the National Guard), the Department of Homeland Security, and the Department of Energy, could undertake a mitigation strategy to protect U.S. critical infrastructure from the effects of an EMP.
This effort could include the following actions:
- Protect electronic equipment by enclosing sensitive electronics in grounded conductive housings and by adding EMP surge arresters to generators, transformers, motors, and critical electronic equipment.
- Install neutral ground blockers on transformers in substations to prevent ground-induced currents from entering the transformers.
- Install EMP-protected microgrids with on-site power generation at critical infrastructure facilities.
- Develop EMP-resistant electronics, such as optical computing and carbon nanotube memory, that are less susceptible to an EMP attack.
- Include EMP-attack planning scenarios in emergency preparedness training; by planning for the consequences of an EMP attack, communities could develop measures to maintain essential services.
Source: Cost Analysis: Protecting the Grid and Electronics from an EMP - Domestic Preparedness
To restore power in various emergencies, comprehensive resilience planning is essential. Current technologies can protect the Grid; what is required is leadership and investment to reduce vulnerabilities. The leadership for resolving the electric grid problem will have to come from the incoming administration. Since the majority of the country's vital infrastructure, such as the banking, healthcare, transportation, and communications systems, is owned by the private sector and reliant on the Grid, co-investment, solid public-private partnerships, and cooperation in research, development, and prototyping will all be necessary to find answers. Such collaboration must involve a faster effort to finance and develop innovative technologies that can shield utilities from man-made or natural electromagnetic surges, further secure SCADA network hardware and software from cyberattacks, and improve the Grid's physical security. The investment in preserving civilization is worth it, even though cost estimates vary.
As more people become conscious of the precarious threat landscape and the consequences of inaction, the need to safeguard the Grid has grown. This heightened awareness implies a need to act quickly, and it gives the incoming administration a mandate to do so.