• Why an Xbox Video Game Franchise Is a Partner in a Major Exhibit at The Louvre Museum

    While it’s now accepted by many that video games are an art form, it still might be hard to believe that one is featured in an exhibit at the same museum that’s home to Leonardo da Vinci’s “Mona Lisa”: The Louvre in Paris.

    But this week, Xbox and World’s Edge Studio announced a partnership with what is arguably the most prestigious museum in the world for its new exhibition, “Mamluks 1250–1517.”

    For those unaware of how the games connect to this Egyptian-Syrian empire: Mamluk cavalry are among the many units featured in Xbox and World’s Edge Studio’s “Age of Empires” video game franchise. The cavalry is a fan-favorite choice in the series, which centers on advancing through the ages and competing against rival empires, particularly in “Age of Empires II: Definitive Edition.”

    Presented at the Louvre until July 28, the exhibit “Mamluks 1250–1517” recounts “the glorious and unique history of this Egyptian Syrian empire, which represents a golden age for the Near East during the Islamic era,” per its official description. “Bringing together 260 pieces from international collections, the exhibition explores the richness of this singular and lesser-known society through a spectacular and immersive scenography.”

    This marks the first time a video game franchise has collaborated with the Louvre Museum, with installations and events that occur both in person at the museum and online through the “Age of Empires” game:

    Official “Louvre Museum” scenario in Age of Empires II: Definitive Edition
    Players can embody General Baybars and Sultan Qutuz at the heart of the Battle of Ain Jalut, which pitted the Mamluk Sultanate against the Mongol Empire. This scenario, specially created for the occasion, is already available in Age of Empires II: Definitive Edition.

    Exclusive Gaming Night on Twitch Live from the Louvre
    On Thursday, June 12, at 8 PM, streamer and journalist Samuel Etienne will replay the official “Louvre Museum” scenario live from the “Mamluks 1250–1517” exhibition at the Louvre, reliving the famous Battle of Ain Jalut in Age of Empires II: Definitive Edition, in the presence of Louvre teams and one of the studio’s developers. This is an opportunity to learn more about the history of the Mamluks and their representation across the various entries in the saga.

    Cross-Interview: The Louvre x Age of Empires
    To discover more, an interview featuring Adam Isgreen, creative director at World’s Edge, the studio behind the franchise, and Souraya Noujaïm and Carine Juvin, curators of the exhibition, is available on the YouTube channels of the Louvre and Age of Empires.

    Mediation and Gaming Sessions at the Museum
    Museum visitors at the Louvre are invited to test the Battle of Ain Jalut scenario, specially designed for the Mamluk exhibition, in the presence of a Louvre mediator and an Xbox representative during an exceptional series of workshops. The sessions will take place on Fridays, June 20 and 27, and July 4 and 11. All information and registration are available at www.louvre.fr.

    “World’s Edge is honoured to collaborate with Le Louvre,” head of World’s Edge studio Michael Mann said. “The ‘Age of Empires’ franchise has been bringing history to life for more than 65 million players around the world for almost 30 years. We’ve always believed in the great potential for our games to spark an interest in history and culture. We often hear of teachers using ‘Age of Empires’ to teach history to their students and stories from our players about how ‘Age of Empires’ has driven them to learn more, or even to pursue history academically or as a career. This opportunity to bring the amazing stories of the Mamluks to new audiences through the Louvre’s exhibition is one we’re excited to be a part of. We hope that through the excellent work of the Louvre’s team, the legacy of the Mamluks can be shared around the world, and that people enjoy their stories as they come to life through ‘Age of Empires.'”

    “We are delighted to welcome ‘Age of Empires’ as part of the exhibition Mamluks 1250–1517, through a unique partnership that blends the pleasures of gaming with learning and discovery,” Souraya Noujaïm, director of the Department of Islamic Arts and chief curator of the exhibition at the Louvre, said. “It is a way for the museum to engage with diverse audiences and offer a new narrative, one that resonates with contemporary sensitivities, allowing for a deeper understanding of artworks and a greater openness to world history. Beyond the game, the museum experience becomes an opportunity to move from the virtual to the real and uncover the true history of the Mamluks and their unique contribution to universal heritage.”

    See video and images below from the “Age of Empires” in-game event and the in-person exhibit at the Louvre.
    VARIETY.COM
  • Exclusive: Herzog and de Meuron working on all-new rival Liverpool Street plans

    The Swiss architects first submitted controversial plans to overhaul the Grade II-listed terminus in the City of London in May 2023 on behalf of Sellar and Network Rail. Now the AJ understands the practice is drawing up a rival scheme, separate to its original proposal, which is effectively a third but as yet unseen design for the station and a development above it.
    ACME, on behalf of Network Rail, submitted its own proposals in April after Network Rail appointed the Shoreditch practice to draw up plans last year.
    This put the brakes on Herzog & de Meuron’s 2023 scheme, which had been updated with amendments in 2024 in response to criticism over heritage harm – though the application was never withdrawn and remains live on the City of London’s planning portal.

    According to sources, Herzog & de Meuron – still with Sellar’s backing – is actively working on a fresh scheme with ‘much less demolition’, which could rival ACME’s plans as well as its own 2023 scheme.
    SAVE Britain’s Heritage, which the AJ understands is among several bodies to have been consulted on the ‘third’ scheme for Liverpool Street station, told the AJ: ‘There are now potentially three live schemes for the same site.
    ‘However, what is interesting about Sellar’s latest proposal is that it involves much less demolition of the station. Network Rail and their current favoured architect, ACME, would do well to take note.’
    Historic England, which strongly opposed the original Herzog & de Meuron scheme, is understood to have been shown the Swiss architects’ latest proposals in March. The heritage body’s official response to the ACME scheme has not yet been made public.
    A spokesperson for the government’s heritage watchdog told the AJ of the emerging third proposal: ‘We have seen a revised scheme designed by Herzog & de Meuron, but it has not been submitted as a formal proposal and we have not provided advice on it.’

    The C20 Society added that it ‘can confirm that it has been in pre-app consultation with both ACME and Herzog & de Meuron regarding the various schemes in development for Liverpool Street Station’.
    The body, which campaigns to protect 20th century buildings, added: ‘We will provide a full statement once all plans have been scrutinised.’
    In November, the Tate Modern architects appeared to be off the job following the appointment by Network Rail of Shoreditch-based ACME, which came up with an alternative scheme featuring slightly smaller office towers as part of the planned above-station development.
    The ACME plan for Liverpool Street includes an above-station office development that would rise to 18 storeys, with balcony spaces on the 10th to 17th storeys and outdoor garden terraces from the 14th to 17th storeys. These proposals are marginally shorter than Herzog & de Meuron’s original 15- and 21-storey designs.
    However, despite these changes, ACME and Network Rail’s scheme has recently drawn criticism from the Victorian Society, which told the AJ last month that it would object to the ACME scheme, claiming the above-station development ‘would be hugely damaging to Liverpool Street Station and the wider historic environment of the City of London’.
    In September last year, Sellar confirmed that Herzog & de Meuron was working on an amended proposal, as the AJ revealed at the time. However, it is unclear if the latest, third option is related to that work.
    While both applications introduce more escalators down to platform level and accessibility improvements, the Herzog & de Meuron scheme proved controversial because of planned changes to the inside of the Grade II*-listed former Great Eastern Hotel building above the concourse, which would have seen the hotel relocate to new-build elements.
    The Swiss architect’s original proposals would have also removed much of the 1992 additions to the concourse by British Rail’s last chief architect, Nick Derbyshire – which had not been included in the original 1975 listing for Liverpool Street station. Historic England listed that part of the station in late 2022 after a first consultation on the Herzog & de Meuron plans.
    Network Rail told the AJ that it remains ‘fully committed’ to the ACME plan, which was validated only last month.
    Herzog & de Meuron referred the AJ to Sellar for comment.
    Sellar declined to comment.
    ACME has been approached for comment.
    WWW.ARCHITECTSJOURNAL.CO.UK
  • As AI faces court challenges from Disney and Universal, legal battles are shaping the industry's future | Opinion

    Silicon advances and design innovations do still push us forward – but the future landscape of the industry is also being sculpted in courtrooms and parliaments

    Image credit: Disney / Epic Games

    Opinion

    by Rob Fahey
    Contributing Editor

    Published on June 13, 2025

    In some regards, the past couple of weeks have felt rather reassuring.
    We've just seen a hugely successful launch for a new Nintendo console, replete with long queues for midnight sales events. Over the past few days, the various summer events and showcases that have sprouted amongst the scattered bones of E3 have generated waves of interest and hype for a host of new games.
    It all feels like old times. It's enough to make you imagine that while change is the only constant, at least we're facing change that's fairly well understood: change in the form of faster, cheaper silicon, or bigger, more ambitious games.
    If only the winds that blow through this industry all came from such well-defined points on the compass. Nestled in amongst the week's headlines, though, was something that's likely to have profound but much harder to understand impacts on this industry and many others over the coming years – a lawsuit being brought by Disney and NBC Universal against Midjourney, operators of the eponymous generative AI image creation tool.
    In some regards, the lawsuit looks fairly straightforward; the arguments made and considered in reaching its outcome, though, may have a profound impact on both the ability of creatives and media companies to protect their IP rights from a very new kind of threat, and the ways in which a promising but highly controversial and risky new set of development and creative tools can be used commercially.
    I say the lawsuit looks straightforward from some angles, but honestly overall it looks fairly open and shut – the media giants accuse Midjourney of replicating their copyrighted characters and material, and of essentially building a machine for churning out limitless copyright violations.
    The evidence submitted includes screenshot after screenshot of Midjourney generating pages of images of famous copyrighted and trademarked characters ranging from Yoda to Homer Simpson, so "no we didn't" isn't going to be much of a defence strategy here.
    A more likely tack on Midjourney's side will be the argument that they are not responsible for what their customers create with the tool – you don't sue the manufacturers of oil paints or canvases when artists use them to paint something copyright-infringing, nor does Microsoft get sued when someone writes something libellous in Word, and Midjourney may try to argue that their software belongs in that tool category, with users alone being ultimately responsible for how they use them.

    If that argument prevails and survives appeals and challenges, it would be a major triumph for the nascent generative AI industry and a hugely damaging blow to IP holders and creatives, since it would seriously undermine their argument that AI companies shouldn't be able to include copyrighted material in training data sets without licensing or compensation.
    The reason Disney and NBCU are going after Midjourney specifically seems to be partially down to Midjourney being especially reticent to negotiate with them about licensing fees and prompt restrictions; other generative AI firms have started talking, at least, about paying for content licenses for training data, and have imposed various limitations on their software to prevent the most egregious and obvious forms of copyright violation (at least for famous characters belonging to rich companies; if you're an individual or a smaller company, it's entirely the Wild West out there as regards your IP rights).
    In the process, though, they're essentially risking a court showdown over a set of not-quite-clear legal questions at the heart of this dispute, and if Midjourney were to prevail in that argument, other AI companies would likely back off from engaging with IP holders on this topic.
    To be clear, though, it seems highly unlikely that Midjourney will win that argument, at least not in the medium to long term. Yet depending on how this case moves forward, losing the argument could have equally dramatic consequences – especially if the courts find themselves compelled to consider the question of how, exactly, a generative AI system reproduces a copyrighted character with such precision without storing copyright-infringing data in some manner.
    AI advocates have been trying to handwave around this notion from the outset, but at some point a court is going to have to sit down and confront the fact that the precision with which these systems can replicate copyrighted characters, scenes, and other materials requires that they must have stored that infringing material in some form.
    That it's stored as a scattered mesh of probabilities across the vertices of a high-dimensional vector array, rather than a straightforward, monolithic media file, is clearly important but may ultimately be considered moot. If the data is in the system and can be replicated on request, how that differs from Napster or The Pirate Bay is arguably just a matter of technical obfuscation.
    Not having to defend that technical argument in court thus far has been a huge boon to the generative AI field; if it is knocked over in that venue, it will have knock-on effects on every company in the sector and on every business that uses their products.
    Nobody can be quite sure which of the various rocks and pebbles being kicked on this slope is going to set off the landslide, but there seems to be an increasing consensus that a legal and regulatory reckoning is coming for generative AI.
    Consequently, a lot of what's happening in that market right now has the feel of companies desperately trying to establish products and lock in revenue streams before that happens, because it'll be harder to regulate a technology that's genuinely integrated into the world's economic systems than it is to impose limits on one that's currently only clocking up relatively paltry sales and revenues.

    Keeping an eye on this is crucial for any industry that's started experimenting with AI in its workflows – none more than a creative industry like video games, where various forms of AI usage have been posited, although the enthusiasm and buzz so far massively outweighs any tangible benefits from the technology.
    Regardless of what happens in legal and regulatory contexts, AI is already a double-edged sword for any creative industry.
    Used judiciously, it might help to speed up development processes and reduce overheads. Applied in a slapdash or thoughtless manner, it can and will end up wreaking havoc on development timelines, filling up storefronts with endless waves of vaguely-copyright-infringing slop, and potentially make creative firms, from the industry's biggest companies to its smallest indie developers, into victims of impossibly large-scale copyright infringement rather than beneficiaries of a new wave of technology-fuelled productivity.
    The legal threat now hanging over the sector isn't new, merely amplified. We've known for a long time that AI-generated artwork, code, and text has significant problems from the perspective of intellectual property rights (you can infringe someone else's copyright with it, but generally can't impose your own copyright on its creations – opening careless companies up to a risk of having key assets in their game being technically public domain and impossible to protect).
    Even if you're not using AI yourself, however – even if you're vehemently opposed to it on moral and ethical grounds (which is entirely valid given the highly dubious land-grab these companies have done for their training data) – the Midjourney judgement and its fallout may well impact the creative work you produce yourself and how it ends up being used and abused by these products in future.
    This all has huge ramifications for the games business and will shape everything from how games are created to how IP can be protected for many years to come – a wind of change that's very different and vastly more unpredictable than those we're accustomed to. It's a reminder of just how much of the industry's future is currently being shaped not in development studios and semiconductor labs, but rather in courtrooms and parliamentary committees.
    The ways in which generative AI can be used and how copyright can persist in the face of it will be fundamentally shaped in courts and parliaments, but it's far from the only crucially important topic being hashed out in those venues.
    The ongoing legal turmoil over the opening up of mobile app ecosystems, too, will have huge impacts on the games industry. Meanwhile, the debates over loot boxes, gambling, and various consumer protection aspects related to free-to-play models continue to rumble on in the background.
    Because the industry moves fast while governments move slow, it's easy to forget that that's still an active topic as far as governments are concerned, and hammers may come down at any time.
    Regulation by governments, whether through the passage of new legislation or the interpretation of existing laws in the courts, has always loomed in the background of any major industry, especially one with strong cultural relevance. The games industry is no stranger to that being part of the background heartbeat of the business.
    The 2020s, however, are turning out to be the decade in which many key regulatory issues come to a head all at once, whether it's AI and copyright, app stores and walled gardens, or loot boxes and IAP-based business models.
    Rulings on those topics in various different global markets will create a complex new landscape that will shape the winds that blow through the business, and how things look in the 2030s and beyond will be fundamentally impacted by those decisions.
    As AI faces court challenges from Disney and Universal, legal battles are shaping the industry's future | Opinion
    Silicon advances and design innovations do still push us forward – but the future landscape of the industry is also being sculpted in courtrooms and parliaments. Image credit: Disney / Epic Games. Opinion by Rob Fahey, Contributing Editor. Published on June 13, 2025.
    WWW.GAMESINDUSTRY.BIZ
  • FROM SET TO PIXELS: CINEMATIC ARTISTS COME TOGETHER TO CREATE POETRY

    By TREVOR HOGG

    Denis Villeneuve finds that the difficulty of working with visual effects is that there are sometimes intermediaries between him and the artists, hence the need to be precise with directions to keep things on track.

    If post-production has any chance of going smoothly, there must be a solid on-set relationship between the director, cinematographer and visual effects supervisor. “It’s my job to have a vision and to bring it to the screen,” notes Denis Villeneuve, director of Dune: Part Two. “That’s why working with visual effects requires a lot of discipline. It’s not like you work with a keyboard and can change your mind all the time. When I work with a camera, I commit to a mise-en-scène. I’m trying to take the risk, move forward in one direction and enhance it with visual effects. I push it until it looks perfect. It takes a tremendous amount of time and preparation. Paul Lambert is a perfectionist, and I love that about him. We will never put a shot on the screen that we don’t feel has a certain level of quality. It needs to look as real as the face of my actor.”

    A legendary cinematographer had a significant influence on how Villeneuve approaches digital augmentation. “Someone I have learned a lot from about visual effects is Roger Deakins. I remember that at the beginning, when I was doing Blade Runner 2049, some artwork was not defined enough, and I was like, ‘I will correct that later.’ Roger said, ‘No. Don’t do that. You have to make sure right at the start.’ I’ve learned the hard way that you need to be as precise as you can, otherwise it goes in a lot of directions.”

    Motion capture is visually jarring because your eye is always drawn to the performer in the mocap suit, but it worked out well on Better Man because the same thing happens when he gets replaced by a CG monkey.

    Visual effects enabled the atmospherics on Wolfs to be art directed, which is not always possible with practical snow.

    One of the most complex musical numbers in Better Man is “Rock DJ,” which required LiDAR scans of Regent Street and doing full 3D motion capture with the dancers dancing down the whole length of the street to work out how best to shoot it.

    Cinematographer Dan Mindel favors on-set practical effects because the reactions from the cast come across as being more genuine, which was the case for Twisters.

    Storyboards are an essential part of the planning process. “When I finish a screenplay, the first thing I do is to storyboard, not just to define the visual element of the movie, but also to rewrite the movie through images,” Villeneuve explains. “Those storyboards inform my crew about the design, costumes, accessories and vehicles, and create a visual inner rhythm of the film. This is the first step towards visual effects, where there will be a conversation that will start from the boards. That will be translated into previs to help the animators know where we are going, because the movie has to be made in a certain timeframe and needs choreography to make sure everybody is moving in the same direction.” The approach towards filmmaking has not changed over the years. “You have a camera and a couple of actors in front of you, and it’s about finding the right angle; the rest is noise. I try to protect the intimacy around the camera as much as possible and focus on that, because if you don’t believe the actor, then you won’t believe anything.”

    Before transforming singer Robbie Williams into a CG primate, Michael Gracey started as a visual effects artist. “I feel so fortunate to have come from a visual effects background early on in my career,” recalls Gracey, director of Better Man. “I would sit down and do all the post myself because I didn’t trust anyone to care as much as I did. Fortunately, over the years I’ve met people who do. It’s a huge part of how I even scrapbook ideas together. Early on, I was constantly throwing stuff up in Flame, doing a video test and asking, ‘Is this going to work?’ Jumping into 3D was something I felt comfortable doing. I’ve been able to plan out or previs ideas. It’s an amazing tool to be armed with if you are a director and have big ideas and you’re trying to convey them to a lot of people.” Previs was pivotal in getting Better Man financed. “Off the page, people were like, ‘Is this monkey even going to work?’ Then they were worried that it wouldn’t work in a musical number. We showed them the previs for Feel, the first musical number, and My Way at the end of the film. I would say, ‘If you get any kind of emotion watching these musical numbers, just imagine what it’s going to be like when it’s filmed and is photoreal.’”

    Several shots had to be stitched together to create a ‘oner’ that features numerous costume changes and 500 dancers. “For Rock DJ, we were doing LiDAR scans of Regent Street and full 3D motion capture with the dancers dancing down the whole length of the street to work out all of the transition points and how best to shoot it,” Gracey states. “That process involved Erik Wilson, the Cinematographer; Luke Millar, the Visual Effects Supervisor; Ashley Wallen, the Choreographer; and Patrick Correll, Co-Producer. Patrick would sit on set and, in DaVinci Resolve, take the feed from the camera and check every take against the blueprint that we had already previs’d.” Motion capture is visually jarring to shoot. “Everything that is in-camera looks perfect, then a guy walks in wearing a mocap suit and your eye zooms onto him. But the truth is, your eye does that the moment you replace him with a monkey as well. It worked out quite well because that idea is true to what it is to be famous. A famous person walks into the room and your eye immediately goes to them.”

    Digital effects have had a significant impact on a particular area of filmmaking. “Physical effects were a much higher art form then, or were allowed to be, than they are now,” notes Dan Mindel, Cinematographer on Twisters. “People will decline a real pyrotechnic explosion and do a digital one. But you get a much bigger reaction when there’s actual noise and flash.” It is all about collaboration. Mindel explains, “The principle that I work with is that the visual effects department will make us look great, and we have to give them the raw materials in the best possible form so they can work with it instinctually. Sometimes, as a DP, you might want to do something different, but the bottom line is, you’ve got to listen to these guys, because they know what they want. It gets a bit dogmatic, but most of the time, my relationship with visual effects is good, especially with the guys who have had a foot in the analog world at one point or another and have transitioned into the digital world. When we made Twister, it was an analog movie with digital effects, and it worked great. That’s because everyone on set doing the technical work understood both formats, and we were able to use them well.”

    Digital filmmaking has caused a generational gap. “The younger directors don’t think holistically,” Mindel notes. “It’s much more post-driven because they want to manipulate on the Avid or whatever platform it is going to be. What has happened is that the overreaching nature of these tools has left very little to the imagination. A movie that is heavy on visual effects is mostly conceptualized on paper using computer-generated graphics and color; that insidiously sneaks into the look and feel of the movie before you know it. You see concept art blasted all over production offices. People could get used to looking at those images, and before you know it, that’s how the movie looks. That’s a very dangerous place to be, not to have the imagination to work around an issue that perhaps doesn’t manifest itself until you’re shooting.” There has to be a sense of purpose. Mindel remarks, “The ability to shoot in a way that doesn’t allow any manipulation in post is the only way to guarantee that there’s just one direction the look can go in. But that could be a little dangerous for some people. Generally, the crowd I’m working with is part of a team, and there’s little thought of taking the movie to a different place than what was shot. I work in the DI with the visual effects supervisor, and we look at our work together so we’re all in agreement that it fits into the movie.”

    “All of the advances in technology are a push for greater control,” notes Larkin Seiple, Cinematographer on Everything Everywhere All at Once. “There are still a lot of things that we do with visual effects that we could do practically, but a lot of times it’s more efficient, or we have more attempts at it later in post, than if we had tried to do it practically. I find today, there’s still a debate about what we do on set and what we do later digitally. Many directors have been trying to do more on set, and the best visual effects supervisors I work with push to do everything in-camera as much as possible to make it as realistic as possible.” Storytelling is about figuring out where to invest your time and effort. Seiple states, “I like the adventure of filmmaking. I prefer to go to a mountain top and shoot some of the scenes, get there and be inspired, as opposed to recreate it. Now, if it’s a five-second cutaway, I don’t want production to go to a mountain top and do that. For car work, we’ll shoot the real streets, figure out the time of day and even light the plates for it. Then, I’ll project those on LED walls with actors in a car on a stage. I love doing that because then I get to control how that looks.”

    Visual effects have freed Fallout Cinematographer Stuart Dryburgh to shoot quicker and in places that in the past would have been deemed imperfect because of power lines, out-of-period buildings or the sky.
    Visual effects assist in achieving the desired atmospherics. Seiple says, “On Wolfs, we tried to bring in our own snow for every scene. We would shoot one take, the snow would blow left, and the next take would blow right. Janek Sirrs is probably the best visual effects supervisor I’ve worked with, and he was like, ‘Please turn off the snow. It’ll be a nightmare trying to remove the snow from all these shots then add our own snow back for continuity because you can’t have the snow changing direction every other cut.’ Or we’d have to ‘snow’ a street, which would take ages. Janek would say, ‘Let’s put enough snow on the ground to see the lighting on it and where the actors walk. We’ll do the rest of the street later because we have a perfect reference of what it should look like.’” Certain photographic principles have to be carried over into post-production to make shots believable to the eye. Seiple explains, “When you make all these amazing details that should be out of focus sharper, then the image feels like a visual effect because it doesn’t work the way a lens would work.” Familiarity with the visual effects process is an asset in being able to achieve the best result. “I inadvertently come from a lot of visual effects-heavy shoots and shows, so I’m quick to have an opinion about it. Many directors love to reference the way David Fincher uses visual effects because there is such great behind-the-scenes imagery that showcases how they were able to do simple things. Also, I like to shoot tests even on an iPhone to see if this comp will work or if this idea is a good one.”

    Cinematographer Fabian Wagner and VFX Supervisor John Moffatt spent a lot of time in pre-production for Venom: The Last Dance discussing how to bring out the texture of the symbiote through lighting and camera angles.
    Game of Thrones Director of Photography Fabian Wagner had to make key decisions while prepping and breaking down the script so visual effects had enough time to meet deadline.
    Twister was an analog movie with digital effects that worked well because everyone on set doing the technical work understood both formats.
    For Cinematographer Larkin Seiple, storytelling is about figuring out where to invest your time and effort. Scene from the Netflix series Beef.
    Cinematographer Larkin Seiple believes that all of the advances in technology are a push for greater control, which occurred on Everything Everywhere All at Once.
    Nothing beats reality when it comes to realism. “Every project I do, I talk more about the real elements to bring into the shoot than the visual effects element because the more practical stuff that you can do on set, the more it will embed the visual effects into the image, and, therefore, they’re more real,” observes Fabian Wagner, Cinematographer on Venom: The Last Dance. “It also depends on the job you’re doing in terms of how real or unreal you want it to be. Game of Thrones was a good example because it was a visual effects-heavy show, but they were keen on pushing the reality of things as much as possible. We were doing interactive lighting and practical on-set things to embed the visual effects. It was successful.” Television has a significantly compressed schedule compared to feature films. “There are fewer times to iterate. You have to be much more precise. On Game of Thrones, we knew that certain decisions had to be made early on while we were still prepping and breaking down the script. Because of their due dates, to be ready in time, they had to start the visual effects process for certain dragon scenes months before we even started shooting.”

    “Like everything else, it’s always about communication,” Wagner notes. “I’ve been fortunate to work with extremely talented and collaborative visual effects supervisors, visual effects producers and directors. I have become friends with most of those visual effects departments throughout the shoot, so it’s easy to stay in touch. Even when Venom: The Last Dance was posting, I would be talking to John Moffatt, who was our talented visual effects supervisor. We would exchange emails, text messages or phone calls once a week, and he would send me updates, which we would talk about. If I gave any notes or thoughts, John would listen, and if it were possible to do anything about it, he would. In the end, it’s about those personal relationships, and if you have those, that can go a long way.” Wagner has had to deal with dragons, superheroes and symbiotes. “They’re all the same to me! For the symbiote, we had two previous films to see what they had done, where they had succeeded and where we could improve it slightly. While prepping, John and I spent a lot of time talking about how to bring out the texture of the symbiote and help it with the lighting and camera angles. One of the earliest tests was to see what would happen if we backlit or side lit it as well as trying different textures for reflections. We came up with something we all were happy with, and that’s what we did on set. It was down to trying to speak the same language and aiming for the same thing, which in this case was, ‘How could we make the symbiote look the coolest?’”

    Visual effects has become a crucial department throughout the filmmaking process. “The relationship with the visual effects supervisor is new,” states Stuart Dryburgh, Cinematographer on Fallout. “We didn’t really have that. On The Piano, the extent of the visual effects was having somebody scribbling in a lightning strike over a stormy sky and a little flash of an animated puppet. Runaway Bride had a two-camera setup where one of the cameras pushed into the frame, and that was digitally removed, but we weren’t using it the way we’re using it now. For East of Eden, we’re recreating 19th and early 20th century Connecticut, Boston and Salinas, California in New Zealand. While we have some great sets built and historical buildings that we can use, there is a lot of set extension and modification, and some complete bluescreen scenes, which allow us to more realistically portray a historical environment than we could have done back in the day.” The presence of a visual effects supervisor simplified principal photography. Dryburgh adds, “In many ways, using visual effects frees you to shoot quicker and in places that might otherwise be deemed imperfect because of one little thing, whether it’s power lines or out-of-period buildings or sky. All of those can be easily fixed. Most of us have been doing it for long enough that we have a good idea of what can and can’t be done and how it’s done so that the visual effects supervisor isn’t the arbiter.”

    Lighting cannot be arbitrarily altered in post as it never looks right. “Whether you set the lighting on the set and the background artist has to match that, or you have an existing background and you, as a DP, have to match that – that is the lighting trick to the whole thing,” Dryburgh observes. “Everything has to be the same, a soft or hard light, the direction and color. Those things all need to line up in a composited shot; that is crucial.” Every director has his or her own approach to filmmaking. “Harold Ramis told me, ‘I’ll deal with the acting and the words. You just make it look nice, alright?’ That’s the conversation we had about shots, and it worked out well. Garth Davis, who I’m working with now, is a terrific photographer in his own right and has a great visual sense, so he’s much more involved in anything visual, whether it be the designs of the sets, creation of the visual effects, my lighting or choice of lenses. It becomes much more collaborative. And that applies to the visual effects department as well.” Recreating vintage lenses digitally is an important part of the visual aesthetic. “As digital photography has become crisper, better and sharper, people have chosen to use less-perfect optics, such as lenses that are softer on the edges or give a flare characteristic. Before production, we have the camera department shoot all of these lens grids of different packages and ranges, and visual effects takes that information so they can model every lens. If they’re doing a fully CG background, they can apply that lens characteristic,” remarks Dryburgh.

    Television schedules for productions like House of the Dragon do not allow a lot of time to iterate, so decisions have to be precise.
    Bluescreen and stunt doubles on Twisters.
    “The principle that I work with is that the visual effects department will make us look great, and we have to give them the raw materials in the best possible form so they can work with it instinctually. Sometimes, as a DP, you might want to do something different, but the bottom line is, you’ve got to listen to these guys because they know what they want. It gets a bit dogmatic, but most of the time, my relationship with visual effects is good, and especially the guys who have had a foot in the analog world at one point or another and have transitioned into the digital world.”
    —Dan Mindel, Cinematographer, Twisters

    Cinematographers like Greig Fraser have adopted Unreal Engine. “Greig has an incredible curiosity about new technology, and that helped us specifically with Dune: Part Two,” Villeneuve explains. “Greig was using Unreal Engine to capture natural environments. For example, if we decide to shoot in that specific rocky area, we’ll capture the whole area with drones to recreate the terrain in the computer. If I said, ‘I want to shoot in that valley on November 3rd and have the sun behind the actors. At what time is it? You have to be there at 9:45 am.’ We built the whole schedule like a puzzle to maximize the power of natural light, but that came through those studies, which were made with the software usually used for video games.” Technology is essentially a tool that keeps evolving. Villeneuve adds, “Sometimes, I don’t know if I feel like a dinosaur or if my last movie will be done in this house behind the computer alone. It would be much less tiring to do that, but seriously, the beauty of cinema is the idea of bringing many artists together to create poetry.”
    FROM SET TO PIXELS: CINEMATIC ARTISTS COME TOGETHER TO CREATE POETRY
    By TREVOR HOGG
    Denis Villeneuve (Dune: Part Two) finds that a difficulty of working with visual effects is that there are sometimes intermediaries between him and the artists, hence the need to be precise with directions to keep things on track. (Image courtesy of Warner Bros. Pictures)
    If post-production has any chance of going smoothly, there must be a solid on-set relationship between the director, cinematographer and visual effects supervisor. “It’s my job to have a vision and to bring it to the screen,” notes Denis Villeneuve, director of Dune: Part Two. “That’s why working with visual effects requires a lot of discipline. It’s not like you work with a keyboard and can change your mind all the time. When I work with a camera, I commit to a mise-en-scène. I’m trying to take the risk, move forward in one direction and enhance it with visual effects. I push it until it looks perfect. It takes a tremendous amount of time and preparation. [VFX Supervisor] Paul Lambert is a perfectionist, and I love that about him. We will never put a shot on the screen that we don’t feel has a certain level of quality. It needs to look as real as the face of my actor.” A legendary cinematographer had a significant influence on how Villeneuve approaches digital augmentation. “Someone I have learned a lot from about visual effects is [Cinematographer] Roger Deakins. I remember that at the beginning, when I was doing Blade Runner 2049, some artwork was not defined enough, and I was like, ‘I will correct that later.’ Roger said, ‘No. Don’t do that. You have to make sure right at the start.’ I’ve learned the hard way that you need to be as precise as you can, otherwise it goes in a lot of directions.”
    Motion capture is visually jarring because your eye is always drawn to the performer in the mocap suit, but it worked out well on Better Man because the same thing happens when he gets replaced by a CG monkey.
    Visual effects enabled the atmospherics on Wolfs to be art directed, which is not always possible with practical snow.
    One of the most complex musical numbers in Better Man is “Rock DJ,” which required LiDAR scans of Regent Street and doing full 3D motion capture with the dancers dancing down the whole length of the street to work out how best to shoot it.
    Cinematographer Dan Mindel favors on-set practical effects because the reactions from the cast come across as being more genuine, which was the case for Twisters.
    Storyboards are an essential part of the planning process. “When I finish a screenplay, the first thing I do is to storyboard, not just to define the visual element of the movie, but also to rewrite the movie through images,” Villeneuve explains. “Those storyboards inform my crew about the design, costumes, accessories and vehicles, and create a visual inner rhythm of the film. This is the first step towards visual effects where there will be a conversation that will start from the boards. That will be translated into previs to help the animators know where we are going because the movie has to be made in a certain timeframe and needs choreography to make sure everybody is moving in the same direction.” The approach towards filmmaking has not changed over the years. “You have a camera and a couple of actors in front of you, and it’s about finding the right angle; the rest is noise.
    I try to protect the intimacy around the camera as much as possible and focus on that because if you don’t believe the actor, then you won’t believe anything.”
    WWW.VFXVOICE.COM
    FROM SET TO PIXELS: CINEMATIC ARTISTS COME TOGETHER TO CREATE POETRY
    By TREVOR HOGG

    Denis Villeneuve (Dune: Part Two) finds that the difficulty of working with visual effects is that intermediaries sometimes stand between him and the artists, hence the need to be precise with directions to keep things on track. (Image courtesy of Warner Bros. Pictures)

    If post-production has any chance of going smoothly, there must be a solid on-set relationship between the director, cinematographer and visual effects supervisor. “It’s my job to have a vision and to bring it to the screen,” notes Denis Villeneuve, director of Dune: Part Two. “That’s why working with visual effects requires a lot of discipline. It’s not like you work with a keyboard and can change your mind all the time. When I work with a camera, I commit to a mise-en-scène. I’m trying to take the risk, move forward in one direction and enhance it with visual effects. I push it until it looks perfect. It takes a tremendous amount of time and preparation. [VFX Supervisor] Paul Lambert is a perfectionist, and I love that about him. We will never put a shot on the screen that we don’t feel has a certain level of quality. It needs to look as real as the face of my actor.” A legendary cinematographer had a significant influence on how Villeneuve approaches digital augmentation. “Someone I have learned a lot from about visual effects is [Cinematographer] Roger Deakins. I remember that at the beginning, when I was doing Blade Runner 2049, some artwork was not defined enough, and I was like, ‘I will correct that later.’ Roger said, ‘No. Don’t do that. You have to make sure right at the start.’ I’ve learned the hard way that you need to be as precise as you can, otherwise it goes in a lot of directions.”

    Motion capture is visually jarring because your eye is always drawn to the performer in the mocap suit, but it worked out well on Better Man because the same thing happens when he gets replaced by a CG monkey.
(Image courtesy of Paramount Pictures)

Visual effects enabled the atmospherics on Wolfs to be art directed, which is not always possible with practical snow. (Image courtesy of Apple Studios)

One of the most complex musical numbers in Better Man is “Rock DJ,” which required LiDAR scans of Regent Street and full 3D motion capture with the dancers dancing down the whole length of the street to work out how best to shoot it. (Image courtesy of Paramount Pictures)

Cinematographer Dan Mindel favors on-set practical effects because the reactions from the cast come across as more genuine, which was the case for Twisters. (Image courtesy of Universal Pictures)

Storyboards are an essential part of the planning process. “When I finish a screenplay, the first thing I do is to storyboard, not just to define the visual element of the movie, but also to rewrite the movie through images,” Villeneuve explains. “Those storyboards inform my crew about the design, costumes, accessories and vehicles, and [they] create a visual inner rhythm of the film. This is the first step towards visual effects, where there will be a conversation that will start from the boards. That will be translated into previs to help the animators know where we are going, because the movie has to be made in a certain timeframe and needs choreography to make sure everybody is moving in the same direction.” The approach towards filmmaking has not changed over the years. “You have a camera and a couple of actors in front of you, and it’s about finding the right angle; the rest is noise. I try to protect the intimacy around the camera as much as possible and focus on that because if you don’t believe the actor, then you won’t believe anything.” Before transforming singer Robbie Williams into a CG primate, Michael Gracey started as a visual effects artist. “I feel so fortunate to have come from a visual effects background early on in my career,” recalls Michael Gracey, director of Better Man.
“I would sit down and do all the post myself because I didn’t trust anyone to care as much as I did. Fortunately, over the years I’ve met people who do. It’s a huge part of how I even scrapbook ideas together. Early on, I was constantly throwing stuff up in Flame, doing a video test and asking, ‘Is this going to work?’ Jumping into 3D was something I felt comfortable doing. I’ve been able to plan out or previs ideas. It’s an amazing tool to be armed with if you are a director and have big ideas and you’re trying to convey them to a lot of people.” Previs was pivotal in getting Better Man financed. “Off the page, people were like, ‘Is this monkey even going to work?’ Then they were worried that it wouldn’t work in a musical number. We showed them the previs for Feel, the first musical number, and My Way at the end of the film. I would say, ‘If you get any kind of emotion watching these musical numbers, just imagine what it’s going to be like when it’s filmed and is photoreal.’” Several shots had to be stitched together to create a ‘oner’ that features numerous costume changes and 500 dancers. “For Rock DJ, we were doing LiDAR scans of Regent Street and full 3D motion capture with the dancers dancing down the whole length of the street to work out all of the transition points and how best to shoot it,” Gracey states. “That process involved Erik Wilson, the Cinematographer; Luke Millar, the Visual Effects Supervisor; Ashley Wallen, the Choreographer; and Patrick Correll, Co-Producer. Patrick would sit on set and, in DaVinci Resolve, take the feed from the camera and check every take against the blueprint that we had already previsualized.” Motion capture is visually jarring to shoot. “Everything that is in-camera looks perfect, then a guy walks in wearing a mocap suit and your eye zooms onto him. But the truth is, your eye does that the moment you replace him with a monkey as well. It worked out quite well because that idea is true to what it is to be famous. 
A famous person walks into the room and your eye immediately goes to them.” Digital effects have had a significant impact on a particular area of filmmaking. “Physical effects were a much higher art form then, or were allowed to be, than they are now,” notes Dan Mindel, Cinematographer on Twisters. “People will decline a real pyrotechnic explosion and do a digital one. But you get a much bigger reaction when there’s actual noise and flash.” It is all about collaboration. Mindel explains, “The principle that I work with is that the visual effects department will make us look great, and we have to give them the raw materials in the best possible form so they can work with it instinctually. Sometimes, as a DP, you might want to do something different, but the bottom line is, you’ve got to listen to these guys, because they know what they want. It gets a bit dogmatic, but most of the time, my relationship with visual effects is good, and especially the guys who have had a foot in the analog world at one point or another and have transitioned into the digital world. When we made Twisters, it was an analog movie with digital effects, and it worked great. That’s because everyone on set doing the technical work understood both formats, and we were able to use them well.” Digital filmmaking has caused a generational gap. “The younger directors don’t think holistically,” Mindel notes. “It’s much more post-driven because they want to manipulate on the Avid or whatever platform it is going to be. What has happened is that the overreaching nature of these tools has left very little to the imagination. A movie that is heavy on visual effects is mostly conceptualized on paper using computer-generated graphics and color; that insidiously sneaks into the look and feel of the movie before you know it. You see concept art blasted all over production offices. People could get used to looking at those images, and before you know it, that’s how the movie looks. 
That’s a very dangerous place to be, not to have the imagination to work around an issue that perhaps doesn’t manifest itself until you’re shooting.” There has to be a sense of purpose. Mindel remarks, “The ability to shoot in a way that doesn’t allow any manipulation in post is the only way to guarantee that there’s just one direction the look can go in. But that could be a little dangerous for some people. Generally, the crowd I’m working with is part of a team, and there’s little thought of taking the movie to a different place than what was shot. I work in the DI with the visual effects supervisor, and we look at our work together so we’re all in agreement that it fits into the movie.” “All of the advances in technology are a push for greater control,” notes Larkin Seiple, Cinematographer on Everything Everywhere All at Once. “There are still a lot of things that we do with visual effects that we could do practically, but a lot of times it’s more efficient, or we have more attempts at it later in post than if we had tried to do it practically. I find today, there’s still a debate about what we do on set and what we do later digitally. Many directors have been trying to do more on set, and the best visual effects supervisors I work with push to do everything in-camera as much as possible to make it as realistic as possible.” Storytelling is about figuring out where to invest your time and effort. Seiple states, “I like the adventure of filmmaking. I prefer to go to a mountain top and shoot some of the scenes, get there and be inspired, as opposed to recreating it. Now, if it’s a five-second cutaway, I don’t want production to go to a mountain top and do that. For car work, we’ll shoot the real streets, figure out the time of day and even light the plates for it. Then, I’ll project those on LED walls with actors in a car on a stage. 
I love doing that because then I get to control how that looks.”

Visual effects have freed Fallout Cinematographer Stuart Dryburgh to shoot quicker and in places that in the past would have been deemed imperfect because of power lines, out-of-period buildings or the sky. (Image courtesy of Prime Video)

Visual effects assist in achieving the desired atmospherics. Seiple says, “On Wolfs, we tried to bring in our own snow for every scene. We would shoot one take, the snow would blow left, and the next take would blow right. Janek Sirrs is probably the best visual effects supervisor I’ve worked with, and he was like, ‘Please turn off the snow. It’ll be a nightmare trying to remove the snow from all these shots then add our own snow back for continuity because you can’t have the snow changing direction every other cut.’ Or we’d have to ‘snow’ a street, which would take ages. Janek would say, ‘Let’s put enough snow on the ground to see the lighting on it and where the actors walk. We’ll do the rest of the street later because we have a perfect reference of what it should look like.’” Certain photographic principles have to be carried over into post-production to make shots believable to the eye. Seiple explains, “When you make all these amazing details that should be out of focus sharper, the image feels like a visual effect because it doesn’t work the way a lens would work.” Familiarity with the visual effects process is an asset in achieving the best result. “I inadvertently come from a lot of visual effects-heavy shoots and shows, so I’m quick to have an opinion about it. Many directors love to reference the way David Fincher uses visual effects because there is such great behind-the-scenes imagery that showcases how they were able to do simple things. 
Also, I like to shoot tests even on an iPhone to see if this comp will work or if this idea is a good one.”

Cinematographer Fabian Wagner and VFX Supervisor John Moffatt spent a lot of time in pre-production for Venom: The Last Dance discussing how to bring out the texture of the symbiote through lighting and camera angles. (Image courtesy of Columbia Pictures)

Game of Thrones Director of Photography Fabian Wagner had to make key decisions while prepping and breaking down the script so visual effects had enough time to meet their deadlines. (Image courtesy of HBO)

Twisters was an analog movie with digital effects that worked well because everyone on set doing the technical work understood both formats. (Image courtesy of Universal Pictures)

For Cinematographer Larkin Seiple, storytelling is about figuring out where to invest your time and effort. Scene from the Netflix series Beef. (Image courtesy of Netflix)

Cinematographer Larkin Seiple believes that all of the advances in technology are a push for greater control, as was the case on Everything Everywhere All at Once. (Image courtesy of A24)

Nothing beats reality when it comes to realism. “On every project I do, I talk more about the real elements to bring into the shoot than the visual effects element, because the more practical stuff you can do on set, the more it will embed the visual effects into the image, and, therefore, they’re more real,” observes Fabian Wagner, Cinematographer on Venom: The Last Dance. “It also depends on the job you’re doing in terms of how real or unreal you want it to be. Game of Thrones was a good example because it was a visual effects-heavy show, but they were keen on pushing the reality of things as much as possible. We were doing interactive lighting and practical on-set things to embed the visual effects. It was successful.” Television has a significantly compressed schedule compared to feature films. “There are fewer chances to iterate. You have to be much more precise. 
On Game of Thrones, we knew that certain decisions had to be made early on while we were still prepping and breaking down the script. Because of the due dates, the visual effects process for certain dragon scenes had to start months before we even began shooting.” “Like everything else, it’s always about communication,” Wagner notes. “I’ve been fortunate to work with extremely talented and collaborative visual effects supervisors, visual effects producers and directors. I have become friends with most of those visual effects departments throughout the shoot, so it’s easy to stay in touch. Even when Venom: The Last Dance was posting, I would be talking to John Moffatt, who was our talented visual effects supervisor. We would exchange emails, text messages or phone calls once a week, and he would send me updates, which we would talk about. If I gave any notes or thoughts, John would listen, and if it were possible to do anything about them, he would. In the end, it’s about those personal relationships, and if you have those, that can go a long way.” Wagner has had to deal with dragons, superheroes and symbiotes. “They’re all the same to me! For the symbiote, we had two previous films to see what they had done, where they had succeeded and where we could improve it slightly. While prepping, John and I spent a lot of time talking about how to bring out the texture of the symbiote and help it with the lighting and camera angles. One of the earliest tests was to see what would happen if we backlit or side-lit it, as well as trying different textures for reflections. We came up with something we all were happy with, and that’s what we did on set. It was down to trying to speak the same language and aiming for the same thing, which in this case was, ‘How could we make the symbiote look the coolest?’” Visual effects has become a crucial department throughout the filmmaking process. 
“The relationship with the visual effects supervisor is new,” states Stuart Dryburgh, Cinematographer on Fallout. “We didn’t really have that. On The Piano, the extent of the visual effects was having somebody scribbling in a lightning strike over a stormy sky and a little flash of an animated puppet. Runaway Bride had a two-camera setup where one of the cameras pushed into the frame, and that was digitally removed, but we weren’t using it the way we’re using it now. For [the 2026 Netflix limited series] East of Eden, we’re recreating 19th and early 20th century Connecticut, Boston and Salinas, California in New Zealand. While we have some great sets built and historical buildings that we can use, there is a lot of set extension and modification, and some complete bluescreen scenes, which allow us to more realistically portray a historical environment than we could have done back in the day.” The presence of a visual effects supervisor simplified principal photography. Dryburgh adds, “In many ways, using visual effects frees you to shoot quicker and in places that might otherwise be deemed imperfect because of one little thing, whether it’s power lines or out-of-period buildings or sky. All of those can be easily fixed. Most of us have been doing it for long enough that we have a good idea of what can and can’t be done and how it’s done so that the visual effects supervisor isn’t the arbiter.” Lighting cannot be arbitrarily altered in post as it never looks right. “Whether you set the lighting on the set and the background artist has to match that, or you have an existing background and you, as a DP, have to match that – that is the lighting trick to the whole thing,” Dryburgh observes. “Everything has to be the same, a soft or hard light, the direction and color. Those things all need to line up in a composited shot; that is crucial.” Every director has his or her own approach to filmmaking. “Harold Ramis told me, ‘I’ll deal with the acting and the words. 
You just make it look nice, alright?’ That’s the conversation we had about shots, and it worked out well. [Director] Garth Davis, who I’m working with now, is a terrific photographer in his own right and has a great visual sense, so he’s much more involved in anything visual, whether it be the designs of the sets, creation of the visual effects, my lighting or choice of lenses. It becomes much more collaborative. And that applies to the visual effects department as well.” Recreating vintage lenses digitally is an important part of the visual aesthetic. “As digital photography has become crisper, better and sharper, people have chosen to use less-perfect optics, such as lenses that are softer on the edges or give a flare characteristic. Before production, we have the camera department shoot all of these lens grids of different packages and ranges, and visual effects takes that information so they can model every lens. If they’re doing a fully CG background, they can apply that lens characteristic,” remarks Dryburgh.

Television schedules for productions like House of the Dragon do not allow a lot of time to iterate, so decisions have to be precise. (Image courtesy of HBO)

Bluescreen and stunt doubles on Twisters. (Image courtesy of Universal Pictures)

“The principle that I work with is that the visual effects department will make us look great, and we have to give them the raw materials in the best possible form so they can work with it instinctually. Sometimes, as a DP, you might want to do something different, but the bottom line is, you’ve got to listen to these guys because they know what they want. It gets a bit dogmatic, but most of the time, my relationship with visual effects is good, and especially the guys who have had a foot in the analog world at one point or another and have transitioned into the digital world.” —Dan Mindel, Cinematographer, Twisters

Cinematographers like Greig Fraser have adopted Unreal Engine. 
“Greig has an incredible curiosity about new technology, and that helped us specifically with Dune: Part Two,” Villeneuve explains. “Greig was using Unreal Engine to capture natural environments. For example, if we decide to shoot in that specific rocky area, we’ll capture the whole area with drones to recreate the terrain in the computer. If I said, ‘I want to shoot in that valley on November 3rd and have the sun behind the actors,’ at what time is it? You have to be there at 9:45 am. We built the whole schedule like a puzzle to maximize the power of natural light, but that came through those studies, which were made with the software usually used for video games.” Technology is essentially a tool that keeps evolving. Villeneuve adds, “Sometimes, I don’t know if I feel like a dinosaur or if my last movie will be done in this house behind the computer alone. It would be much less tiring to do that, but seriously, the beauty of cinema is the idea of bringing many artists together to create poetry.”
  • The Nintendo Switch 2 is out today – here’s everything you need to know

    WWW.THEGUARDIAN.COM
    The Nintendo Switch 2 is out today – here’s everything you need to know
    Since its announcement in January, anticipation has been building for the Nintendo Switch 2 – the followup to the gaming titan’s most successful home console, the 150m-selling Nintendo Switch. Major console launches are rarer than they used to be; this is the first since 2020, when Sony’s PlayStation 5 hit shelves. Whether you’re weighing up a purchase or just wondering what all the fuss is about, here’s everything you need to know.The basicsThe Switch 2 is out today, 5 June, priced at £395.99 (US$449.99/A$699/€469.99) or at £429.99 (US$499.99/A$766/€509,99) bundled with its flagship game, Mario Kart World. Like its predecessor, it’s a portable games machine with a built-in screen – you can use as a handheld mini-console when you’re out and about, or slide it into the dedicated dock device and plug it into your TV via an HDMI cable for a big-screen experience at home. A little bigger than the original Switch, with a crisp, clear 7.9in LCD touch screen, as opposed to the old 6.2in display, it comes with two Joy-Con controllers, which are chunkier than the previous versions. These now attach magnetically to each side of the screen with a pleasing clunk, replacing the fiddly sliding mechanism that most Switch owners disliked. They’ve also got bigger L and R buttons on the top, which sounds like a minor detail but is a huge deal for anyone trying to perfect their Mario Kart power-slides.The specBig tech advances … Nintendo Switch 2. Photograph: NintendoThe tech inside the Switch 2 is a lot more advanced than the previous console, featuring a custom nVidia processor, and a screen capable of displaying at 4K resolution (when plugged into a compatible TV) or 1920x1080 resolution in portable mode. It’s also got 5.1 surround sound, and supports high-dynamic range lighting (HDR) graphical effects at frame rates of up to 120hz. 
This brings the Switch 2 almost up to scratch with other modern consoles: most experts place its tech specs somewhere between the PS4 and PS5, or between the Xbox One and Xbox Series X.

In the box

The Nintendo Switch 2 comes with the console itself, two Joy-Con controllers, a power adaptor and USB-C charging cable, a dock, a Joy-Con grip (which allows you to connect the two Joy-Cons together to create a traditional-looking games controller), and two Joy-Con wrist straps to stop them flying out of your hands.

Out of the box

Nintendo is going big on the social features of the console. Its GameShare function will allow you to play compatible games with other people who don’t own a copy – they just need their own Switch or Switch 2, and can play along in the room with you or connect online. This is particularly important for families sharing one copy of a game. Meanwhile, GameChat is kind of like Zoom, but for games: you can invite a bunch of pals into a group video chat session where you can talk to each other while playing the same game, playing different games, or just hanging out. If you all buy the Nintendo Switch 2 Camera, you’ll be able to see little video windows of each other on the screen, too. GameChat requires a paid subscription to Nintendo’s online gaming service, which costs £17.99 (US$19.99/€19.99/A$29.95).

The games

Big news … Mario Kart World. Photograph: Nintendo

The console is launching with around 25 games, though many of these are enhanced versions of older Switch titles. The big newcomers are Mario Kart World, an open-world take on the classic karting game; the introductory game Nintendo Switch 2 Welcome Tour; the co-op survival challenge Survival Kids; and the anti-gravity racer Fast Fusion. Some favourites making it across are Fortnite, Cyberpunk 2077 and The Legend of Zelda: Breath of the Wild/Tears of the Kingdom. Most games will retail for between £45 and £70, and will be available to buy and download online or as physical boxed copies.
You can also still play almost all your old Switch games on the new console, and there’s a huge back catalogue of retro NES, Nintendo 64, SNES and GameCube classics from the 1980s, 90s and 00s available to play with a Nintendo Switch Online subscription.

The accessories

Add-ons … Nintendo Switch 2 Pro controller and camera. Photograph: Nintendo

There are three things you may want to buy alongside the console. The Nintendo Switch 2 Pro controller is a traditional console joypad intended for serious play. Then you have the Nintendo Switch 2 camera, basically a webcam compatible with the GameChat service, but also with any games that might use camera features. You may also want a microSD Express card to provide additional storage for your games.

Where can I buy one?

If you haven’t pre-ordered, you may have to be patient and shop around. Some of the larger retailers, including Amazon, Argos, Currys and John Lewis, are saying they may have a few in stock today, and it’s worth trying Nintendo’s online store. Be extremely wary of buying from private sellers on eBay or similar sites – there will be a lot of con artists out there. Remember when people found their PlayStation 5 deliveries were instead full of bags of rice?
  • Big government is still good, even with Trump in power

It’s easy to look at President Donald Trump’s second term and conclude that the less power and reach the federal government has, the better. After all, a smaller government might provide Trump or someone like him with fewer opportunities to disrupt people’s lives, leaving America less vulnerable to the whims of an aspiring autocrat. Weaker law-enforcement agencies could lack the capacity to enforce draconian policies. The president would have less say in how universities like Columbia conduct their business if they weren’t so dependent on federal funding. And he would have fewer resources to fundamentally change the American way of life.

Trump’s presidency has the potential to reshape an age-old debate between the left and the right: Is it better to have a big government or a small one? The left, which has long advocated for bigger government as a solution to society’s problems, might be inclined to think that in the age of Trump, a strong government may be too risky. Say the United States had a single-payer universal health care system, for example. As my colleague Kelsey Piper pointed out, the government would have a lot of power to decide what sorts of medical treatments should and shouldn’t be covered, and certain forms of care that the right doesn’t support — like abortion or transgender health — would likely get cut when they’re in power. That’s certainly a valid concern. But the dangers Trump poses do not ultimately make the case for a small or weak government, because the principal problem with the Trump presidency is not that he or the federal government has too much power. It’s that there’s not enough oversight.

Reducing the power of the government wouldn’t necessarily protect us.
In fact, “making government smaller” is one of the ways that Trump might be consolidating power.

First things first: What is “big government”?

When Americans are polled about how they feel about “big government” programs — policies like universal health care, Social Security, welfare for the poor — the majority of people tend to support them. Nearly two-thirds of Americans believe the government should be responsible for ensuring everyone has health coverage. But when you ask Americans whether they support “big government” in the abstract, a solid majority say they view it as a threat.

That might sound like a story of contradictions. But it also makes sense, because “big government” can have many different meanings. It can be a police state that surveils its citizens, an expansive regulatory state that establishes and enforces rules for the private sector, a social welfare state that directly provides a decent standard of living for everyone, or some combination of the three. In the United States, the debate over “big government” can also include arguments about federalism, or how much power the federal government should have over states. All these distinctions complicate the debate over the size of government: while someone might support a robust welfare system, they might simultaneously be opposed to being governed by a surveillance state or having the federal government involved in state and local affairs.

As much as Americans like to fantasize about small government, the reality is that the wealthiest economies in the world have all been a product of big government, and the United States is no exception. That form of government includes providing a baseline social safety net, funding basic services, and regulating commerce. It also includes a government that has the capacity to enforce its rules and regulations.

A robust state that caters to the needs of its people, and that is able to respond quickly in times of crisis, is essential. Take the Covid-19 pandemic.
The US government, under both the Trump and Biden administrations, was able to inject trillions of dollars into the economy to avert a sustained economic downturn. As a result, people were able to withstand the economic shocks, and poverty actually declined. Stripping the state of the basic powers it needs to improve the lives of its citizens will only make it less effective and erode people’s faith in it as a central institution, making people less likely to participate in the democratic process, comply with government policies, or even accept election outcomes.

A constrained government does not mean a small government

But what happens when the people in power have no respect for democracy? The argument for a weaker and smaller government often suggests that a smaller government would be more constrained in the harm it can cause, while big government is more unrestrained. In this case, the argument is that if the US had a smaller government, then Trump could not effectively use the power of the state — by, say, deploying federal law enforcement agencies or withholding federal funds — to deport thousands of immigrants, bully universities, and assault fundamental rights like the freedom of speech. But advocating for bigger government does not mean you believe in handing the state unlimited power to do as it pleases. Ultimately, the most important way to constrain government has less to do with its size and scope and more to do with its checks and balances. In fact, one of the biggest checks on Trump’s power so far has been the structure of the US government, not its size. Trump’s most dangerous examples of overreach — his attempts to conduct mass deportations, eliminate birthright citizenship, and revoke student visas and green cards based on political views — have shown how proper oversight can limit government overreach.
To be sure, Trump’s policies have already upended people’s lives, chilled speech, and undermined the principle of due process. But while Trump has pushed through some of his agenda, he hasn’t been able to deliver at the scale he promised. That’s not because the federal government lacks the capacity to do those things; it’s because we have three equal branches of government, and the judicial branch, for all of its shortcomings in the Trump era, is still doing its most basic job of keeping the executive branch in check.

Reforms should include more oversight, not shrinking government

The biggest lesson from Trump’s first term was that America’s system of checks and balances — rules and regulations, norms, and the separate branches of government — wasn’t strong enough. As it turned out, a lot of potential oversight mechanisms did not have enough teeth to meaningfully restrain the president from abusing his power. Trump incited an assault on the US Capitol in an effort to overturn the 2020 election, and Congress ultimately failed in its duty to convict him for his actions. Twice, impeachment was shown to be a useless tool for keeping a president in check.

But again, that’s a problem of oversight, not of the size and power of government. Still, oversight mechanisms need to be baked into big government programs to insulate them from petty politics or volatile changes from one administration to the next. Take the example of the hypothetical single-payer universal health care system. Laws dictating which treatments should be covered should be designed to ensure that changes to them aren’t dictated by the president alone, but through some degree of consensus involving regulatory boards, Congress, and the courts. Ultimately, social programs should have mechanisms that allow for change so that laws don’t become outdated, as they do now.
And while it’s impossible to guarantee that those changes will always be good, the current system of employer-sponsored health insurance is hardly a stable alternative.

By contrast, shrinking government in the way that Republicans often talk about only makes people more vulnerable. Bigger governments — and more bureaucracy — can also insulate public institutions from the whims of an erratic president. For instance, Trump has tried to shutter the Consumer Financial Protection Bureau (CFPB), a regulatory agency that gets in the way of his and his allies’ business. This assault allows Trump to serve his own interests by pleasing his donors.

In other words, Trump is currently trying to make government smaller — by shrinking or eliminating agencies that get in his way — to consolidate power. “Despite Donald Trump’s rhetoric about the size or inefficiency of government, what he has done is eradicate agencies that directly served people,” said Julie Margetta Morgan, president of the Century Foundation, who served as an associate director at the CFPB. “He may use the language of ‘government inefficiency’ to accomplish his goals, but I think what we’re seeing is that the goals are in fact to open up more lanes for big businesses to run roughshod over the American people.”

The problem for small-government advocates is that the alternative to big government is not just small government. It’s also big business, because fewer services, rules, and regulations open the door to privatization and monopolization. And while the government, however big, has to answer to the public, businesses are far less accountable. One example of how business can replace government programs is the Republicans’ effort to overhaul student loan programs in the latest reconciliation bill the House passed, which includes eliminating subsidized loans and limiting the amount of aid students receive.
The idea is that if students can’t get enough federal loans to cover the cost of school, they’ll turn to private lenders instead. “It’s not only cutting Pell Grants and the affordability of student loan programs in order to fund tax cuts to the wealthy, but it’s also creating a gap where [private lenders] are all too happy to come in,” Margetta Morgan said. “This is the small government alternative: It’s cutting back on programs that provided direct services for people — that made their lives better and more affordable — and replacing it with companies that will use that gap as an opportunity for extraction and, in some cases, for predatory services.”

Even with flawed oversight, a bigger and more powerful government is still preferable, because it can address people’s most basic needs, whereas small government and the privatization of public services often lead to worse outcomes.

So while small government might sound like a nice alternative when would-be tyrants rise to power, the alternative to big government would only be more corrosive to democracy, consolidating power in the hands of even fewer people (and businesses). And ultimately, there’s one big way for Trump to succeed at destroying democracy, and that’s not by expanding government but by eliminating the parts of government that get in his way.
    WWW.VOX.COM
    Big government is still good, even with Trump in power
    It’s easy to look at President Donald Trump’s second term and conclude that the less power and reach the federal government has, the better. After all, a smaller government might provide Trump or someone like him with fewer opportunities to disrupt people’s lives, leaving America less vulnerable to the whims of an aspiring autocrat. Weaker law-enforcement agencies could lack the capacity to enforce draconian policies. The president would have less say in how universities like Columbia conduct their business if they weren’t so dependent on federal funding. And he would have fewer resources to fundamentally change the American way of life.Trump’s presidency has the potential to reshape an age-old debate between the left and the right: Is it better to have a big government or a small one? The left, which has long advocated for bigger government as a solution to society’s problems, might be inclined to think that in the age of Trump, a strong government may be too risky. Say the United States had a single-payer universal health care system, for example. As my colleague Kelsey Piper pointed out, the government would have a lot of power to decide what sorts of medical treatments should and shouldn’t be covered, and certain forms of care that the right doesn’t support — like abortion or transgender health — would likely get cut when they’re in power. That’s certainly a valid concern. But the dangers Trump poses do not ultimately make the case for a small or weak government because the principal problem with the Trump presidency is not that he or the federal government has too much power. It’s that there’s not enough oversight.Reducing the power of the government wouldn’t necessarily protect us. 
In fact, “making government smaller” is one of the ways that Trump might be consolidating power.First things first: What is “big government”?When Americans are polled about how they feel about “big government” programs — policies like universal health care, Social Security, welfare for the poor — the majority of people tend to support them. Nearly two-thirds of Americans believe the government should be responsible for ensuring everyone has health coverage. But when you ask Americans whether they support “big government” in the abstract, a solid majority say they view it as a threat.That might sound like a story of contradictions. But it also makes sense because “big government” can have many different meanings. It can be a police state that surveils its citizens, an expansive regulatory state that establishes and enforces rules for the private sector, a social welfare state that directly provides a decent standard of living for everyone, or some combination of the three. In the United States, the debate over “big government” can also include arguments about federalism, or how much power the federal government should have over states. All these distinctions complicate the debate over the size of government: Because while someone might support a robust welfare system, they might simultaneously be opposed to being governed by a surveillance state or having the federal government involved in state and local affairs.As much as Americans like to fantasize about small government, the reality is that the wealthiest economies in the world have all been a product of big government, and the United States is no exception. That form of government includes providing a baseline social safety net, funding basic services, and regulating commerce. It also includes a government that has the capacity to enforce its rules and regulations.A robust state that caters to the needs of its people, that is able to respond quickly in times of crisis, is essential. Take the Covid-19 pandemic. 
The US government, under both the Trump and Biden administrations, was able to inject trillions of dollars into the economy to avert a sustained economic downturn. As a result, people were able to withstand the economic shocks, and poverty actually declined. Stripping the state of the basic powers it needs to improve the lives of its citizens will only make it less effective and erode people’s faith in it as a central institution, making people less likely to participate in the democratic process, comply with government policies, or even accept election outcomes.A constrained government does not mean a small governmentBut what happens when the people in power have no respect for democracy? The argument for a weaker and smaller government often suggests that a smaller government would be more constrained in the harm it can cause, while big government is more unrestrained. In this case, the argument is that if the US had a smaller government, then Trump could not effectively use the power of the state — by, say, deploying federal law enforcement agencies or withholding federal funds — to deport thousands of immigrants, bully universities, and assault fundamental rights like the freedom of speech. But advocating for bigger government does not mean you believe in handing the state unlimited power to do as it pleases. Ultimately, the most important way to constrain government has less to do with its size and scope and more to do with its checks and balances. In fact, one of the biggest checks on Trump’s power so far has been the structure of the US government, not its size. Trump’s most dangerous examples of overreach — his attempts to conduct mass deportations, eliminate birthright citizenship, and revoke student visas and green cards based on political views — have been an example of how proper oversight has the potential to limit government overreach. 
To be sure, Trump’s policies have already upended people’s lives, chilled speech, and undermined the principle of due process. But while Trump has pushed through some of his agenda, he hasn’t been able to deliver at the scale he promised. But that’s not because the federal government lacks the capacity to do those things. It’s because we have three equal branches of government, and the judicial branch, for all of its shortcomings in the Trump era, is still doing its most basic job to keep the executive branch in check. Reforms should include more oversight, not shrinking governmentThe biggest lesson from Trump’s first term was that America’s system of checks and balances — rules and regulations, norms, and the separate branches of government — wasn’t strong enough. As it turned out, a lot of potential oversight mechanisms did not have enough teeth to meaningfully restrain the president from abusing his power. Trump incited an assault on the US Capitol in an effort to overturn the 2020 election, and Congress ultimately failed in its duty to convict him for his actions. Twice, impeachment was shown to be a useless tool to keep a president in check.But again that’s a problem of oversight, not of the size and power of government. Still, oversight mechanisms need to be baked into big government programs to insulate them from petty politics or volatile changes from one administration to the next. Take the example of the hypothetical single-payer universal health care system. Laws dictating which treatments should be covered should be designed to ensure that changes to them aren’t dictated by the president alone, but through some degree of consensus that involves regulatory boards, Congress, and the courts. Ultimately, social programs should have mechanisms that allow for change so that laws don’t become outdated, as they do now. 
And while it's impossible to guarantee that those changes will always be good, the current system of employer-sponsored health insurance is hardly a stable alternative.

By contrast, shrinking government in the way that Republicans often talk about only makes people more vulnerable. Bigger governments — and more bureaucracy — can also insulate public institutions from the whims of an erratic president. For instance, Trump has tried to shutter the Consumer Financial Protection Bureau (CFPB), a regulatory agency that gets in the way of his and his allies' business interests. This assault allows Trump to serve his own interests by pleasing his donors.

In other words, Trump is currently trying to make government smaller — by shrinking or eliminating agencies that get in his way — in order to consolidate power. "Despite Donald Trump's rhetoric about the size or inefficiency of government, what he has done is eradicate agencies that directly served people," said Julie Margetta Morgan, president of the Century Foundation, who served as an associate director at the CFPB. "He may use the language of 'government inefficiency' to accomplish his goals, but I think what we're seeing is that the goals are in fact to open up more lanes for big businesses to run roughshod over the American people."

The problem for small-government advocates is that the alternative to big government is not just small government. It's also big business, because fewer services, rules, and regulations open the door to privatization and monopolization. And while the government, however big, has to answer to the public, businesses are far less accountable. One example of how business can replace government programs is the Republicans' effort to overhaul student loan programs in the latest reconciliation bill the House passed, which includes eliminating subsidized loans and limiting the amount of aid students receive.
The idea is that if students can't get enough federal loans to cover the cost of school, they'll turn to private lenders instead. "It's not only cutting Pell Grants and the affordability of student loan programs in order to fund tax cuts to the wealthy, but it's also creating a gap where [private lenders] are all too happy to come in," Margetta Morgan said. "This is the small government alternative: It's cutting back on programs that provided direct services for people — that made their lives better and more affordable — and replacing it with companies that will use that gap as an opportunity for extraction and, in some cases, for predatory services."

Even with flawed oversight, a bigger and more powerful government is still preferable because it can address people's most basic needs, whereas small government and the privatization of public services often lead to worse outcomes.

So while small government might sound like a nice alternative when would-be tyrants rise to power, it would only be more corrosive to democracy, consolidating power in the hands of even fewer people (and businesses). And ultimately, there's one big way for Trump to succeed at destroying democracy, and that's not by expanding government but by eliminating the parts of government that get in his way.
  • Nintendo’s Switch 2 is the upgrade of my dreams – but it’s not as ‘new’ as some might hope

    Launch week is finally here, and though I would love to be bringing you a proper review of the Nintendo Switch 2 right now, I still don't have one at the time of writing. In its wisdom, Nintendo has decided not to send review units out until the day before release, so as you read this I will be standing impatiently by the door like a dog anxiously awaiting its owner.

    I have played the console, though, for a whole day at Nintendo's offices, so I can give you some first impressions. Hardware-wise, it is the upgrade of my dreams: sturdier Joy-Cons, a beautiful screen, the graphical muscle to make games look as good as I want them to in 2025 (though still not comparable to the high-end PlayStation 5 Pro or a modern gaming PC). I like the understated pops of colour on the controllers, the refined menu with its soothing chimes and blips. Game sharing, online functionality and other basic stuff is frictionless now. I love that Nintendo Switch Online is so reasonably priced, at £18 a year, as opposed to about the same per month for comparable gaming services, and it gives me access to a treasure trove of Nintendo games from decades past.

    But here's the key word in that paragraph: it's an upgrade. After eight years, an upgrade feels rather belated. I was hoping for something actually new, and aside from the fact that you can now use those controllers as mice by turning them sideways and moving them around on a desk or on your lap, there isn't much new in the Switch 2. Absorbed in Mario Kart World, the main launch title, it was easy to forget I was even playing a new console. I do wonder – as I did in January – whether many less gaming-literate families who own a Switch will see a reason to upgrade, given the £400 asking price.

    Brilliant … Mario Kart World. Photograph: Nintendo

    Speaking of Mario Kart World, though: it's brilliant. Totally splendid. It will deservedly sell squillions.
    Alongside the classic competitive grand prix and time trial races, the headline feature is an open, driveable world that you can explore all you like, as any character, picking up characters and costumes and collectibles, and getting into elimination-style races that span the full continent. All the courses are part of one huge map, and they flow right into one another.

    Your kart transforms helpfully into a boat when you hit water, and I found an island with a really tricky challenge where I had to ride seaplanes up towards a skyscraper in the city, driving over their wings from one to the other. Anyone could lose hours driving aimlessly around the colourful collection of mountains, jungles and winding motorways here. There's even a space-station themed course that cleverly echoes the original Donkey Kong arcade game, delivering a nostalgia hit as delightful as Super Mario Odyssey's climactic New Donk City festival.

    Pushing Buttons correspondent Keith Stuart also had a great time with another launch game, Konami's Survival Kids, which is a bit like Overcooked except all the players are working together to survive on a desert island. (Be reassured, if you generally find survival games hard work: it's very much fun over peril.)

    However: I would steer clear of the Nintendo Switch Welcome Tour, an almost belligerently un-fun interactive tour of the console's new features … that costs £7.99. Your tiny avatar walks around a gigantic recreation of a Switch 2 console, looking for invisible plaques that point out its different components. There are displays with uninteresting technical information about, say, the quality of the console's HD rumble. One of the interactive museum displays shows a ball bounding across the screen and asks you to guess how many frames per second it is travelling at. As someone who aggressively does not care about fine technical detail, I was terrible at this. It's like being on the least interesting school trip of your life.

    And it felt remarkably un-Nintendo, so dry and devoid of personality that it made me a little worried.
    Nintendo Labo, by contrast, was a super-fun and accessible way of showing off the original Switch's technical features. I had assumed that Welcome Tour would be made by the same team, but evidently not.

    I couldn't wait to get back to Mario Kart World, which, once again, is fantastic. I'm excited to spend the rest of the week playing it for a proper review. And if you've pre-ordered a Switch 2, you'll have it in your hands in the next 24 hours. For those holding off: we'll have plenty more Switch 2 info and opinions in the next few weeks to help you make a decision.

    What to play

    Arms akimbo … to a T is funny and weird. Illustration: Annapurna Interactive/Steam

    Last week I played through to a T, the beautifully strange, unexpectedly thoughtful new game from Katamari Damacy creator Keita Takahashi. It is about a young teenager who is forever stuck in a T-pose, arms akimbo. As you might imagine, this makes life rather difficult for them, and they must rely on their fluffy little dog to help them through life. It's a kid-friendly game about accepting who you are – I played it with my sons – but it is also extremely funny and weird, and features a song about a giraffe who loves to make sandwiches. I love a game where you don't know what to expect, and I bet that if I asked every single reader of this newsletter to guess how it ends, not one of you would be anywhere close.

    Available on: PS5, Xbox, PC
    Estimated playtime:

    What to read

    Take chances … Remy Siu (left) and Nhi Do accept the Peabody award for 1000xRESIST. Photograph: Charley Gallay/Getty Images

    1000xRESIST, last year’s critical darling sci-fi game about the immigrant experience and the cost of political resistance, won a Peabody award this week. From the creators’ acceptance speech: “I want to say to the games industry, resource those on the margins and seek difference. Take chances again and again. This art form is barely unearthed. It’s too early to define it. Fund the indescribable.”

    Keith Stuart wrote about the largely lost age of midnight launch parties – for the Switch 2 launch, only Smyths Toys is hosting midnight releases. Did you ever go to one of these events? Write in and tell me if so – I remember feeling intensely embarrassed queuing for a Wii on Edinburgh’s Princes Street as a teenager.

    The developers of OpenAI are very proud that their latest artificial “intelligence” model can play Pokémon Red. It’s terrible at it, and has so far taken more than 80 hours to obtain three gym badges. I’m trying not to think about the environmental cost of proving AI is terrible at video games.

    When Imran Khan had a stroke last year, he lost the ability to play games. I found this essay about the role that Kaizo Mario (super-difficult hacked Mario levels) played in his recovery extremely moving.
    Sign up to Pushing Buttons, Keza MacDonald's free weekly newsletter on the world of gaming.

    What to click

    Question Block

    Soothing … Unpacking. Illustration: Humble Games/Steam

    Reader Gemma asks: “At this moment I am cuddling my three-month-old as he naps on the sofa while I'm playing Blue Prince. It might be the best postnatal game: it has very little background sound or music; can be paused any time; is very chill with zero jeopardy; but also has a fascinating storyline and incredible puzzles. I also find myself narrating the letters and talking out loud for the maths puzzles. (Do three-month-olds understand algebra?) Your article made me feel less guilty, so thank you. Any other updated tips for similar games that you've discovered in the last eight years for postnatal gaming?”

    In the small-baby years I played two types of games: five-hour ones that I could complete in a couple of evenings, or endless Stardew Valley/Animal Crossing-type games where you could just drop in and zone out for as long as you needed, and it didn't matter whether you were “achieving” anything. I couldn't play anything with a linear plot because my brain was often mush and I'd simply forget what had happened an hour ago. It's different for everyone, though – my friend Sarah was obsessed with Grand Theft Auto when her baby was wee.

    I became hooked on a couple of exploitative phone games that I won't recommend – don't go near those in a vulnerable brain-state, you'll end up spending hours and £££ on virtual gems to buy dopamine with. Something like Unpacking or A Little to the Left might be soothing for a puzzle-brain like yours (and they're short).
    I'll throw this out there to other gamer mums: what did you play in the early months of parenthood?

    If you've got a question for Question Block – or anything else to say about the newsletter – email us on pushingbuttons@theguardian.com.
  • Noctua's Next Big Thing: Liquid Cooling and Thermosiphons | Technical Deep-Dive

    Coolers News

    Noctua's Next Big Thing: Liquid Cooling and Thermosiphons | Technical Deep-Dive

    June 2, 2025 | Last Updated: 2025-06-02

    Noctua's Computex 2025 showcase includes engineering and design information on their new thermosiphon cooler and CPU liquid cooler.

    The Highlights
    Noctua shows off its upcoming AIO liquid cooler
    The company also shows off its new NF-A12 G2 fan
    Noctua also discusses its Antec Flux Pro Noctua Edition PC case

    Grab a GN15 Large Anti-Static Modmat to celebrate our 15th Anniversary and for a high-quality PC building work surface. The Modmat features useful PC building diagrams and is anti-static conductive. Purchases directly fund our work!

    Intro

    We visited Noctua's booth at Computex, where the company showed off its upcoming liquid cooler, which is set to launch in Q1 2026. Once again, we have to give Noctua an award for the least RGB LED BS we've seen at a trade show, as we couldn't find any in their booth.

    Editor's note: This was originally published on May 20, 2025 as a video. This content has been adapted to written format for this article and is unchanged from the original publication.

    Credits
    Host: Steve Burke
    Camera, Video Editing: Mike Gaglione, Vitalii Makhnovets
    Writing, Web Editing: Jimmy Thang

    The company showed off its AIO liquid cooler prototype, which consisted of 3D-printed pieces that are intended to go into the pump block to reduce some of the most annoying aspects of liquid coolers with pumps, as opposed to thermosiphons. The fan that goes on top of the pump block is an existing Noctua fan with a reshaped frame; mounting it is optional, and it projects air out towards the memory and VRM components.

    We also looked at the company's thermosiphon, which was briefly shown at Computex last year. It is a two-phase thermosiphon, meaning the working fluid undergoes a phase change, which makes it comparable to a heat pipe in a way.
We also got to see a bunch of different types of cold plate designs.We also got another look at the Noctua x Antec Flux Pro case, which we previously covered at Antec’s booth.G2 FansNoctua showed off its 120mm G2 fan, which also appears in the shroud top of the Antec Flux Pro case. A couple things have changed about the fan, which include the RPM offset being a little different.Grab a GN Soldering & Project Mat for a high-quality work surface with extreme heat resistance. These purchases directly fund our operation, including our build-out of the hemi-anechoic chamber for our acoustic testing!When we reviewed the NH-D15 G2, the RPM offset between the 2 fans was about 25, but the fans we saw at Computex are about plus or minus 50.  Noctua provided some first-party data and stated that on a 120x49mm water cooler radiator comparing the G2 fan versus the company’s NF-A12x25 fan under a 200W heat-loud, the G2 fan performed roughly 3 degrees cooler, which is really good.    Paired with an air cooler, there was about a 1 degree difference between the 2 fans, which is a lot for an air cooler. Noctua Liquid CoolerFor its liquid cooler, Noctua is working with Asetek, using the company’s Gen 8 V2 platform.  Asetek has been around for a long time and they’re one of the biggest suppliers. In the old days, they worked with Corsair, NZXT, and basically everyone’s stuff.The landscape has diversified a bit. Apaltek has gotten really big as a supplier. For as much s*** we’ve given Asetek over the years, in our experience, they’ve had fewer widespread failures of gunk buildup compared to competing solutions. Noctua MouseWe don’t cover mice, but Noctua also showed off a mouse with a small fan built into it. 
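    For context on those fan deltas: a cooler's effective thermal resistance is R = ΔT / P, so a fixed temperature gap at a known heat load translates directly into degrees-per-watt. This back-of-envelope sketch is our own arithmetic on the first-party figures above, not GN's or Noctua's methodology, and the function name is illustrative:

```typescript
// Back-of-envelope only: the change in effective thermal resistance
// between two fans on the same cooler, at the same heat load, is
// deltaR = deltaT / P (degrees C per watt).
function resistanceDelta(deltaTempC: number, heatLoadW: number): number {
  return deltaTempC / heatLoadW;
}

// G2 vs. NF-A12x25 on the radiator: ~3 degrees C improvement at a 200 W heat load
const gain = resistanceDelta(3, 200);
console.log(`${gain.toFixed(3)} °C/W`); // prints "0.015 °C/W"
```

    Small as 0.015 °C/W sounds, it is a meaningful margin at radiator-level resistances, which is why the 3-degree result reads as "really good" above.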
    Noctua's Jakob Dellinger Interview

    Visit our Patreon page to contribute a few dollars toward this website's operation. Additionally, when you purchase through links to retailers on our site, we may earn a small affiliate commission.

    We wrapped up our Noctua coverage by interviewing Noctua's Jakob Dellinger. Make sure to watch our Computex video, where we do a deeper dive into the company's upcoming liquid cooler, how a thermosiphon works, and more.
    GAMERSNEXUS.NET
  • Design to Code with the Figma MCP Server

    Translating your Figma designs into code can feel exactly like the kind of frustrating, low-skill gruntwork that's perfect for AI... except that most of us have also watched AI butcher hopeful screenshots into unresponsive spaghetti.

    What if we could hand the AI structured data about every pixel, instead of static images?

    This is how Figma Model Context Protocol (MCP) servers work. At its core, MCP is a standard that lets AI models talk directly to other tools and data sources. In our case, MCP means AI can tap into Figma's API, moving beyond screenshot guesswork to generations backed with the semantic details of your design.

    Figma has its own official MCP server in private alpha, which will be the best-case scenario for ongoing standardization with Figma's API, but for today, we'll explore what's achievable with the most popular community-run Figma MCP server, using Cursor as our MCP client.

    The anatomy of a design handoff, and why Figma MCP is a step forward

    It's helpful to know first what problem we're trying to solve with Figma MCP. In case you haven't had the distinct pleasure of experiencing a typical design handoff to engineering, let me take you on a brief tour:

    1. Someone in your org, usually with a lot of opinions, decides on a new feature, component, or page that needs to be added to the code.
    2. Your design team creates a mockup. It is beautiful and full of potential. If you're really lucky, it's even practical to implement in code. You're often not really lucky.
    3. You begin to think about how to implement the design. Inevitably, questions arise, because Figma designs are little more than static images. What happens when you hover this button? Is there an animation on scroll? Is this still legible at tablet size?
    4. There is a lot of back and forth, during which you engineer, scrap work, engineer, scrap work, and finally arrive at a passable version, known as passable to you because it seems to piss everyone off equally.
    5. Now, finally, you can do the fun part: finesse. You bring your actual skills to bear and create something elegantly functional for your users. There may be more iterations after this, but you're happy for now.

    Sound familiar? Hopefully, it goes better at your org.

    Where AI fits into the design-to-code process

    Since AI arrived on the scene, everyone's been trying to shoehorn it into everything. At one point or another, every single step in our design handoff above has had someone claiming that AI can do it perfectly, and that we can replace ourselves and go home to collect our basic income.

    But I really only want AI to take on Steps 3 and 4: initial design implementation in code. For the rest, I very much like humans in charge. This is why something like a design-to-code AI excites me. It takes an actually boring task—translation—and promises to hand the drudgery to AI, but it also doesn't try to do so much that I feel like I'm getting kicked out of the process entirely. AI scaffolds the boilerplate, and I can just edit the details.

    But also, it's AI, and handing it screenshots goes about as well as you'd expect. It's like if you've ever tried to draw a friend's face from memory. Sure, you can kinda tell it's them.

    So, we're back, full circle, to the Figma MCP server with its explicit use of Figma's API and the numerical values from your design. Let's try it and see how much better the results may be.

    How to use the Figma MCP server

    Okay, down to business. Feel free to follow along. We're going to:

    Get Figma credentials and a sample design
    Get the MCP server running in Cursor
    Set up a quick target repo
    Walk through an example design-to-code flow

    Step 1: Get your Figma file and credentials

    If you've already got some Figma designs handy, great! It's more rewarding to see your own designs come to life. Otherwise, feel free to visit Figma's listing of open design systems and pick one like the Material 3 Design Kit.

    I'll be using this screen from the Material 3 Design Kit for my test. Note that you may have to copy/paste the design to your own file, right-click the layer, and "detach instance," so that it's no longer a component. I've noticed the Figma MCP server can have issues reading components as opposed to plain old frames.

    Next, you'll need your Personal Access Token:

    Head to your Figma account settings.
    Go to the Security tab.
    Generate a new token with the permissions and expiry date you prefer.

    Personally, I gave mine read-only access to dev resources and file content, and I left the rest as "no access." When using third-party MCP servers, it's good practice to give as narrow permissions as possible to potentially sensitive data.

    Step 2: Set up your MCP client

    Now that we've got our token, we can hop into an MCP client of your choosing. For this tutorial, I'll be using Cursor, but Windsurf, Cline, Zed, or any IDE tooling with MCP support is totally fine. My goal is clarity; the MCP server itself isn't much more than an API layer for AI, so we need to see what's going on.

    In Cursor, head to Cursor Settings -> MCP -> Add new global MCP server. Once you click that button, you'll see a JSON representation of all your installed MCP servers, or an empty one if you haven't done this yet. You can add the community Figma MCP server like so, substituting your token for YOUR-KEY:

    {
      "mcpServers": {
        "Framelink Figma MCP": {
          "command": "npx",
          "args": ["-y", "figma-developer-mcp", "--figma-api-key=YOUR-KEY", "--stdio"]
        }
      }
    }

    To ensure Cursor can use npx, make sure you have Node installed on your system.

    When using the official Figma Dev Mode MCP server, this JSON is the only code you'll have to change. Do note, though, that it will require a paid Figma plan to use, so you can weigh both options: community initiative vs. standardized support.

    Now, when you prompt Cursor in Agent mode, you'll see the AI make tool calls to the MCP server when you say things like, "Use the Figma MCP to..." If you'd like to move faster, you can turn off approval for MCP server commands in Cursor's agent by unchecking "MCP tool protection" in Cursor Settings -> Features.

    Step 3: Set up a target repo

    Next, we'll need somewhere to actually put the resulting code. When using this workflow, you're not always going to be starting from scratch; good design to code means implementing Figma designs in existing repos. For our purposes today, I'll just spin up a Next.js starter template with npx create-next-app@latest.

    Step 4: Showtime

    Okay, we should be all set. Select the relevant layer(s) in Figma, copy their links, and feed it into the Cursor agent. My prompt is just:

    Can you replace my homepage with this Figma design? It should be a basic chat interface.

    Figma frame: <;

    Please use the Figma MCP server.

    Thanks, and by the way, when the robot apocalypse comes, I'm one of the good ones.

    Let's see it in action. And here's an example of some of the resulting code that it made (the newMessage state and the handleKeyPress/handleSendMessage handlers are defined elsewhere in the generated component):

    {/* Message Input */}
    <div className="p-6 bg-white border-t border-gray-200">
      <div className="flex items-center space-x-4">
        <button className="p-2 rounded-full hover:bg-gray-100">
          <svg className="w-6 h-6 text-gray-500" fill="currentColor" viewBox="0 0 24 24">
            <path d="M12 2C6.48 2 2 6.48 2 12s4.48 10 10 10 10-4.48 10-10S17.52 2 12 2zm5 11h-4v4h-2v-4H7v-2h4V7h2v4h4v2z"/>
          </svg>
        </button>

        <button className="p-2 rounded-full hover:bg-gray-100">
          <svg className="w-6 h-6 text-gray-500" fill="currentColor" viewBox="0 0 24 24">
            <path d="M11.99 2C6.47 2 2 6.48 2 12s4.47 10 9.99 10C17.52 22 22 17.52 22 12S17.52 2 11.99 2zM12 20c-4.42 0-8-3.58-8-8s3.58-8 8-8 8 3.58 8 8-3.58 8-8 8zm3.5-9c.83 0 1.5-.67 1.5-1.5S16.33 8 15.5 8 14 8.67 14 9.5s.67 1.5 1.5 1.5zm-7 0c.83 0 1.5-.67 1.5-1.5S9.33 8 8.5 8 7 8.67 7 9.5 7.67 11 8.5 11zm3.5 6.5c2.33 0 4.31-1.46 5.11-3.5H6.89c.8 2.04 2.78 3.5 5.11 3.5z"/>
          </svg>
        </button>

        <div className="flex-1 relative">
          <div className="flex items-center bg-gray-100 rounded-full px-4 py-3">
            <button className="p-1 rounded-full hover:bg-gray-200 mr-3">
              <svg className="w-5 h-5 text-gray-500" fill="currentColor" viewBox="0 0 24 24">
                <path d="M3 18h18v-2H3v2zm0-5h18v-2H3v2zm0-7v2h18V6H3z"/>
              </svg>
            </button>

            <input
              type="text"
              value={newMessage}
              onChange={(e) => setNewMessage(e.target.value)}
              onKeyPress={handleKeyPress}
              placeholder="Type a message..."
              className="flex-1 bg-transparent outline-none text-gray-900 placeholder-gray-500"
            />

            <button
              onClick={handleSendMessage}
              className="p-1 rounded-full hover:bg-gray-200 ml-3"
            >
              <svg className="w-5 h-5 text-gray-500" fill="currentColor" viewBox="0 0 24 24">
                <path d="M15.5 14h-.79l-.28-.27C15.41 12.59 16 11.11 16 9.5 16 5.91 13.09 3 9.5 3S3 5.91 3 9.5 5.91 16 9.5 16c1.61 0 3.09-.59 4.23-1.57l.27.28v.79l5 4.99L20.49 19l-4.99-5zm-6 0C7.01 14 5 11.99 5 9.5S7.01 5 9.5 5 14 7.01 14 9.5 11.99 14 9.5 14z"/>
              </svg>
            </button>
          </div>
        </div>
      </div>
    </div>

    In total, the AI wrote a 278-line component that mostly works, in about two minutes. Honestly, not bad for a single shot. I can use a few more prompts to clean up the code, and then go in there by hand to finesse some of the CSS, which AI never seems to get as clean as I like. But it definitely saves me time over setting this all up by hand.

    How to get better results from Figma MCP

    There are a few things we can do to make the results even better:

    Within your prompt, help the AI understand the purpose of the design and how exactly it fits into your existing code.
    Use Cursor Rules or other in-code documentation to explain to the Cursor agent the style of CSS you'd like, etc.
    Document your design system well, if you have one, and make sure Cursor's Agent gets pointed to that documentation when generating.
    Don't overwhelm the agent. Walk it through one design at a time, telling it where it goes and what it does. The process isn't fully automatic yet.

    Basically, it all boils down to more context, given granularly. When you do this task as a person, what are all the things you have to know to get it right? Break that down, write it in markdown files, and then point the agent there every time you need to do this task. Some markdown files you might attach in all design generations are:

    A design system component list
    A CSS style guide
    A framework style guide
    Test suite rules
    Explicit instructions to iterate on failed lints, TypeScript checks, and tests

    Individual prompts could just include what the new component should do and how it fits in the app. Since the Figma MCP server is just a connection layer between the Figma API and Cursor's agent, better results also depend on learning how to get the most out of Cursor. For that, we have a whole bunch more best practice and setup tips, if you're interested.

    More than anything, don't expect perfect results. Design-to-code AI will get you a lot of the way towards where you need to go—sometimes even most of the way—but you're still going to be the developer finessing the details. The goal is just to save a little time. You're not trying to replace yourself.

    Current limitations of Figma MCP

    Personally, I like this Figma MCP workflow. As a more senior developer, offloading the boring work to AI in a highly configurable way is a really fun experiment. But there are still a lot of limitations.

    MCP is a dev-only playground. Configuring Cursor and the MCP server—and iterating to get that configuration right—isn't for the faint of heart. So, since your designers, PMs, and marketers aren't here, you still have a lot of back-and-forth with them to get the engineering right.
    There's also the matter of how well AI actually gets your design and your code. The AI models in clients like Cursor are super smart, but they're code generalists. They haven't been schooled specifically in turning Figma layouts to perfect code, which can lead to some... creative... interpretations. Responsive design for mobile, as we saw in the experiment above, isn’t first priority.
    It's not a deterministic process. Even if AI has perfect access to Figma data, it can still go off the rails. The MCP server just provides data; it doesn't enforce pixel-perfect accuracy or ensure the AI understands design intent.
    Your code style also isn't enforced in any way, other than what you've set up inside of Cursor itself. Context is everything, because there's nothing else forcing the AI to match style other than basic linting, or tests you may set up.

    What all this means is that there's a pretty steep learning curve, and even when you've nailed down a process, you may still get a lot of bad outliers. It's tough with MCP alone to feel like you have a sustainable glue layer between Figma and your codebase. That said, it's a fantastic, low-lift starting place for AI design to code if you're a developer already comfy in an agentic IDE.

    Builder's approach to design to code

    So, what if you're not a developer, or you're looking for a more predictable, sustainable workflow? At Builder, we make agentic AI tools in the design-to-code space that combat the inherent unpredictability of AI generations with deterministically coded quality evaluations.

    Figma to code is a solved problem for us already. Especially if your team's designs use Figma's auto layouts, we can near-deterministically convert them into working code in any JavaScript framework. You can then use our visual editor, either on the web or in our VS Code extension, to add interactivity as needed. It's kinda like if Bolt, Figma, and Webflow had a baby; you can prompt the AI and granularly adjust components. Vibe code DOOM or just fix your padding. Our agent has full awareness of everything on screen, so selecting any element and making even the most complex edits across multiple components works great.

    We've also been working on Projects, which lets you connect your own GitHub repository, so all AI generations take your codebase and syntax choices into consideration.
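    Whichever tool sits in the middle, the core translation step is the same: structured Figma node data goes in, styling code comes out. Here's a minimal sketch of that mapping; the node shape mirrors a few fields you'd see in Figma's REST API responses (absoluteBoundingBox, fills, cornerRadius), but the interface and function names are ours, purely for illustration:

```typescript
// Sketch: mapping Figma-style node data to CSS properties, the kind of
// deterministic translation an AI performs when handed structured values
// instead of screenshots. Illustrative only; a few API-like fields.
interface FigmaNode {
  absoluteBoundingBox: { width: number; height: number };
  cornerRadius?: number;
  fills: { type: "SOLID"; color: { r: number; g: number; b: number } }[];
}

// Figma color channels are 0-1 floats; CSS wants 0-255 integers.
const channel = (c: number): number => Math.round(c * 255);

function nodeToCss(node: FigmaNode): Record<string, string> {
  const { width, height } = node.absoluteBoundingBox;
  const css: Record<string, string> = {
    width: `${width}px`,
    height: `${height}px`,
  };
  const fill = node.fills[0];
  if (fill?.type === "SOLID") {
    const { r, g, b } = fill.color;
    css["background-color"] = `rgb(${channel(r)}, ${channel(g)}, ${channel(b)})`;
  }
  if (node.cornerRadius !== undefined) {
    css["border-radius"] = `${node.cornerRadius}px`;
  }
  return css;
}

// Example: a rounded input bar like the chat UI earlier in this article.
const inputBar: FigmaNode = {
  absoluteBoundingBox: { width: 320, height: 48 },
  cornerRadius: 24,
  fills: [{ type: "SOLID", color: { r: 0.95, g: 0.95, b: 0.95 } }],
};

console.log(nodeToCss(inputBar));
// width: "320px", height: "48px",
// background-color: "rgb(242, 242, 242)", border-radius: "24px"
```

    A real converter also has to handle auto layout, text styles, constraints, and component variants, which is where most of the difficulty (and most of the AI's "creative interpretation") lives.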
    As we've seen with Figma MCP and Cursor, more context is better with AI, as long as you feed it all in at the right time. Projects syncs your design system across Figma and code, and you can turn any change into a PR for you and your team to review.

    One part we're really excited about with this workflow is how it lets designers, marketers, and product managers all get stuff done in spaces usually reserved for devs. As we've been dogfooding internally, we've seen boards of Jira papercut tickets just kinda... vanish. Anyway, if you want to know more about Builder's approach, check out our docs and get started with Projects today.

    So, is the Figma MCP worth your time?

    Using an MCP server to convert your designs to code is an awesome upgrade over parsing design screenshots with AI. Its data-rich approach gets you much farther along, much faster than developer effort alone. And with Figma's official Dev Mode MCP server launching out of private alpha soon, there's no better time to get used to the workflow and to test out its strengths and weaknesses. Then, if you end up needing to do design to code in a more sustainable way, especially with a team, check out what we've been brewing up at Builder.

    Happy design engineering!
It's not a deterministic process. Even if AI has perfect access to Figma data, it can still go off the rails. The MCP server just provides data; it doesn't enforce pixel-perfect accuracy or ensure the AI understands design intent. Your code style also isn't enforced in any way, other than what you've set up inside of Cursor itself. Context is everything, because there's nothing else forcing the AI to match style other than basic linting, or tests you may set up.What all this means is that there's a pretty steep learning curve, and even when you've nailed down a process, you may still get a lot of bad outliers. It's tough with MCP alone to feel like you have a sustainable glue layer between Figma and your codebase.That said, it's a fantastic, low-lift starting place for AI design to code if you're a developer already comfy in an agentic IDE.Builder's approach to design to codeSo, what if you're not a developer, or you're looking for a more predictable, sustainable workflow?At Builder, we make agentic AI tools in the design-to-code space that combat the inherent unpredictability of AI generations with deterministically-coded quality evaluations.Figma to code is a solved problem for us already. Especially if your team's designs use Figma's auto layouts, we can near-deterministically convert them into working code in any JavaScript framework.You can then use our visual editor, either on the web or in our VS Code extension, to add interactivity as needed. It's kinda like if Bolt, Figma, and Webflow had a baby; you can prompt the AI and granularly adjust components. Vibe code DOOM or just fix your padding. Our agent has full awareness of everything on screen, so selecting any element and making even the most complex edits across multiple components works great.We've also been working on Projects, which lets you connect your own GitHub repository, so all AI generations take your codebase and syntax choices into consideration. 
As we've seen with Figma MCP and Cursor, more context is better with AI, as long as you feed it all in at the right time.Projects syncs your design system across Figma and code, and you can make any change into a PRfor you and your team to review.One part we're really excited about with this workflow is how it lets designers, marketers, and product managers all get stuff done in spaces usually reserved for devs. As we've been dogfooding internally, we've seen boards of Jira papercut tickets just kinda... vanish.Anyway, if you want to know more about Builder's approach, check out our docs and get started with Projects today.So, is the Figma MCP worth your time?Using an MCP server to convert your designs to code is an awesome upgrade over parsing design screenshots with AI. Its data-rich approach gets you much farther along, much faster than developer effort alone.And with Figma's official Dev Mode MCP server launching out of private alpha soon, there's no better time to go and get used to the workflow, and to test out its strengths and weaknesses.Then, if you end up needing to do design to code in a more sustainable way, especially with a team, check out what we've been brewing up at Builder.Happy design engineering! #design #code #with #figma #mcp
    WWW.BUILDER.IO
    Design to Code with the Figma MCP Server
Translating your Figma designs into code can feel exactly like the kind of frustrating, low-skill gruntwork that's perfect for AI... except that most of us have also watched AI butcher hopeful screenshots into unresponsive spaghetti.

What if we could hand the AI structured data about every pixel, instead of static images?

This is how Figma Model Context Protocol (MCP) servers work. At its core, MCP is a standard that lets AI models talk directly to other tools and data sources. In our case, MCP means AI can tap into Figma's API, moving beyond screenshot guesswork to generations backed by the semantic details of your design.

Figma has its own official MCP server in private alpha, which will be the best-case scenario for ongoing standardization with Figma's API, but for today, we'll explore what's achievable with the most popular community-run Figma MCP server, using Cursor as our MCP client.

The anatomy of a design handoff, and why Figma MCP is a step forward

It helps to know first what problem we're trying to solve with Figma MCP. In case you haven't had the distinct pleasure of experiencing a typical design handoff to engineering, let me take you on a brief tour:

1. Someone in your org, usually with a lot of opinions, decides on a new feature, component, or page that needs to be added to the code.
2. Your design team creates a mockup. It is beautiful and full of potential. If you're really lucky, it's even practical to implement in code. You're often not really lucky.
3. You begin to think about how to implement the design. Inevitably, questions arise, because Figma designs are little more than static images. What happens when you hover this button? Is there an animation on scroll? Is this still legible at tablet size?
4. There is a lot of back and forth, during which you engineer, scrap work, engineer, scrap work, and finally arrive at a passable version, passable because it seems to piss everyone off equally.
5. Now, finally, you can do the fun part: finesse.
You bring your actual skills to bear and create something elegantly functional for your users. There may be more iterations after this, but you're happy for now.

Sound familiar? Hopefully, it goes better at your org.

Where AI fits into the design-to-code process

Since AI arrived on the scene, everyone's been trying to shoehorn it into everything. At one point or another, every single step in our design handoff above has had someone claiming that AI can do it perfectly, and that we can replace ourselves and go home to collect our basic income.

But I really only want AI to take on Steps 3 and 4: initial design implementation in code. For the rest, I very much like humans in charge. This is why something like design-to-code AI excites me. It takes an actually boring task, translation, and promises to hand the drudgery to AI, but it doesn't try to do so much that I feel like I'm getting kicked out of the process entirely. AI scaffolds the boilerplate, and I can just edit the details.

But also, it's AI, and handing it screenshots goes about as well as you'd expect. It's like trying to draw a friend's face from memory: sure, you can kinda tell it's them.

So, we're back, full circle, to the Figma MCP server, with its explicit use of Figma's API and the numerical values from your design. Let's try it and see how much better the results can be.

How to use the Figma MCP server

Okay, down to business. Feel free to follow along. We're going to:

1. Get Figma credentials and a sample design
2. Get the MCP server running in Cursor (or your client of choice)
3. Set up a quick target repo
4. Walk through an example design-to-code flow

Step 1: Get your Figma file and credentials

If you've already got some Figma designs handy, great! It's more rewarding to see your own designs come to life.
Otherwise, feel free to visit Figma's listing of open design systems and pick one like the Material 3 Design Kit. I'll be using this screen from the Material 3 Design Kit for my test.

Note that you may have to copy/paste the design to your own file, right-click the layer, and "detach instance," so that it's no longer a component. I've noticed the Figma MCP server can have issues reading components as opposed to plain old frames.

Next, you'll need your Personal Access Token:

1. Head to your Figma account settings.
2. Go to the Security tab.
3. Generate a new token with the permissions and expiry date you prefer.

Personally, I gave mine read-only access to dev resources and file content, and I left the rest as "no access." When using third-party MCP servers, it's good practice to grant as narrow permissions as possible to potentially sensitive data.

Step 2: Set up your MCP client (Cursor)

Now that we've got our token, we can hop into an MCP client of your choosing. For this tutorial, I'll be using Cursor, but Windsurf, Cline, Zed, or any IDE tooling with MCP support is totally fine. (Here's a breakdown of the differences.) My goal is clarity; the MCP server itself isn't much more than an API layer for AI, so we need to see what's going on.

In Cursor, head to Cursor Settings -> MCP -> Add new global MCP server. Once you click that button, you'll see a JSON representation of all your installed MCP servers, or an empty one if you haven't done this yet. You can add the community Figma MCP server like so:

{
  "mcpServers": {
    "Framelink Figma MCP": {
      "command": "npx",
      "args": ["-y", "figma-developer-mcp", "--figma-api-key=YOUR_FIGMA_ACCESS_TOKEN", "--stdio"]
    }
  }
}

To ensure Cursor can use npx, make sure you have Node installed on your system. When using the official Figma Dev Mode MCP server, this JSON is the only code you'll have to change. Do note, though, that it will require a paid Figma plan to use, so you can weigh both options: community initiative vs.
standardized support.

Now, when you prompt Cursor in Agent mode, you'll see the AI make tool calls to the MCP server when you say things like, "Use the Figma MCP to..." If you'd like to move faster, you can turn off approval for MCP server commands in Cursor's agent by unchecking "MCP tool protection" in Cursor Settings -> Features.

Step 3: Set up a target repo

Next, we'll need somewhere to actually put the resulting code. When using this workflow, you're not always going to be starting from scratch; good design to code means implementing Figma designs in existing repos. For our purposes today, I'll just spin up a Next.js starter template with npx create-next-app@latest.

Step 4: Showtime

Okay, we should be all set. Select the relevant layer(s) in Figma, copy their links, and feed them into the Cursor agent. My prompt is just:

Can you replace my homepage with this Figma design? It should be a basic chat interface. Figma frame: <https://www.figma.com/design/CPDcrzkVChAzQ3q1pC5mXd/Figma-MCP-vs.-Builder-Fusion?node-id=2-215&t=K6v805pKyoU4FqdA-4> Please use the Figma MCP server.
Thanks, and by the way, when the robot apocalypse comes, I'm one of the good ones.

Let's see it in action. And here's an example of some of the resulting code that it made:

{/* Message Input */}
<div className="p-6 bg-white border-t border-[#E8DEF8]">
  <div className="flex items-center space-x-4">
    <button className="p-2 rounded-full hover:bg-[#F3EDF7]">
      <svg className="w-6 h-6 text-[#1D1B20]" fill="currentColor" viewBox="0 0 24 24">
        <path d="M12 2C6.48 2 2 6.48 2 12s4.48 10 10 10 10-4.48 10-10S17.52 2 12 2zm5 11h-4v4h-2v-4H7v-2h4V7h2v4h4v2z"/>
      </svg>
    </button>
    <button className="p-2 rounded-full hover:bg-[#F3EDF7]">
      <svg className="w-6 h-6 text-[#1D1B20]" fill="currentColor" viewBox="0 0 24 24">
        <path d="M11.99 2C6.47 2 2 6.48 2 12s4.47 10 9.99 10C17.52 22 22 17.52 22 12S17.52 2 11.99 2zM12 20c-4.42 0-8-3.58-8-8s3.58-8 8-8 8 3.58 8 8-3.58 8-8 8zm3.5-9c.83 0 1.5-.67 1.5-1.5S16.33 8 15.5 8 14 8.67 14 9.5s.67 1.5 1.5 1.5zm-7 0c.83 0 1.5-.67 1.5-1.5S9.33 8 8.5 8 7 8.67 7 9.5 7.67 11 8.5 11zm3.5 6.5c2.33 0 4.31-1.46 5.11-3.5H6.89c.8 2.04 2.78 3.5 5.11 3.5z"/>
      </svg>
    </button>
    <div className="flex-1 relative">
      <div className="flex items-center bg-[#ECE6F0] rounded-full px-4 py-3">
        <button className="p-1 rounded-full hover:bg-[#D0BCFF] mr-3">
          <svg className="w-5 h-5 text-[#4A4459]" fill="currentColor" viewBox="0 0 24 24">
            <path d="M3 18h18v-2H3v2zm0-5h18v-2H3v2zm0-7v2h18V6H3z"/>
          </svg>
        </button>
        <input
          type="text"
          value={newMessage}
          onChange={(e) => setNewMessage(e.target.value)}
          onKeyPress={handleKeyPress}
          placeholder="Type a message..."
          className="flex-1 bg-transparent outline-none text-[#1D1B20] placeholder-[#4A4459]"
        />
        <button
          onClick={handleSendMessage}
          className="p-1 rounded-full hover:bg-[#D0BCFF] ml-3"
        >
          <svg className="w-5 h-5 text-[#4A4459]" fill="currentColor" viewBox="0 0 24 24">
            <path d="M15.5 14h-.79l-.28-.27C15.41 12.59 16 11.11 16 9.5 16 5.91 13.09 3 9.5 3S3 5.91 3 9.5 5.91 16 9.5 16c1.61 0 3.09-.59 4.23-1.57l.27.28v.79l5 4.99L20.49 19l-4.99-5zm-6 0C7.01 14 5 11.99 5 9.5S7.01 5 9.5 5 14 7.01 14 9.5 11.99 14 9.5 14z"/>
          </svg>
        </button>
      </div>
    </div>
  </div>
</div>

In total, the AI wrote a 278-line component that mostly works, in about two minutes. Honestly, not bad for a single shot. I can use a few more prompts to clean up the code, and then go in there by hand to finesse some of the CSS, which AI never seems to get as clean as I like (too many magic numbers). But it definitely saves me time over setting all this up by hand.

How to get better results from Figma MCP

There are a few things we can do to make the results even better:

- Within your prompt, help the AI understand the purpose of the design and how exactly it fits into your existing code.
- Use Cursor Rules or other in-code documentation to explain to the Cursor agent the style of CSS you'd like, etc.
- Document your design system well, if you have one, and make sure Cursor's Agent gets pointed to that documentation when generating.
- Don't overwhelm the agent. Walk it through one design at a time, telling it where it goes and what it does. The process isn't fully automatic yet.

Basically, it all boils down to more context, given granularly. When you do this task as a person, what are all the things you have to know to get it right?
Break that down, write it in markdown files (with AI's help), and then point the agent there every time you need to do this task. Some markdown files you might attach to all design generations are:

- A design system component list
- A CSS style guide
- A framework (e.g., React) style guide
- Test suite rules
- Explicit instructions to iterate on failed lints, TypeScript checks, and tests

Individual prompts could then just include what the new component should do and how it fits into the app.

Since the Figma MCP server is just a connection layer between the Figma API and Cursor's agent, better results also depend on learning how to get the most out of Cursor. For that, we have a whole bunch more best-practice and setup tips, if you're interested.

More than anything, don't expect perfect results. Design-to-code AI will get you a lot of the way toward where you need to go, sometimes even most of the way, but you're still going to be the developer finessing the details. The goal is just to save a little time. You're not trying to replace yourself.

Current limitations of Figma MCP

Personally, I like this Figma MCP workflow. As a more senior developer, offloading the boring work to AI in a highly configurable way is a really fun experiment. But there are still a lot of limitations:

- MCP is a dev-only playground. Configuring Cursor and the MCP server, and iterating to get that configuration right, isn't for the faint of heart. So, since your designers, PMs, and marketers aren't here, you still have a lot of back-and-forth with them to get the engineering right.
- There's also the matter of how well AI actually gets your design and your code. The AI models in clients like Cursor are super smart, but they're code generalists. They haven't been schooled specifically in turning Figma layouts into perfect code, which can lead to some... creative... interpretations. Responsive design for mobile, as we saw in the experiment above, isn't a first priority.
- It's not a deterministic process.
Even if AI has perfect access to Figma data, it can still go off the rails. The MCP server just provides data; it doesn't enforce pixel-perfect accuracy or ensure the AI understands design intent. Your code style also isn't enforced in any way beyond what you've set up inside Cursor itself. Context is everything, because nothing else forces the AI to match your style other than basic linting or tests you may set up.

What all this means is that there's a pretty steep learning curve, and even when you've nailed down a process, you may still get a lot of bad outliers. With MCP alone, it's tough to feel like you have a sustainable glue layer between Figma and your codebase. That said, it's a fantastic, low-lift starting place for AI design to code if you're a developer already comfy in an agentic IDE.

Builder's approach to design to code

So, what if you're not a developer, or you're looking for a more predictable, sustainable workflow? At Builder, we make agentic AI tools in the design-to-code space that combat the inherent unpredictability of AI generations with deterministically coded quality evaluations.

Figma to code is a solved problem for us already. Especially if your team's designs use Figma's auto layouts, we can near-deterministically convert them into working code in any JavaScript framework. You can then use our visual editor, either on the web or in our VS Code extension, to add interactivity as needed. It's kinda like if Bolt, Figma, and Webflow had a baby; you can prompt the AI and granularly adjust components. Vibe code DOOM or just fix your padding. Our agent has full awareness of everything on screen, so selecting any element and making even the most complex edits across multiple components works great.

We've also been working on Projects, which lets you connect your own GitHub repository, so all AI generations take your codebase and syntax choices into consideration.
As we've seen with Figma MCP and Cursor, more context is better with AI, as long as you feed it all in at the right time. Projects syncs your design system across Figma and code, and you can turn any change into a PR (with minimal diffs) for you and your team to review.

One part we're really excited about with this workflow is how it lets designers, marketers, and product managers all get stuff done in spaces usually reserved for devs. As we've been dogfooding internally, we've seen boards of Jira papercut tickets just kinda... vanish. Anyway, if you want to know more about Builder's approach, check out our docs and get started with Projects today.

So, is the Figma MCP worth your time?

Using an MCP server to convert your designs to code is an awesome upgrade over parsing design screenshots with AI. Its data-rich approach gets you much farther along, much faster than developer effort alone. And with Figma's official Dev Mode MCP server launching out of private alpha soon, there's no better time to get used to the workflow and to test out its strengths and weaknesses. Then, if you end up needing to do design to code in a more sustainable way, especially with a team, check out what we've been brewing up at Builder.

Happy design engineering!
  • How to watch Sinners: See the smash horror hit at home

    Table of Contents

    What is Sinners about?
    Is Sinners worth watching?
    How to watch Sinners at home
    The best HBO Max streaming deals

The best streaming deals to watch 'Sinners' at home:

- Buy 'Sinners' on Prime Video (watch now)
- Rent 'Sinners' on Prime Video (watch now)
- Max annual subscription (/year)
- Max Standard annual subscription (/year)
- Max Basic With Ads: free for Cricket customers on the /month unlimited plan
- Max Basic With Ads: free for DashPass annual plan subscribers
- Max Student (per month for 12 months)
- Disney+, Hulu, and Max bundle
Black Panther director Ryan Coogler is back with another smash hit. The third movie Warner Bros. has released in 2025 that features an A-lister playing dual roles, Sinners is "easily one of the best movies of the year," according to Mashable's head movie critic. Besides Michael B. Jordan times two, it stars Hailee Steinfeld, Jack O'Connell, Wunmi Mosaku, Jayme Lawson, Omar Benson Miller, and Delroy Lindo. With bits of horror, history, and musical theater all sprinkled in, it's a genre-fluid movie in every sense of the term.

If you haven't caught it in theaters yet, there's still time. However, if you'd rather watch it at home, it's now available on digital video-on-demand services as of June 3. Here's everything you need to know about how to watch Sinners at home.


What is Sinners about?

Set in the 1930s Jim Crow-era South, Sinners stars Michael B. Jordan in a dual role as Smoke and Stack, twin brothers who return to their hometown with the goal of setting up a juke joint, only for its grand opening to be disrupted by something supernaturally monstrous.

"There are vampires in the film, but it's really about a lot more than just that. It's one of many elements, and I think we're gonna surprise people with it," director Ryan Coogler explained at a press conference.

Check out the official trailer:

Is Sinners worth watching?

Sinners is a huge success story for original horror. It's only the second movie in 2025 to pass the million domestic box office milestone and is one of the 10 highest-grossing horror movies to date. Not only has it been a smash hit at the box office, now climbing to over million worldwide and million domestically, but the reviews are outstanding. It currently holds a near-perfect 97 percent critic rating on Rotten Tomatoes and a 96 percent audience rating. That's no easy feat.

"Sinners is more than a hell of a thrilling vampire movie. Like Black Panther, it expands beyond the expectations of its genre to become a magnificent film, emanating with spirit, power, and purpose," Mashable's Kristy Puchko writes in her review of the film. "Smoothly blending vampire horror into a unique tale of regret, resilience, and redemption, Coogler and Jordan have made a cinematic marvel that is terrifying, satisfying, and unforgettable."

Read our full review of Sinners.

How to watch Sinners at home

    Credit: Warner Bros.

Sinners smashed into theaters on April 18, 2025, and is still floating around in select theaters nationwide. However, if you would rather watch it at home, there are now a couple of different options: purchasing via digital video-on-demand or renting via digital video-on-demand. It will also eventually be streaming, offering a third option.

Buy or rent Sinners on digital

As of June 3, Sinners is available to purchase or rent on digital video-on-demand platforms like Prime Video. You can purchase the movie for your digital collection or rent it for 30 days. If you choose to rent, just note that you'll have 30 days to watch, but only 48 hours to finish once you begin. You can purchase and rent the film at the following retailers:

- Prime Video (buy or rent)
- Apple TV (buy or rent)
- Fandango at Home (buy or rent)

    Credit: Prime Video

    Rent or buy 'Sinners' at Prime Video

Stream Sinners on Max

As a Warner Bros. Pictures film, we expect that Sinners will make its streaming debut on Max, the Warner Bros.-owned streaming service. While there is no official streaming date yet, we'll be keeping our eyes peeled. Based on the digital-to-streaming trajectory of other recent theatrical hits from Warner Bros. like Companion, Mickey 17, and Beetlejuice Beetlejuice, we expect Sinners to make its streaming debut sometime around late July to mid-August.

Max subscriptions start at per month, but there are a few different ways to save some money on your plan. Check out the best Max streaming deals below.

The best HBO Max streaming deals

Best for most people: 16% off a Max Basic annual subscription


    Credit: Max

    Max Basic with ads yearly subscription

The Max Basic plan with ads typically goes for per month, but if you pay for the entire year up front, that cost drops down to per month. An annual plan is just total, which saves you about 16% compared to the monthly plan.


    Best Max deal with no ads: up to 16% on a Max Standard annual subscription


    Credit: Max

    Max Standard annual subscription

Similarly, you can opt for the annual Max Standard or Premium plans and save about 16% if you'd rather go ad-free. The Standard tier costs either per month or per year, while the Premium tier costs either per month or per year. While both tiers offer ad-free viewing, the Premium tier goes a step further with 4K Ultra HD video quality, Dolby Atmos immersive audio, and the ability to download more offline content.

Get HBO Max for free: Switch to Cricket's /month unlimited plan


    Credit: Cricket / Max

Max: free for Cricket customers on the /month plan

If you switch your phone plan to Cricket's per month unlimited plan, you'll get HBO Max included for no extra cost. When you open up the HBO Max app, you'll just select Cricket as your provider and use your credentials to log in. That's all, folks.

Get HBO Max for free: Sign up for a DashPass annual plan


    Credit: DoorDash / Max

Max: free with a DashPass annual plan

Another way to get HBO Max for free in 2025 is by signing up for a DoorDash DashPass annual plan for per year. A DashPass membership gets you delivery fees and reduced service fees on eligible DoorDash orders all year long. You'll just have to activate your HBO Max with ads subscription through your DoorDash account to get started. If you'd rather watch ad-free, you can upgrade for a discounted rate as well.

Best HBO Max deal for students: 50% off Max Basic with ads


    Credit: Max

    Max Student

    per month for 12 months

College students looking to expand their movie horizons can get an entire year of HBO Max with ads for half price. Just verify your student status with UNiDAYS and retrieve the unique discount code to drop the price from to per month.

Best bundle deal: Get Max, Disney+, and Hulu for up to 38% off


    Credit: Disney / Hulu / Max

    Disney+, Hulu, and Max

For the most bang for your buck, check out the Disney+ bundle deal that includes Disney+, Hulu, and Max for just per month with ads. That lineup of streamers would usually cost you per month, so you'll keep an extra in your pocket monthly. If you'd rather go ad-free, the bundle will run you per month. That's up to 38% in savings for access to all three streaming libraries.
The Standard tier costs either per month or per year, while the Premium tier costs either per month or per year. While both tiers offer ad-free viewing, the Premium tier goes a step further with 4K Ultra HD video quality, Dolby Atmos immersive audio, and the ability to download more offline content.Get HBO Max for free: Switch to Cricket's /month unlimited plan Opens in a new window Credit: Cricket / Max MaxFree for Cricket customers on the /month plan If you switch your phone plan to Cricket's per month unlimited plan, you'll get HBO Max included for no extra cost. When you open up the HBO Max app, you'll just select Cricket as your provider and use your credentials to log in. That's all, folks.Get HBO Max for free: Sign up for DashPass annual plan Opens in a new window Credit: DoorDash / Max MaxFree with DashPass annual planAnother way to get HBO Max for free in 2025 is by signing up for a DoorDash DashPass annual plan for per year. A DashPass membership gets you delivery fees and reduced service fees on eligible DoorDash orders all year long. You'll just have to activate your HBO Max with ads subscription through your DoorDash account to get started. If you'd rather watch ad-free, you can upgrade for a discounted rate as well.Best HBO Max deal for students: 50% on Max Basic with ads Opens in a new window Credit: Max Max Student per month for 12 months College students looking to expand their movie horizons can get an entire year of HBO Max with ads for half price. Just verify your student status with UNiDAYS and retrieve the unique discount code to drop the price from to per month.Best bundle deal: Get Max, Disney+, and Hulu for up to 38% off Opens in a new window Credit: Disney / Hulu / Max Disney+, Hulu, and Max per month, per monthFor the most bang for your buck, check out the Disney+ bundle deal that includes Disney+, Hulu, and Max for just per month with ads. 
That lineup of streamers would usually cost you per month, so you'll keep an extra in your pocket monthly.If you'd rather go ad-free, the bundle will run you per month as opposed to That's up to 38% in savings for access to all three streaming libraries. #how #watch #sinners #see #smash
    MASHABLE.COM
    How to watch Sinners: See the smash horror hit at home
    Table of Contents:
    What is Sinners about?
    Is Sinners worth watching?
    How to watch Sinners at home
    The best HBO Max streaming deals

    The best streaming deals to watch 'Sinners' at home:
    Buy 'Sinners' on Prime Video: $24.99
    Rent 'Sinners' on Prime Video: $19.99
    Max (With Ads) annual subscription: $99.99/year (save $19.89)
    Max Standard annual subscription: $169.99/year (save $33.89)
    Max Basic With Ads: free for Cricket customers on the $60/month unlimited plan (save $9.99/month)
    Max Basic With Ads: free for DashPass annual plan subscribers (save $9.99 per month)
    Max Student: $4.99 per month for 12 months (save 50%)
    Disney+, Hulu, and Max: $16.99 per month (with ads) or $29.99 per month (no ads) (save up to 38%)

    Black Panther director Ryan Coogler is back with another smash hit. The third movie Warner Bros. has released in 2025 to feature an A-lister playing dual roles, Sinners is "easily one of the best movies of the year," according to Mashable's head movie critic.

    Besides Michael B. Jordan times two, it stars Hailee Steinfeld (Hawkeye), Jack O'Connell (Ferrari), Wunmi Mosaku (Passenger), Jayme Lawson (The Woman King), Omar Benson Miller (True Lies), and Delroy Lindo (Da 5 Bloods). With bits of horror, history, and musical theater all sprinkled in, it's a genre-fluid movie in every sense of the term.

    If you haven't caught it in theaters yet, there's still time. However, if you'd rather watch it at home, it's been available on digital-on-demand services since June 3. Here's everything you need to know about how to watch Sinners at home.

    What is Sinners about?

    Set in the 1930s Jim Crow-era South, Sinners stars Michael B. Jordan in a dual role as Smoke and Stack, twin brothers who return to their hometown with the goal of setting up a juke joint, only for its grand opening to be disrupted by something supernaturally monstrous.

    "There are vampires in the film, but it's really about a lot more than just that. It's one of many elements, and I think we're gonna surprise people with it," director Ryan Coogler explained at a press conference.

    Is Sinners worth watching?

    Sinners is a huge success story for original horror. It's only the second movie in 2025 to pass the $250 million domestic box office milestone and is one of the 10 highest-grossing horror movies to date. Not only has it been a smash hit at the box office, now climbing to over $338 million worldwide and $258 million domestically, but the reviews are outstanding. It currently holds a near-perfect 97 percent critic rating on Rotten Tomatoes and a 96 percent audience rating. That's no easy feat.

    "Sinners is more than a hell of a thrilling vampire movie. Like Black Panther, it expands beyond the expectations of its genre to become a magnificent film, emanating with spirit, power, and purpose," Mashable's Kristy Puchko writes in her review of the film. "Smoothly blending vampire horror into a unique tale of regret, resilience, and redemption, Coogler and Jordan have made a cinematic marvel that is terrifying, satisfying, and unforgettable." Read our full review of Sinners.

    How to watch Sinners at home

    Sinners smashed into theaters on April 18, 2025, and is still floating around in select theaters nationwide. However, if you would rather watch it at home, there are now a couple of options: purchasing or renting via digital video-on-demand. It will also eventually be streaming, offering a third option.

    Buy or rent Sinners on digital

    As of June 3, Sinners is available to purchase or rent on digital video-on-demand platforms like Prime Video. You can purchase the movie for your digital collection or rent it for 30 days. If you choose to rent, just note that you'll have 30 days to start watching, but only 48 hours to finish once you begin.

    You can purchase or rent the film at the following retailers:
    Prime Video: buy for $24.99, rent for $19.99
    Apple TV: buy for $24.99, rent for $19.99
    Fandango at Home (Vudu): buy for $24.99, rent for $19.99

    Stream Sinners on Max

    As a Warner Bros. Pictures film, we expect that Sinners will make its streaming debut on Max (soon to be called HBO Max once again), the Warner Bros.-owned streaming service. While there is no official streaming date yet, we'll be keeping our eyes peeled. Based on the digital-to-streaming trajectory of other recent Warner Bros. theatrical hits like Companion, Mickey 17, and Beetlejuice Beetlejuice, we expect Sinners to make its streaming debut sometime around late July to mid-August.

    Max subscriptions start at $9.99 per month, but there are a few different ways to save some money on your plan. Check out the best Max streaming deals below.

    The best HBO Max streaming deals

    Best for most people: Save 16% on a Max Basic annual subscription ($99.99 per year, save $19.89)

    The Max Basic plan with ads typically goes for $9.99 per month, but if you pay for the entire year up front, that cost drops to about $8.33 per month. An annual plan is just $99.99 total, which saves you about 16% compared to the monthly plan.

    Best Max deal with no ads: Save up to 16% on a Max Standard annual subscription ($169.99 per year, save $33.89)

    Similarly, you can opt for the annual Max Standard or Premium plans and save about 16% if you'd rather go ad-free. The Standard tier costs either $16.99 per month or $169.99 per year (about $14.16 per month), while the Premium tier costs either $20.99 per month or $209.99 per year (about $17.50 per month). While both tiers offer ad-free viewing, the Premium tier goes a step further with 4K Ultra HD video quality, Dolby Atmos immersive audio, and the ability to download more offline content.

    Get HBO Max for free: Switch to Cricket's $60/month unlimited plan

    If you switch your phone plan to Cricket's $60 per month unlimited plan, you'll get HBO Max (with ads) included at no extra cost. When you open up the HBO Max app, you'll just select Cricket as your provider and use your credentials to log in. That's all, folks.

    Get HBO Max for free: Sign up for a DashPass annual plan

    Another way to get HBO Max for free in 2025 is by signing up for a DoorDash DashPass annual plan for $96 per year ($8 per month). A DashPass membership gets you $0 delivery fees and reduced service fees on eligible DoorDash orders all year long. You'll just have to activate your HBO Max with ads subscription through your DoorDash account to get started. If you'd rather watch ad-free, you can upgrade at a discounted rate as well.

    Best HBO Max deal for students: Save 50% on Max Basic with ads ($4.99 per month for 12 months)

    College students looking to expand their movie horizons can get an entire year of HBO Max with ads for half price. Just verify your student status with UNiDAYS and retrieve the unique discount code to drop the price from $9.99 to $4.99 per month.

    Best bundle deal: Get Max, Disney+, and Hulu for up to 38% off

    For the most bang for your buck, check out the Disney+ bundle deal that includes Disney+, Hulu, and Max for just $16.99 per month with ads. That lineup of streamers would usually cost you $25.97 per month, so you'll keep an extra $9 in your pocket monthly. If you'd rather go ad-free, the bundle will run you $29.99 per month as opposed to $48.97. That's up to 38% in savings for access to all three streaming libraries.
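    For the curious, the quoted discounts hold up under a quick back-of-the-envelope check. A minimal sketch (in Python, using only the prices listed in this article; the `pct_saved` helper is illustrative, not from any deals site):

    ```python
    def pct_saved(regular: float, deal: float) -> float:
        """Percent saved by paying `deal` instead of `regular`, rounded to one decimal."""
        return round((regular - deal) / regular * 100, 1)

    # Max Basic: $9.99/month billed monthly vs. $99.99 billed annually
    print(pct_saved(9.99 * 12, 99.99))    # 16.6 -> "about 16%"

    # Max Standard: $16.99/month vs. $169.99/year
    print(pct_saved(16.99 * 12, 169.99))  # 16.6 -> "about 16%"

    # Ad-free bundle: $48.97/month for the three services separately vs. $29.99 bundled
    print(pct_saved(48.97, 29.99))        # 38.8 -> "up to 38%"
    ```

    The slight gaps between the computed figures and the advertised ones come from rounding the marketing copy down to whole percentages.
    
    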