• Marvel Snap Back Online After TikTok Ban, Devs Searching for New Publisher to Stop It Happening Again
    www.ign.com
    Marvel Snap is now back online in the U.S. after it was banned alongside TikTok, and its developer Second Dinner is looking for a new publisher to ensure another takedown doesn't happen in the future.
    Today, January 21, marks "the start of a new era for Marvel Snap," according to Second Dinner, which posted on X/Twitter to announce the digital trading card game's return alongside the search for a new publisher.
    TikTok was banned in the U.S. on January 19 over "national security concerns" but was brought back online after around 14 hours. TikTok owner ByteDance also owns Marvel Snap publisher Nuverse, so the game was taken down too, but took a little longer to come back online.
    "Marvel Snap is back online in the U.S.," Second Dinner said. "But to make sure this never happens again, we're working to bring more services in-house and partner with a new publisher. This is the start of a new era for Marvel Snap.
    "We know this probably leaves you with even more questions than answers. We appreciate your patience, but in the meantime enjoy playing Marvel Snap. We'll continue to update with more information as soon as possible."
    Marvel Snap arrived in 2022 as a fast-paced TCG featuring the likes of Spider-Man, Iron Man, Thor, and many other Marvel superheroes and villains.
    In our 8/10 review, IGN said: "Marvel Snap packs bold ideas, deep gameplay, a punchy presentation and lots of love for Marvel. Its approach to building a collection and randomness in gameplay won't be for everyone, but it's still well worth playing."
    Ryan Dinsdale is an IGN freelance reporter. He'll talk about The Witcher all day.
  • The Times Cameron Diaz Was the Stealth MVP Who Got Overlooked
    www.denofgeek.com
    When Cameron Diaz made her film debut in The Mask, audiences had a response not unlike that of Jim Carrey's green-skinned protagonist. Okay, maybe we weren't quite so aroused, but our eyes did practically pop out of our sockets. And with good reason. After all, Diaz is a tall, striking blonde who looked every bit like the model turned actress she was.
    Given her high-profile beginning, it might sound weird to describe any Cameron Diaz performance as overlooked. Not only does Diaz fit well within traditional leading lady roles, but she has a knack for comic timing and a lack of ego that led to big performances in There's Something About Mary, The Sweetest Thing, and Annie. Yet there have been several movies in which Diaz puts in admirable, exciting work, only to get overshadowed by her co-stars. No one comes away from My Best Friend's Wedding, Being John Malkovich, or even Charlie's Angels talking about how Cameron Diaz stole the show. But if they look closer at what she's doing, they'll find a gifted comic actress putting in a nuanced performance, often more complicated than those who got all the praise.
    My Best Friend's Wedding
    For the 1997 romantic comedy My Best Friend's Wedding, Diaz takes the most unenviable of roles. She plays the other woman, the romantic rival to the movie's lead, played here by Julia Roberts at the height of her popularity. Surely, Diaz's character would evaporate in the light of Roberts' smile and the audience would hate her.
    Yet writer Ronald Bass and director P.J. Hogan do just the opposite in My Best Friend's Wedding, making Roberts' food critic Jules a self-satisfied jerk who wants to ruin the wedding of her best friend Michael (Dermot Mulroney) and his fiancée Kimmy (Diaz). The inversion only works because of the complexity that Diaz brings to her nice-girl character. In one of the movie's standout scenes, Jules weaponizes her knowledge that Kimmy hates to sing and suffers from stage fright. Jules takes Michael and Kimmy to a karaoke bar, and while the former can't control his excitement over getting the chance to sing with his best friend again, the camera stays on the latter after Jules and Michael rush off. We see Kimmy frozen in fear, a fear that continues even as she tries to get over her anxiety and support Michael in his fun.
    The usually indomitable Diaz has never looked smaller than when a microphone gets shoved in front of her and she's forced to sing the Bacharach and David number "I Just Don't Know What to Do with Myself." She shrinks even more after reaction shots of Jules. Diaz, like us, spots the slight grin on Roberts' face that her plan is working. She also sees Michael slightly disgusted that his bride-to-be cannot share in an activity he loves.
    And yet, Kimmy soldiers on, singing over shouts of "you suck," through fumbled lines and flat notes. The commitment eventually wins over the crowd, and they begin cheering her on, even through her off-key moments. And the next time the camera cuts to Jules, it pans over to Michael, looking now at Kimmy not with disgust but with awe, fully impressed by the woman's bravery and commitment.
    As funny and wonderful as the moment is, it works not just because we see the bitter Jules hoisted on her own petard. It works because Diaz doesn't overplay Kimmy's hand. Even when she's won back Michael's affection and the crowd's support, Diaz doesn't let Kimmy become her: a striking, beautiful, confident woman.
She stays in character, and lets Kimmy's face go flush, lets her sometimes hunch in embarrassment, lets a little bit of the anxiety remain in the corners of her smile.
It's an incredibly complex supporting performance, one that never overshadows the excellent work that Roberts is doing with her complex Jules. In fact, it enhances Jules as a character by making Kimmy a real and vulnerable person, not just an unbeatable sweetheart.
Being John Malkovich
Cameron Diaz is a consummate romantic comedy star, and she is always great in them. Unfortunately, some of her most ineffective performances have come when she attempts different genres, none worse than her disastrous turn in Martin Scorsese's Gangs of New York.
For that reason, people often disregard her appearance in the 1999 Spike Jonze art film Being John Malkovich. John Cusack gets praise for committing to the part of Craig Schwartz, a bitter puppeteer who finds a portal into the mind of actor John Malkovich. Catherine Keener gets praise for her steely take on the object of Craig's affection, Maxine Lund, who uses Craig to get access to the portal. And of course Malkovich gets praise for playing himself as an air-headed, arrogant actor.
None of those accolades make their way to Diaz. Diaz plays Craig's wife Lotte, a pet-lover disdained by her husband. With her mussed hair and shapeless grey sweatshirts, Diaz too often gets dismissed as one more attractive actor trying for superficial gravitas by playing ugly, unconvincingly. But Lotte is much more than the frumpy woman at home. When Lotte enters the portal into Malkovich, it's not the allure of power or fame that impresses her. It's the feeling of being in the right body, the feeling of being identified as a man.
Lotte emerges from the exit from Malkovich's mind (near the New Jersey Turnpike) gleeful and childlike, jumping up and down in euphoria. As she rides away in the car with Craig, Diaz keeps the expression of confusion and joy on her face. Lotte has finally found her place.
The contrast between Diaz's performances as Lotte before and after the Malkovich revelation helps make sense of her part in the movie's first act. What so many dismissed as nothing more than a pretty actor trying to look not-pretty was, in fact, Diaz playing a trans man still identifying as a woman. The flatness and falseness in her behaviors were not mistakes but correct choices for the character. They relayed the truth of Lotte as a person who could not be who she actually was.
Charlie's Angels
Unlike Being John Malkovich, 2000's Charlie's Angels seems like the ideal Diaz vehicle. A big flashy update of a cheesy show from the 1970s, directed with TV commercial slickness by McG, Charlie's Angels was all about pretty ladies doing action sequences in tight, low-rise pants.
To be sure, that's exactly what Diaz delivers as team leader Natalie Cook. Sure, Natalie might have a PhD from MIT, and she might test fighter jets for the U.S. Navy, but that doesn't prevent her from driving a speed boat in a gold bikini or disguising herself as a bearded man next to her partner Alex's (Lucy Liu) vampish corporate trainer.
Charlie's Angels has no qualms about being all about eye candy and over-the-top stunts, but it's generally the side characters who get the attention. While new revelations of Bill Murray's bad behavior get released to this day, his deadpan approach to handler Bosley remains a fan favorite.
Clips of Sam Rockwell and Crispin Glover in supporting turns make the rounds on social media whenever someone finds out that such idiosyncratic actors showed up in such a nakedly commercial film. Even among the leads, Drew Barrymore and Liu get singled out while Diaz gets taken for granted.
But of the three Angels, Diaz best embodies the movie's aesthetic. She leans into the sexiness, appearing in almost every scene in leather pants and tops designed for maximum cleavage. And she embraces the cheese, tripping and falling, letting herself be laughed at as a nerd onscreen to show that she's in on the joke. She even excels in the conspicuously fake action sequences, allowing just enough of a smirk when she performs Matrix-style kung fu in a climactic battle with a villain played by Kelly Lynch.
Watching Charlie's Angels feels like watching a car accident waiting to happen. At any second, the movie could veer too far into the male gaze and come across as leering and gross. The action sequences fall just on the right side of cheap, as if the poor digital effects are a choice instead of the result of incompetence and age. The rambling plot could become confusing and annoying instead of a solid scaffolding to hang the movie's showcase scenes upon.
The thing holding it all together is Diaz, who understands the movie's tone and commits to it. If she ever looked a bit too uncomfortable, if she ever gave less than 100 percent in a fight sequence, then the entire thing would fall apart. Like Cyclops, Leonardo, and other great pop culture leaders, Natalie Cook gets dismissed as boring when she's in fact reliable, all thanks to Diaz.
  • TikTok ban suspended for 75 days, but Trump order may not be legal
    9to5mac.com
    The TikTok ban which came into effect on Sunday has been suspended for 75 days by an executive order signed by President Trump on his inauguration day. He has also said that US companies who provide services to TikTok during this time will not be prosecuted.
    However, legal scholars note that Trump's order does not appear to comply with the law, and say that companies who make TikTok available remain liable for hundreds of billions of dollars of fines, so Apple is unlikely to return the app to the App Store.
    A four-day roller-coaster ride
    On Friday, the US Supreme Court upheld the law imposing a ban on TikTok in the US. That meant the ban would take effect on January 19.
    The Biden administration issued a statement saying that it didn't intend to try to enforce the law in its last 24 hours in the White House, and it would therefore be a matter for Trump to decide. Trump initially said he needed time to consider the matter, and TikTok went offline in the US on Sunday. Apple issued a statement saying it was obliged to follow the law by removing the app from the App Store.
    Later on Sunday, Trump made a social media post saying that he would suspend the ban as soon as he took office the next day, and that there would be no liability for ignoring the law in the meantime. Access to the app was restored by ByteDance, as US host Oracle apparently decided it could trust Trump's promise on liability.
    However, Apple and other app store providers were unwilling to return the app to their app stores on the basis of a social media post.
    TikTok ban suspended for 75 days
    Trump has signed an executive order suspending the ban for 75 days, and stating that US companies who support TikTok's return will be shielded from liability.
    As NPR notes, the law allows for the TikTok ban to be suspended for up to 90 days, provided that there is evidence of a deal in progress. The law does allow one exception: TikTok can continue to operate if Trump certifies to Congress that significant progress has been made toward TikTok breaking away from ByteDance's ownership. The law requires that Trump show Congress there are legally binding agreements in motion over ownership changes at TikTok.
    Executive order likely invalid
    Trump has not provided any evidence that the requirements of the law have been met, so lawyers continue to express doubt that the promised liability shield would have any effect.
    Given this, it seems unlikely that Apple, Google, Amazon, and Microsoft will risk restoring TikTok to their app stores until the legal requirements are met.
    China slightly softens its stance
    The Chinese government has so far rejected the idea of any sale to a US company, but a senior official yesterday suggested that it might be softening its stance.
    Trump had suggested that perhaps a 50/50 joint venture between China and the US might be a solution. Reuters asked for a response to this, and was given a non-committal answer.
    "We hope the US will earnestly listen to the voice of reason and provide an open, fair, just and non-discriminatory business environment for market entities from all countries. When it comes to actions such as the operation and acquisition of businesses, we believe they should be independently decided by companies in accordance with market principles.
If it involves Chinese companies, China's laws and regulations should be observed."
While that's not exactly a ringing endorsement of the idea, it is a change from the previous outright rejection.
  • Apple @ Work Podcast: The Mac @ Work report
    9to5mac.com
    Apple @ Work Podcast: The Mac @ Work report | Bradley C | Jan 21 2025 - 3:00 am PT
    Apple @ Work is exclusively brought to you by Mosyle, the only Apple Unified Platform. Mosyle is the only solution that integrates in a single professional-grade platform all the solutions necessary to seamlessly and automatically deploy, manage & protect Apple devices at work. Over 45,000 organizations trust Mosyle to make millions of Apple devices work-ready with no effort and at an affordable cost. Request your EXTENDED TRIAL today and understand why Mosyle is everything you need to work with Apple.
    In this episode of Apple @ Work, I talk with Tim Lydon from Wipro about their recent Mac @ Work report.
    Connect with Bradley
    Listen and subscribe
    Listen to Past Episodes
  • Ex-CIA Analyst Pleads Guilty to Sharing Top-Secret Data with Unauthorized Parties
    thehackernews.com
    Jan 21, 2025 | Ravie Lakshmanan | Cyber Espionage / Surveillance
    A former analyst working for the U.S. Central Intelligence Agency (CIA) has pleaded guilty to transmitting top secret National Defense Information (NDI) to individuals who did not have the necessary authorization to receive it and attempting to cover up the activity.
    Asif William Rahman, 34, of Vienna, had been an employee of the CIA since 2016 and held a Top Secret security clearance with access to Sensitive Compartmented Information (SCI). He was charged with two counts of unlawfully transmitting NDI in November 2024 following his arrest.
    He has pleaded guilty to two counts of willful retention and transmission of classified information related to the national defense. He is expected to be sentenced on May 15, 2025, potentially facing a maximum penalty of 10 years in prison.
    According to court filings, Rahman is alleged to have retained without authorization two documents classified as Top Secret on or about October 17, 2024, and delivered them to multiple individuals who were not entitled to receive them.
    "In the spring of 2024, the defendant accessed and printed from his workstation approximately five documents. These documents were classified at the Secret and Top Secret level," court documents filed on January 17, 2025, reveal. "The defendant transported those materials outside of his place of employment and to his residence by concealing those materials inside a backpack."
    "From his residence in the Eastern District of Virginia, the defendant reproduced the documents and, while doing so, altered them in an effort to conceal their source and his activity. The defendant then communicated Top Secret information he learned in the course of his employment to multiple individuals he knew were not entitled to receive it. He also transmitted the reproductions of the Secret and Top Secret documents to multiple individuals he knew were not entitled to receive them."
    Rahman is also said to have shared an additional 10 documents classified at the Top Secret level in a similar manner in the fall of 2024. Then on October 17, he printed two more Top Secret documents pertaining to a United States ally and its planned kinetic actions against a foreign adversary.
    The defendant then proceeded to photograph these documents at his residence and used a computer program to edit the images. The documents were then shared with unspecified people who were not supposed to receive them. These individuals are believed to have shared the information with others, eventually causing the documents to appear on several social media platforms on October 18.
    While the names of the countries were not disclosed, multiple reports from Axios and CNN revealed around that time that the release was linked to Israel's plans to attack Iran. The documents, prepared by the National Geospatial-Intelligence Agency and the National Security Agency, were posted on Telegram by an account called Middle East Spectator.
    Rahman has also been accused of deleting the files and altering journal entries and written work products on his personal electronic devices in an effort to conceal his personal opinions on U.S. policy.
He further drafted entries to paint a false, seemingly benign narrative regarding his deletion of records on his personal device and the CIA workstation.
"Government employees who are granted security clearances and given access to our nation's classified information must promise to protect it," said Executive Assistant Director Robert Wells of the Federal Bureau of Investigation's National Security Branch. "Rahman blatantly violated that pledge and took multiple steps to hide his actions."
Philippines Arrests Chinese National and 2 Filipinos for Espionage
The development comes as the Philippines' National Bureau of Investigation (NBI) disclosed the arrest of a Chinese national and two Filipino citizens suspected of conducting surveillance on critical infrastructure facilities for over a month.
The three individuals, Deng Yuanqing, Ronel Jojo Balundo Besa, and Jayson Amado Fernandez, are part of a six-member group that engages in surveillance operations by unlawfully obtaining sensitive information related to national defense. The remaining three members, two hardware engineers and a financier (aka Wang), are currently in China, the agency added.
Deng, per the NBI, is a software engineer specializing in automation and control engineering, and is allegedly affiliated with the PLA University of Science and Technology, a Nanjing-based academic institution under the control of China's People's Liberation Army (PLA).
The investigation also uncovered that a white vehicle was procured and fitted with information and communications technology (ICT) equipment so as to facilitate the Intelligence, Surveillance, and Reconnaissance (ISR) operation.
"From December 13, 2024 to January 16, 2025, subject vehicle was monitored traversing to and fro the National Capital Region and the general divisions of Luzon, conducting detailed scouting, collating comprehensive image of the terrains and structures and the over-all topography of the potential targets, without consent and authority from the Philippine Government," the NBI said.
The agency also noted that an onsite search led to the discovery of a Chinese-character user account with device ID 918 452 619 controlling the computer system inside the subject vehicle, such as the portable keyboard, files, and cameras.
The Philippines has been a target of several Chinese threat actors in recent years, primarily driven by geopolitical tensions in Southeast Asia over ongoing territorial disputes in the South China Sea.
  • How to Persuade an AI-Reluctant Board to Embrace Critical Change
    www.informationweek.com
    Kip Havel, Chief Marketing Officer, Dexian | January 21, 2025 | 4 Min Read
    As an IT leader, you're no stranger to helping executives decipher and understand groundbreaking technology. The process usually takes persistence, careful abstraction, and a stockpile of success stories to make a persuasive business case. With luck, you eventually persuade the board of the value of your next significant IT initiative.
    But selling the board on AI implementation is another challenge altogether.
    It's not surprising that many boards are undecided about AI. A recent Deloitte study on AI governance found that board members rarely get involved with AI:
    14% discuss AI at every meeting
    25% discuss AI twice a year
    16% discuss AI once a year
    45% never discuss AI at all
    Only 2% of respondents considered board members highly knowledgeable or experienced in AI. These circumstances present a serious hurdle as IT teams not only try to implement AI solutions but also strive to build the appropriate guardrails into the AI strategy.
    Helping the board understand the power of black sky thinking can help to counteract some of their reservations about pursuing AI. Here's what you need to know:
    Black Sky Thinking Offers a New Approach to Innovation
    Artificial intelligence is taking enterprises to a place where no one has gone before. Even though the market is starting to define AI norms, establish regulations, determine the technology's shortcomings, and pinpoint when we need a human in the loop, we're collectively flying through unfamiliar skies. As a result, IT leaders need to persuade the board of directors to embrace a more transformative way of solving problems. Enter black sky thinking.
    The black sky thinking concept emerged during the 1960s space race and was later popularized by Rachel Armstrong, author and futurist, at FutureFest in London in 2014 as she described the mentality necessary for humans to thrive on the cusp of unparalleled disruption.
    In a follow-up essay, she explains the difference between blue sky thinking (where we're at now) and black sky thinking this way: blue sky thinking is a way of innovating by pushing at the limits of possibility in existing practices, while black sky thinking is more aspirational, producing new kinds of future that enable us to move into uncharted realms with creative confidence.
    Rather than being constrained by current paradigms, organizations' boards and leaders need to envision the future they want and reverse engineer the steps necessary to reach the desired destination. It's like planning for oceanic voyages or trips to the moon, but at a societal level.
    You might be saying, "That's great, but how does it apply to convincing the board to embrace AI use cases?" Before you can unlock the power of AI, you need board members to shift from blue sky to black sky thinking and embrace aspirational, limitless potential.
    Leadership Is on Board with Black Sky Thinking: Now What?
    Even when they're on board with black sky thinking, most board members are going to focus on mitigating risk and maximizing profits for shareholders and the corporation. That's a fine strategy if you're trying to maintain stasis, but not if you're attempting to break barriers and drive innovation. Your next goal is to convince the board that AI is an acceptable investment if they're going to achieve their black sky-driven goals.
    Fortunately, you can increase the success of your petition by getting two key board members on your side: the CEO and general counsel.
    The CEO is often an easier sell.
KPMG surveys indicate 64% of CEOs treat AI as a top investment priority. Since your goals align, the CEO can be a co-champion, providing profiles on each board member and answering these key questions:
Which specific industry AI use cases will be the most persuasive?
Will AI examples from Fortune 500s carry the most weight?
Which biases will you need to combat in your argument?
When it comes to in-house counsel, you need to demonstrate a strong command of the legal and ethical implications of what you're proposing. General counsel and CFOs, being naturally risk-averse, require you to come prepared with your:
Recognition of potential risks
Awareness of pending legal cases
Commitment to ethical implementation
With your CEO and general counsel as AI champions, your next step is to demonstrate ROI if the board is going to approve investment in AI. Showcasing results from programs that have already yielded measurable success can reduce barriers to an AI-forward mentality. For example, in healthcare, Kaiser Permanente has demonstrated how AI can save clinicians an hour of documentation daily -- a powerful use case to highlight.
Ultimately, you'll need to show them that the risk of doing nothing at all can be just as catastrophic as taking a big gamble on emerging technology. Tailored pitches to board members, both individually and collectively, can embolden them to step out of their comfort zones. This approach encourages the embrace of unconventional -- or even unknown -- solutions to complex challenges. When everyone embraces black sky thinking, no horizon is completely out of reach.
About the Author
Kip Havel, Chief Marketing Officer, Dexian
Kip Havel is the chief marketing officer of Dexian, forging strategies that bridge the gap between the brand and its diverse audiences. Passionate about collaboration and black sky thinking, his vision and execution have strengthened company partnerships and grown Dexian's footprint in the market. He led the creation of the Dexian brand and has earned honors such as the American Marketing Association's 4 Under 40 and PR Week's Rising Star. A University of Miami alumnus, Kip has held senior marketing roles at Aflac, Randstad US, Cross Country Healthcare, and SFN Group.
  • Green light for Sheppard Robsons Bristol office refurb
    www.bdonline.co.uk
    Early 2000s building to be wrapped in red cladding in a nod to Bristol's Byzantine Revival architecture.
    Sheppard Robson's designs for the refurbishment include three new storeys and a red cladding inspired by Bristol's Byzantine Revival buildings.
    Sheppard Robson has been given the green light to refurbish and extend an early 2000s office building in the centre of Bristol.
    Designed for a joint venture between Ardstone Capital and CBRE Investment Management, the scheme will retain nearly 90% of the existing building while adding three storeys to its roof.
    The existing structure was completed in 2002 and is located on a prominent site on Temple Way, close to Bristol Temple Meads station.
    Sheppard Robson's plans will see its facade stripped back and replaced with a cladding of vertical fins in a dark red colour, intended as a nod to the Bristol Byzantine architectural style popular in the city in the late 19th century.
    Space between two wings of the building will also be infilled to create larger and more flexible floor plates, which will be orientated around a new central core.
    Extensions to the north will form a two-storey colonnade at ground floor, framing a new double-height reception which has been reoriented towards Temple Quarter.
    The three new storeys at roof level will replace the building's current top floor plate and a plant enclosure, stepping back as the building rises to create a series of planted terraces wrapping around the top of the building.
    Mark Kowal, partner at Sheppard Robson, said the project was an example of the retention of late 20th century buildings which would until recently have faced the pressure of demolition.
    "Our design reimagines this outmoded building into a workplace that is aligned with the requirements of modern tenants and their sustainability aspirations," he said.
    "The transformative nature of the project is balanced with resourcefulness. We have retained as much as we possibly can whilst using bold architectural ideas to signal the arrival of a major new development and public spaces for Bristol."
    Energy will be supplied through a district heating network, with photovoltaics contributing to around 30-40% of the building's electricity use depending on how it's operated and occupied.
  • Best Internet Providers in Georgia
    www.cnet.com
    Georgia may not have a wide range of internet providers to choose from, but we've found the best ones in the Peach State.
  • EA Origin app shuts down in April, will not be missed
    www.eurogamer.net
    Lack of Microsoft support is the final nail in the coffin.
    News by Tom Phillips, Editor-in-Chief | Published on Jan. 21, 2025
    Origin, EA's universally disliked PC storefront and launcher that launched back in 2011, will finally shut down this year, on 17th April.
    EA has, of course, long since replaced Origin with the EA App, which isn't much better. From mid-April, any PC users still using Origin will need to switch in order to continue accessing their library of games and gameplay data.
    In a statement on EA's support website, the publisher states that it is finally killing off Origin as "Microsoft has stopped supporting 32-bit software". "If you use Origin, you need to upgrade to the EA app, which requires a 64-bit version of Windows," EA notes.
    If you have a relatively new PC with a 64-bit version of Windows, downloading the EA App now and ensuring your game saves are ported over is probably a good idea. Cloud saves should carry across automatically, though games without cloud support will need manual save migration. Don't have a 64-bit PC? Then it's bad news. "To run a 64-bit version of Windows, make sure your PC has a 64-bit-capable processor," EA helpfully states. "If your PC doesn't have one, you'll need to use a newer computer to play your games on the EA app."
    EA typically also now launches its PC games on Steam, of course, including last year's Dragon Age: The Veilguard and EA Sports FC 25.
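    If you're not sure whether your machine meets that 64-bit requirement, a quick, rough check is sketched below. This is purely an illustrative Python snippet, not an EA tool, and it simply reads the architecture string the operating system reports.

```python
import platform

# Quick, approximate check of whether a machine meets the EA App's 64-bit
# requirement. Illustrative only -- not an official EA tool -- and it simply
# reports what Python sees for the current OS/processor.
machine = platform.machine().upper()        # e.g. 'AMD64', 'ARM64', 'X86'
cpu_is_64bit = machine in ("AMD64", "X86_64", "ARM64", "AARCH64")

print(f"Reported machine architecture: {machine}")
print("64-bit capable:", "yes" if cpu_is_64bit else "probably not (EA App unsupported)")
```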
  • The Nvidia AI interview: Inside DLSS 4 and machine learning with Bryan Catanzaro
    www.eurogamer.net
    DF talks with Nvidia's VP of applied deep learning research.
    Interview by Alex Battaglia, Video Producer, Digital Foundry. Additional contributions by Will Judd. Published on Jan. 21, 2025
    At CES 2025, Nvidia announced its RTX 50-series graphics cards with DLSS 4. While at the show, we spoke with Nvidia VP of applied deep learning research Bryan Catanzaro about the finer details of how the new DLSS works, from its revised transformer model for super resolution and ray reconstruction to the new multi frame generation (MFG) feature. Despite coming just over a year since our last interview with Bryan, which coincided with the release of DLSS 3.5 and Cyberpunk 2077 Phantom Liberty, there are some fairly major advancements here, some of which will be reserved for RTX 50-series owners and others that will be available for a wider range of Nvidia graphics cards.
    The interview follows below, with light edits for length and clarity as usual. The full interview is available via the video embed below if you prefer. Enjoy!
    Here's the full video interview with Bryan and Alex from the CES 2025 show floor. Watch on YouTube.
    00:00 Introduction
    00:48 Why switch from CNNs to transformers?
    02:08 What are some image characteristics that are improved with DLSS 4 Super Resolution?
    03:17 Is there headroom to continue to improve on Super Resolution?
    04:12 How much more expensive is DLSS 4 Super Resolution to run?
    05:25 How does the transformer model improve Ray Reconstruction?
    09:43 Why is frame gen no longer using hardware optical flow?
    13:06 Could the new Frame Generation run on RTX 3000?
    13:44 What has changed for frame pacing with DLSS 4 Frame Generation?
    15:37 Will Frame Generation ever support standard v-sync?
    17:18 Could you explain how Reflex 2 works?
    21:11 What is the lowest acceptable input frame-rate for DLSS 4 Frame Generation?
    22:13 What does the future of real-time graphics look like?
    The last time we talked was when ray reconstruction first came out, and now, with RTX 5000, there's a new DLSS model - the first time since 2020 that we're seeing such a big change in how things are done. So why switch over to this new transformer model? To start, how does it improve super resolution specifically?
    Bryan Catanzaro: We've been evolving the super resolution model now for about five or six years, and it gets increasingly challenging to make the model smarter; trying to cram more and more intelligence into the same space. You have to innovate; you have to try something new. The transformer architecture has been such a wonderful thing for language modeling, for image generation; all of the advances that we see today like ChatGPT or Stable Diffusion - these are all built on transformer models. Transformer models have this great property in that they're very scalable. You can train them on large amounts of data, and because they're able to direct attention around an image, it allows the model to make smarter choices about what's happening and what to generate. We can train it on much more data, get a smarter model and then breakthrough results. We're really excited about the kinds of image quality that we're able to achieve with our new ray reconstruction and super resolution models in DLSS 4.
    What are some key image characteristics that are improved with the new transformer model in the super resolution mode?
Bryan Catanzaro: You know what the issues are with super resolution - it's things like stability, ghosting and detail. We're always trying to push on all of those dimensions, and they usually trade off. It's easier to get more detail if you accumulate more, but then that leads to ghosting. Or the opposite of ghosting, when you have stability problems because the model makes different choices each frame and then you have something like geometry in the distance that's shimmering and flickering, which is also really bad. Those are the standard problems with any sort of image reconstruction. I think that the tradeoffs we're making with our new super resolution and ray reconstruction models are just way better than what we've had in the past.
Here's our DF Direct discussing the Nvidia news, featuring Alex and Oliver. Watch on YouTube.
Is there better potential with this kind of model also? With the old models, it seems like we're hitting a wall in terms of the quality that can be achieved. Is there a better trajectory with a transformer model?
Bryan Catanzaro: Yeah, absolutely. It's always been true in machine learning that a bigger model trained on more data is going to get better results if the data is high quality. And of course, with DLSS or any sort of real-time graphics algorithm, we have a strict compute budget in terms of milliseconds per frame. One of the reasons we were brave enough to try building a transformer-based image reconstruction algorithm for super resolution and ray reconstruction is because we knew that Blackwell [RTX 50-series] was going to have amazing Tensor cores. It was designed as a neural rendering GPU; the amount of compute horsepower that's going into the Tensor cores is going up exponentially. And so we have the opportunity to try something a little bit more ambitious, and that's what we've done.
The specific performance cost of super resolution at 4K on an RTX 4090 was sub-0.5ms, if I recall correctly. Can you give me a ballpark difference in terms of milliseconds per frame for what the new transformer model costs?
Bryan Catanzaro: The new super resolution model has four times more compute in it than the old one, but it doesn't take four times as long to execute, especially on Blackwell, because we have designed the algorithm along with the Tensor core to make sure that we're running at really high efficiencies. I can't quote the exact number of milliseconds on a 50-series card, but I can say that it's got four times more compute. And on Blackwell, we think it's the best way to play.
The last time we talked, it was really obvious to see that ray reconstruction was the direction that the industry should go in, because you can't just hand-tune a denoiser for every single environmental setting. It made sense, but we noticed problem points in the beginning, both specific to certain titles and more universal ones. How is the transformer model improving these specific areas?
Bryan Catanzaro: Some of it's just polish - we've had another year to iterate on it, and we're always increasing the quality of our data sets. We're analysing failure cases, adding them to our training sets and our evaluation methodology. But also, the new model being much bigger and having much more compute in it just gives it more capacity to learn. A lot of times when we have a failure in one of these DLSS models, it looks like shimmering, ghosting or blurring in-game. We consider those model failures; the model is just making a poor choice.
It needs to, for example, decide not to accumulate if that's going to lead to ghosting. It needs to, for example, not have a bias to make crenelated stair-step patterns on edges, because that's the whole point of anti-aliasing. Due to a lot of technical reasons, we've been fighting that in DLSS for years, and I think these models are just smarter, so they fail less.
Here's the DLSS 4 first look video Alex and Bryan refer to during the interview.
Yeah, that was one of my key takeaways about DLSS 4. Sometimes with AI there's a slight stylisation of the output, and I didn't see that at all [in the DLSS 4 b-roll Rich recorded], so I was very happy to see that.
Bryan Catanzaro: I noticed [in the Digital Foundry video] that Rich was looking at animated textures, which have always really bothered me too. And it's a really tricky thing for DLSS super resolution or ray reconstruction to deal with, because the motion vectors from the game that are describing how things are moving around don't go along with the texture. The TV is just sitting there, and yet you don't want the screen on the TV to just blur as stuff moves around. That requires the model to ignore the motion vectors that are coming from the game, basically analyse the scene and recognise "oh, this area is actually a TV with an animated texture on it - I'm going to make sure not to blur that." It was really hard to teach the prior CNN models about that. We did our best, and we did make a lot of progress, but I feel like this new transformer model opens up a new space for us to solve these problems.
I hope we get to do a dedicated look at ray reconstruction, because it was so nascent a technology; it feels like this is almost a larger leap than what we're seeing with super resolution.
Bryan Catanzaro: I think that's true.
Another part of this is frame gen, which now doesn't use hardware optical flow as it did on RTX 40-series. Why make that change?
Bryan Catanzaro: Well, because we get better results that way. Technology is always a function of the time in which it's built. When we built DLSS 3 frame generation, we absolutely needed hardware acceleration to compute optical flow as we didn't have enough Tensor cores and we didn't have a real-time optical flow algorithm that ran on Tensor cores that could fit our compute budget. So we instead used the optical flow accelerator, which Nvidia had been building for years as an evolution of our video encoder technology and our automotive computer vision acceleration for self-driving cars and so on. The difficult part about any sort of hardware implementation of an algorithm like optical flow is that it's really difficult to improve it; it is what it is. The failures that arose from that hardware optical flow couldn't be undone with a smarter neural network, so we decided to just replace them with a fully AI-based solution, which is what we've done for frame generation in DLSS 4.
This new frame generation algorithm is significantly more Tensor core heavy, and so it still has a lot of hardware requirements, but it has a few good properties. One is it uses less memory, which is important as we're always trying to save every megabyte. Two is it has better image quality, and that's especially important for the 50-series MFG, because the percentage of time that a gamer is looking at generated frames is much higher and therefore any artefacts are going to be much more visible. So we needed to make image quality better.
Three is we needed to make the algorithm cheaper to run in terms of milliseconds, especially for the 50-series cards when we're doing MFG. What we wanted to do was make it possible to amortise a lot of the work over the multiple frames that we're generating. If you think about it, there's really two rendered frames that we're analysing in order to create a series of frames in between those. And it seems like you should do that comparison once, and then you should do some other thing to generate each frame. And so that required a different algorithm.
Now that frame generation is running wholly on Tensor cores, obviously it's more intensive, but what's keeping it from running on RTX 3000?
Bryan Catanzaro: I think this is a question of optimisation, engineering and user experience. We're launching this multi frame generation with the 50-series, and we'll see what we're able to squeeze out of older hardware in the future.
Another part of this is frame pacing, which has always actually been an extreme challenge, especially in a VRR scenario. What has changed with regards to frame pacing between DLSS 3 frame generation and DLSS 4 frame generation?
Bryan Catanzaro: We have an updated flip metering system in Blackwell that has much lower variability and takes the CPU out of the equation when deciding exactly when to present a frame. Because of that, we're able to reduce the displayed frame time variability by about a factor of five or 10 compared with our previous best frame pacing. This is especially important for multi frame generation, because the more frames you're trying to show, the more the variability really starts throwing a wrench into the experience.
I'm very curious to see if those frame pacing improvements would affect, for example, RTX 40-series as well?
Bryan Catanzaro: DLSS 4 is just better than DLSS 3, so I expect that things will be better on 40-series as well.
Another element of Nvidia's frame generation is using Reflex to reduce latency, which now has a generative AI aspect to it with Reflex 2. Can you talk a bit about it?
Bryan Catanzaro: I'm always thinking about real-time graphics in three dimensions: smoothness, responsiveness and image quality - which includes ray tracing and higher resolution and better textures and all that. With DLSS, we want to improve on all those areas. We're excited about Reflex 2 because it's a new way of thinking about lowering latency. What we're doing is actually rendering the scene in the normal way, but right before we go to finalise the image, we sample the camera position again to see if the user has moved the camera while the GPU has been rendering that frame. If that happens, we warp the image to the new camera position. For most pixels, that's going to look really good and it dramatically lowers the latency between the mouse and the camera. Sometimes when the camera moves, something that was hidden before is revealed, and you would then have a hole with no information on what should be there: disocclusion. The trick with a technique like Reflex 2 is filling in those holes to make a convincing-looking image. And the trade-offs that we've made with Reflex 2 are going to be really exciting for gamers that are really latency sensitive. I think there's still more work to do to make the image quality even better, and you can imagine that AI has a big role to play here as well.
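To make the idea of a late camera warp a little more concrete, here is a deliberately simplified sketch of forward-warping a rendered frame by a screen-space offset and flagging the disoccluded pixels that would need in-painting. It is a conceptual toy in NumPy under our own assumptions (a parallax-style shift scaled by inverse depth, nearest-pixel scatter, made-up helper names); Reflex 2's actual warp and hole fill are far more sophisticated and have not been published.

```python
import numpy as np

def late_warp(frame, depth, dx, dy):
    """Toy 'late warp': scatter each pixel to a new position, with the shift
    scaled by inverse depth so nearer geometry moves further (a crude stand-in
    for reprojecting to the newest camera pose). Pixels that nothing lands on
    are returned as holes -- the disocclusion that would need in-painting.
    Conceptual only; not Nvidia's implementation.
    """
    h, w = depth.shape
    warped = np.zeros_like(frame)
    covered = np.zeros((h, w), dtype=bool)
    ys, xs = np.mgrid[0:h, 0:w]
    shift_x = np.round(dx / depth).astype(int)
    shift_y = np.round(dy / depth).astype(int)
    tx = np.clip(xs + shift_x, 0, w - 1)
    ty = np.clip(ys + shift_y, 0, h - 1)
    warped[ty, tx] = frame[ys, xs]
    covered[ty, tx] = True
    return warped, ~covered

# Tiny usage example with random data standing in for a rendered frame.
frame = np.random.rand(4, 6, 3)             # HxWx3 colour image
depth = np.random.rand(4, 6) + 0.5          # avoid zero depth
warped, holes = late_warp(frame, depth, dx=1.0, dy=0.0)
print("disoccluded pixels to fill:", int(holes.sum()))
```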
Yeah, it's interesting too, because input latency is a matter of perception, and this is completely playing with that. On a technical level, it's not actually moving the real 3D scene - it's a 2D image manipulation, right? But you're almost getting the same effect.
Bryan Catanzaro: It's pretty fun to me. It feels totally different playing a game with Reflex 2, it just feels so much more connected. I think a lot of gamers are going to love it, especially in certain titles that are very latency sensitive. But you know, DLSS is trying to give people more options so they can play how they want - if they want to lower latency, if they want to increase image quality, if they want smoothness. DLSS has something for everybody.
The ability to choose two, three or four inserted frames with frame generation.
Bryan Catanzaro: Yeah, it's a big deal, and you can do that in the Nvidia app as well, which is useful to override games that were developed with DLSS 3 frame generation and don't have a UI for selecting 2x, 3x or 4x frame generation. Rather than trying to update all the UIs for all the games, we figured it would be useful for gamers to be able to choose what they'd like.
Coming onto multi frame generation, what is the lowest acceptable input frame-rate for MFG?
Bryan Catanzaro: I think that the acceptable input frame rate is still about the same for 3x or 4x as it was for 2x. I think the challenges really have to do with how large the movement is between two consecutive rendered frames. When the movement gets very large, it becomes much harder to figure out what to do in between those frames. But if you understand how an object is moving, dividing the motion into smaller pieces isn't actually that tricky, right? So the trick is figuring out how the objects are moving, and that's kind of independent of how many frames we're generating.
Where do you see the future of frame generation? Now we're taking whatever kind of raw performance we can get and blowing it up for a minor performance and latency cost, but eventually we're going to have 1000Hz monitors. Where does frame generation fit into that future?
Bryan Catanzaro: Well, I'm excited about 1000Hz monitors. I think that's going to feel amazing - and we're going to be using a lot of frame gen to get to 1000Hz. Graphics is shifting; we've been on this journey of redefining graphics with neural rendering for almost seven years and we're still at the beginning. If we think about the approximations that we use for graphics, there's still a lot that we would like to get rid of. One that you brought up earlier is subsurface scattering. It's kind of crazy that in 3D graphics today we're mostly simulating a 2D manifold; we're not actually doing 3D graphics. We're bouncing light off of pieces of paper that are like origami heads or something, but we're not actually moving rays through 3D objects. Most of the time, for opaque things, that probably doesn't matter, but for a lot of things that are semi-translucent - a lot of the things that make the world feel real and textured - we actually do need to do a better job of working with light transport in three dimensions, like through materials. And so you ask yourself, what's the role of a polygon? If the job is to think about how light interacts through three-dimensional objects, the model that we've been using for the past 50 years - "let's really carefully model the outside surface of an object" - that's probably not the right representation.
And so this phenomenon is that we're finding neural representations and neural rendering algorithms that are able to learn from real-world data and from very expensive simulations that would never be real time, so we're able to come up with technologies that are going to be much more realistic and convincing than we could ever do with traditional "bottom-up" rendering. Bottom-up rendering is when you're trying to model every fuzzy hair and every snowflake and every drop of water and every light photon, so that we can simulate reality. At some point, you know, we're making a shift away from this explicit, bottom-up kind of graphics towards a more top-down generated graphics where we learn, for example, how snowflakes look. When a painter paints a scene, they're not actually simulating every photon and every facet of every piece of geometry. They just know what it's supposed to look like. And so I think neural rendering is moving in that direction, and I'm very excited about the prospects of overcoming a lot of the limitations of today's graphics, which I think are really difficult to scale. You know, the more fidelity we put in bottom-up simulation, the more work we have to do to capture textures and geometry and animate it. It becomes very expensive and really challenging. A lot is held back because we just don't have the artist bandwidth, we don't have the time or the storage to save everything. But we're going to have neural materials, neural rendering algorithms, neural radiance caches; we're going to find ways of using AI in order to understand how the world should be drawn, and that's going to open up a lot of new possibilities to make games more interesting-looking and more fun.
Yeah, one of the things that I've always lamented about polygon-based graphics is the inability to represent anything like heterogeneous volumes, where ray tracing is almost impossible in real time. So I'm happy that neural rendering is going to start bridging that gap, for more complex deformable materials, fluid simulations, all these things. That's what I hope we see in the future.
Bryan Catanzaro: That's where we're headed, for sure.
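As a brief aside on the multi frame generation answer above: the "estimate the motion once, then subdivide it" idea can be illustrated with a toy linear-interpolation sketch. This is purely our own simplification in NumPy (per-pixel linear motion, hypothetical names), not how DLSS 4's frame generation actually works.

```python
import numpy as np

def generate_intermediate(frame_a, frame_b, count):
    # "Motion" here is just the per-pixel difference between the two rendered
    # frames, computed a single time and then reused for every generated frame.
    delta = frame_b - frame_a
    fractions = [(i + 1) / (count + 1) for i in range(count)]   # 1/4, 2/4, 3/4 for 4x mode
    return [frame_a + t * delta for t in fractions]

# Two tiny stand-in "frames"; 4x MFG renders two frames and generates three between them.
frame_a = np.zeros((2, 2))
frame_b = np.full((2, 2), 8.0)
for i, f in enumerate(generate_intermediate(frame_a, frame_b, count=3), start=1):
    print(f"generated frame {i}:\n{f}")
```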