• Metal Detectorists Discover 1,200-Year-Old Graves That May Have Belonged to High-Status Viking Women
    www.smithsonianmag.com
Excavations in Norway revealed a rich variety of artifacts, including jewelry, textile tools and stones positioned in the shape of a ship.

Researchers think there may be as many as 20 graves at the site in southwest Norway. Søren Diinhoff / University Museum of Bergen

Archaeologists have unearthed coins, jewelry and stones from graves in Norway that likely belonged to high-status Viking women, reports Science Norway's Ida Irene Bergstrøm.

Initially discovered by a group of amateur metal detectorists in the fall of 2023, the graves date to between 800 and 850 C.E. That lines up with the beginning of the Viking Age, which ran from around 800 to 1050 C.E.

During excavations, which concluded in late 2024, archaeologists found a rich variety of artifacts. One grave contained fragments of gilded oval brooches, part of a metal cauldron and a book clasp that had been repurposed as jewelry. Archaeologists think the clasp may have come from a Christian monastery.

"We think that the clasp in the first grave could very well have come from a Bible in England or Ireland," says Søren Diinhoff, an archaeologist with the University Museum of Bergen, to Fox News Digital's Andrea Margolis. "It had been ripped off and brought back to Norway, where it eventually ended up as a woman's brooch."

In another grave, they found 11 silver coins and a necklace made of 46 glass beads. They also discovered trefoil brooches that were likely used to fasten clothing. The brooches appear to have been repurposed from the clasps of Carolingian sword belts, according to a statement from the researchers.

They also found a bronze key and what is likely a frying pan, as well as items that were used to produce textiles, such as a spindle whorl, a weaving sword and wool shears.
These items suggest that the woman buried here may have been the head of the household and managed the farm's textile production operations. "Textile production was prestigious," Diinhoff tells Science Norway. "Farms that made fine clothing held high status."

One of the coins is likely a rare "Hedeby coin" that was made in southern Denmark between 823 and 840 C.E. Søren Diinhoff / University Museum of Bergen

Experts at the University Museum of Bergen are still studying the coins. But they've already deduced that one is a rare Hedeby coin, minted in the early ninth century C.E. in southern Denmark. Hedeby coins are among the earliest known Scandinavian-made coins. The other ten coins were likely minted during the reign of Louis I, the son of Charlemagne and a Carolingian ruler of the Franks.

Some of the artifacts appear to have originated in England and Ireland, which is indicative of the Vikings' long-distance trade routes. But the women may also have had their own ties to continental Europe. "Both of these women had contacts outside Norway," Diinhoff tells Science Norway. "It's probably no coincidence. Perhaps they came from abroad and married into the local community."

Researchers didn't find any bones in the graves. It's possible that the human remains disintegrated, which is common because of the makeup of western Norway's soil. But another theory is that the graves were empty to begin with. They may have been cenotaphs, or memorials honoring individuals who've been buried somewhere else. Researchers suspect this is likely the case, as the necklace appears to have been buried inside a leather pouch rather than around someone's neck.

One of the graves contained a necklace made of 46 glass beads. Søren Diinhoff / University Museum of Bergen

The graves are located in the municipality of Fitjar, an area along the country's southwest coast.
During the Viking Age, the site was a farm that likely belonged to the local or regional king, according to the archaeologists. Since it's so close to the coast, maritime travelers may have used the farm as a rest stop. That theory is bolstered by the fact that one of the graves contained rocks positioned in the shape of a ship. "On behalf of the king, shelter was provided to passing ships, which likely generated additional income," Diinhoff tells Science Norway.

Archaeologists hope to return soon for further research, as they have only just started excavating a third grave. They think there may be as many as 20 graves in the area, and now it's a race against time before they're destroyed. "They are found just below the turf, and there are so many ways they can be ruined," Diinhoff tells Fox News Digital. "We hope to be able to excavate a few graves every year."
  • In the future, we will all manage our own AI agents | Jensen Huang Q&A
    venturebeat.com
Jensen Huang, CEO of Nvidia, gave an eye-opening keynote talk at CES 2025 last week. It was highly appropriate, as Huang's favorite subject of artificial intelligence has exploded across the world, and Nvidia has, by extension, become one of the most valuable companies in the world. Apple recently passed Nvidia with a market capitalization of $3.58 trillion, compared to Nvidia's $3.33 trillion.

The company is celebrating the 25th year of its GeForce graphics chip business, and it has been a long time since I did the first interview with Huang back in 1996, when we talked about graphics chips for a Windows accelerator. Back then, Nvidia was one of 80 3D graphics chip makers. Now it's one of around three or so survivors. And it has made a huge pivot from graphics to AI.

Huang hasn't changed much. For the keynote, Huang announced a video game graphics card, the Nvidia GeForce RTX 50 Series, but there were a dozen AI-focused announcements about how Nvidia is creating the blueprints and platforms to make it easy to train robots for the physical world. In fact, in a feature dubbed DLSS 4, Nvidia is now using AI to make its graphics chips' frame rates better. And there are technologies like Cosmos, which helps robot developers use synthetic data to train their robots. A few of these Nvidia announcements were among my 13 favorite things at CES.

After the keynote, Huang held a free-wheeling Q&A with the press at the Fontainebleau hotel in Las Vegas. At first, he engaged in a hilarious discussion with the audio-visual team in the room about the sound quality, as he couldn't hear questions up on stage. So he came down among the press and, after teasing the AV team guy named Sebastian, he answered all of our questions, and he even took a selfie with me.
Then he took a bunch of questions from financial analysts.

I was struck by how technical Huang's command of AI was during the keynote, though it reminded me more of a Siggraph technology conference than a keynote speech for consumers at CES. I asked him about that, and you can see his answer below. I've included the whole Q&A from all of the press in the room. Here's an edited transcript of the press Q&A.

Jensen Huang, CEO of Nvidia, at CES 2025 press Q&A.

Question: Last year you defined a new unit of compute, the data center, starting with the building and working down. You've done everything all the way up to the system now. Is it time for Nvidia to start thinking about infrastructure, power, and the rest of the pieces that go into that system?

Jensen Huang: As a rule, Nvidia only works on things that other people do not, or that we can do singularly better. That's why we're not in that many businesses. The reason why we do what we do: if we didn't build NVLink72, who would have? Who could have? If we didn't build switches like Spectrum-X, this Ethernet switch that has the benefits of InfiniBand, who could have? Who would have? We want our company to be relatively small. We're only 30-some-odd thousand people. We're still a small company. We want to make sure our resources are highly focused on areas where we can make a unique contribution.

We work up and down the supply chain now. We work with power delivery and power conditioning, the people who are doing that, cooling and so on. We try to work up and down the supply chain to get people ready for these AI solutions that are coming. Hyperscale was about 10 kilowatts per rack. Hopper is 40 to 50 to 60 kilowatts per rack. Now Blackwell is about 120 kilowatts per rack. My sense is that that will continue to go up. We want it to go up, because power density is a good thing. We'd rather have computers that are dense and close by than computers that are disaggregated and spread out all over the place. Density is good.
We're going to see that power density go up. We'll do a lot better cooling inside and outside the data center, much more sustainably. There's a whole bunch of work to be done. We try not to do things that we don't have to.

HP EliteBook Ultra G1i 14-inch notebook, a next-gen AI PC.

Question: You made a lot of announcements about AI PCs last night. Adoption of those hasn't taken off yet. What's holding that back? Do you think Nvidia can help change that?

Huang: AI started in the cloud and was created for the cloud. If you look at all of Nvidia's growth in the last several years, it's been the cloud, because it takes AI supercomputers to train the models. These models are fairly large. It's easy to deploy them in the cloud. They're called endpoints, as you know. We think that there are still designers, software engineers, creatives, and enthusiasts who'd like to use their PCs for all these things. One challenge is that because AI is in the cloud, and there's so much energy and movement in the cloud, there are still very few people developing AI for Windows.

It turns out that the Windows PC is perfectly adapted to AI. There's this thing called WSL2. WSL2 is a virtual machine, a second operating system, Linux-based, that sits inside Windows. WSL2 was created to be essentially cloud-native. It supports Docker containers. It has perfect support for CUDA. We're going to take the AI technology we're creating for the cloud and now, by making sure that WSL2 can support it, we can bring the cloud down to the PC. I think that's the right answer. I'm excited about it. All the PC OEMs are excited about it. We'll get all these PCs ready with Windows and WSL2. All the energy and movement of the AI cloud, we'll bring it right to the PC.

Question: Last night, in certain parts of the talk, it felt like a SIGGRAPH talk. It was very technical. You've reached a larger audience now.
I was wondering if you could explain some of the significance of last night's developments, the AI announcements, for this broader crowd of people who have no clue what you were talking about.

Huang: As you know, Nvidia is a technology company, not a consumer company. Our technology influences, and is going to impact, the future of consumer electronics. But it doesn't change the fact that I could have done a better job explaining the technology. Here's another crack at it.

One of the most important things we announced yesterday was a foundation model that understands the physical world. Just as GPT was a foundation model that understands language, and Stable Diffusion was a foundation model that understood images, we've created a foundation model that understands the physical world. It understands things like friction, inertia, gravity, object presence and permanence, and geometric and spatial relationships. These are things that children know. They understand the physical world in a way that language models today don't. We believe that there needs to be a foundation model that understands the physical world.

Once we create that, all the things you could do with GPT and Stable Diffusion, you can now do with Cosmos. For example, you can talk to it. You can talk to this world model and say, "What's in the world right now?" Based on the scene, it would say, "There's a lot of people sitting in a room in front of desks. The acoustics performance isn't very good." Things like that. Cosmos is a world model, and it understands the world.

Nvidia is marrying tech for AI in the physical world with digital twins.

The question is, why do we need such a thing? The reason is, if you want AI to be able to operate and interact in the physical world sensibly, you're going to have to have an AI that understands that. Where can you use that? Self-driving cars need to understand the physical world. Robots need to understand the physical world. These models are the starting point of enabling all of that.
Just as GPT enabled everything we're experiencing today, just as Llama is very important to activity around AI, just as Stable Diffusion triggered all these generative imaging and video models, we would like to do the same with Cosmos, the world model.

Question: Last night you mentioned that we're seeing some new AI scaling laws emerge, specifically around test-time compute. OpenAI's o3 model showed that scaling inference is very expensive from a compute perspective. Some of those runs were thousands of dollars on the ARC-AGI test. What is Nvidia doing to offer more cost-effective AI inference chips, and more broadly, how are you positioned to benefit from test-time scaling?

Huang: The immediate solution for test-time compute, both in performance and affordability, is to increase our computing capabilities. That's why, with Blackwell and NVLink72, the inference performance is probably some 30 or 40 times higher than Hopper's. By increasing the performance by 30 or 40 times, you're driving the cost down by 30 or 40 times. The data center costs about the same.

The reason why Moore's Law is so important in the history of computing is that it drove down computing costs. The reason why I spoke about the performance of our GPUs increasing by 1,000 or 10,000 times over the last 10 years is that by talking about that, we're inversely saying that we took the cost down by 1,000 or 10,000 times. In the course of the last 20 years, we've driven the marginal cost of computing down by 1 million times. Machine learning became possible. The same thing is going to happen with inference. When we drive up the performance, the cost of inference will come down as a result.

The second way to think about that question: today it takes a lot of iterations of test-time compute, test-time scaling, to reason about the answer. Those answers are going to become the data for the next round of post-training. That data becomes the data for the next round of pre-training.
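Huang's inverse relationship between performance and cost can be sketched in a few lines of arithmetic. This is a generic back-of-the-envelope illustration, not Nvidia pricing: the monthly cost and throughput figures are made-up placeholders, and only the 30-to-40x ratio comes from his answer (rounded here to 32x for a concrete number).

```python
# Back-of-the-envelope: if a data center costs about the same per month
# but serves N times more tokens per second, cost per token falls by N.
def cost_per_million_tokens(monthly_cost_usd: float, tokens_per_second: float) -> float:
    tokens_per_month = tokens_per_second * 60 * 60 * 24 * 30
    return monthly_cost_usd / tokens_per_month * 1_000_000

# Hypothetical racks: same monthly facility cost, 32x the inference throughput.
old_gen = cost_per_million_tokens(monthly_cost_usd=100_000, tokens_per_second=10_000)
new_gen = cost_per_million_tokens(monthly_cost_usd=100_000, tokens_per_second=320_000)

print(f"old: ${old_gen:.2f} per 1M tokens")
print(f"new: ${new_gen:.2f} per 1M tokens")  # 32x cheaper per token
```

The point is simply that when the facility cost is held roughly constant, the cost-per-token ratio equals the throughput ratio, which is the Blackwell-versus-Hopper claim in the answer above.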
All of the data that's being collected is going into the pool of data for pre-training and post-training. We'll keep pushing that into the training process, because it's cheaper to have one supercomputer become smarter and train the model so that everyone's inference cost goes down.

However, that takes time. All three of these scaling laws are going to happen for a while. They're going to happen concurrently no matter what. We're going to make all the models smarter in time, but people are going to ask tougher and tougher questions and ask models to do smarter and smarter things. Test-time scaling will go up.

Question: Do you intend to further increase your investment in Israel?

A neural face rendering.

Huang: We recruit highly skilled talent from almost everywhere. I think there are more than a million resumes on Nvidia's website from people who are interested in a position. The company only employs 32,000 people. Interest in joining Nvidia is quite high. The work we do is very interesting. There's a very large opportunity for us to grow in Israel.

When we purchased Mellanox, I think they had 2,000 employees. Now we have almost 5,000 employees in Israel. We're probably the fastest-growing employer in Israel. I'm very proud of that. The team is incredible. Through all the challenges in Israel, the team has stayed very focused. They do incredible work. During this time, our Israel team created NVLink. Our Israel team created Spectrum-X and BlueField-3. All of this happened in the last several years. I'm incredibly proud of the team. But we have no deals to announce today.

Question: Multi-frame generation, is that still rendering two frames and then generating in between? Also, with the texture compression stuff, RTX Neural Materials, is that something game developers will need to specifically adopt, or can it be done driver-side to benefit a larger number of games?

Huang: There's a deep briefing coming out. You guys should attend that.
But what we did with Blackwell, we added the ability for the shader processor to process neural networks. You can put code in and intermix it with a neural network in the shader pipeline. The reason why this is so important is because textures and materials are processed in the shader. If the shader can't process AI, you won't get the benefit of some of the algorithmic advances that are available through neural networks, like, for example, compression. You could compress textures a lot better today than with the algorithms we've been using for the last 30 years. The compression ratio can be dramatically increased. The size of games is so large these days. When we can compress those textures by another 5X, that's a big deal.

Next, materials. The way light travels across a material, its anisotropic properties, causes it to reflect light in a way that indicates whether it's gold paint or gold. The way that light reflects and refracts across their microscopic, atomic structure causes materials to have those properties. Describing that mathematically is very difficult, but we can learn it using an AI. Neural materials is going to be completely groundbreaking. It will bring a vibrancy and a lifelikeness to computer graphics. Both of these require content-side work. It's content, obviously. Developers will have to develop their content in that way, and then they can incorporate these things.

With respect to DLSS, the frame generation is not interpolation. It's literally frame generation. You're predicting the future, not interpolating the past. The reason for that is because we're trying to increase the framerate. DLSS 4, as you know, is completely groundbreaking. Be sure to take a look at it.

Question: There's a huge gap between the 5090 and 5080. The 5090 has more than twice the cores of the 5080, and more than twice the price. Why are you creating such a distance between those two?

Huang: When somebody wants to have the best, they go for the best. The world doesn't have that many segments.
Most of our users want the best. If we give them slightly less than the best to save $100, they're not going to accept that. They just want the best.

Of course, $2,000 is not small money. It's high value. But that technology is going to go into your home theater PC environment. You may have already invested $10,000 into displays and speakers. You want the best GPU in there. A lot of those customers just absolutely want the best.

Question: With the AI PC becoming more and more important for PC gaming, do you imagine a future where there are no more traditionally rendered frames?

Nvidia RTX AI PCs.

Huang: No. The reason for that is because, remember when ChatGPT came out and people said, "Oh, now we can just generate whole books"? But nobody internally expected that. It's called conditioning. We now condition the chat, or the prompts, with context. Before you can understand a question, you have to understand the context. The context could be a PDF, or a web search, or exactly what you told it the context is. The same thing with images. You have to give it context.

The context in a video game has to be relevant, and not just story-wise, but spatially relevant, relevant to the world. When you condition it and give it context, you give it some early pieces of geometry or early pieces of texture. It can generate and up-rez from there. The conditioning, the grounding, is the same thing you would do with ChatGPT and context there. In enterprise usage it's called RAG, retrieval-augmented generation. In the future, 3D graphics will be grounded, conditioned generation.

Let's look at DLSS 4. Out of 33 million pixels in these four frames, we've rendered one frame and generated three. Of those 33 million pixels, we've rendered 2 million. Isn't that a miracle? We've literally rendered 2 million and generated 31 million. The reason why that's such a big deal is that those 2 million pixels have to be rendered at precisely the right points. From that conditioning, we can generate the other 31 million.
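Huang's DLSS 4 pixel budget can be reproduced directly. A minimal sketch of the arithmetic, assuming 4K output (3840x2160, which is not stated in the answer) and the pattern he describes: one frame in four rendered conventionally, and of that frame only about one pixel in four, with everything else generated:

```python
# DLSS 4 pixel budget, as described: four 4K frames, one rendered at a
# quarter of its pixels (super resolution), three frames fully generated.
WIDTH, HEIGHT = 3840, 2160               # assumed 4K output resolution
FRAMES = 4                               # 1 rendered frame + 3 generated frames

total_pixels = WIDTH * HEIGHT * FRAMES
rendered_pixels = (WIDTH * HEIGHT) // 4  # ~1 in 4 pixels of the rendered frame
generated_pixels = total_pixels - rendered_pixels

print(f"total:     {total_pixels / 1e6:.1f}M pixels")    # ~33.2M
print(f"rendered:  {rendered_pixels / 1e6:.1f}M pixels") # ~2.1M
print(f"generated: {generated_pixels / 1e6:.1f}M pixels")
```

The totals land at roughly 33 million, 2 million and 31 million pixels, matching the figures in the answer.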
Not only is that amazing, but those 2 million pixels can be rendered beautifully. We can apply tons of computation, because the computing we would have applied to the other 31 million, we now channel and direct at just those 2 million. Those 2 million pixels are incredibly complex, and they can inspire and inform the other 31 million.

The same thing will happen in video games in the future. I've just described what will happen to not just the pixels we render, but the geometry we render, the animation we render and so on. Now that AI is integrated into computer graphics, this neural rendering system we've created is common sense. It took about six years. The first time I announced DLSS, it was universally disbelieved. Part of that is because we didn't do a very good job of explaining it. But it took that long for everyone to realize that generative AI is the future. You just need to condition it and ground it with the artist's intention.

We did the same thing with Omniverse. The reason why Omniverse and Cosmos are connected together is because Omniverse is the 3D engine for Cosmos, the generative engine. We control everything completely in Omniverse, and now we can control as little as we want, as little as we can, so we can generate as much as we can. What happens when we control less? Then we can simulate more. The world that we can now simulate in Omniverse can be gigantic, because we have a generative engine on the other side making it look beautiful.

Question: Do you see Nvidia GPUs starting to handle the logic in future games with AI computation? Is it a goal to bring both graphics and logic onto the GPU through AI?

Huang: Yes. Absolutely. Remember, the GPU is Blackwell. Blackwell can generate text and language. It can reason. An entire agentic AI, an entire robot, can run on Blackwell. Just like it runs in the cloud or in the car, we can run that entire robotics loop inside Blackwell.
Just like we could do fluid dynamics or particle physics in Blackwell. CUDA is exactly the same. The architecture of Nvidia is exactly the same in the robot, in the car, in the cloud, in the game system. That's the good decision we made. Software developers need to have one common platform. When they create something, they want to know that they can run it everywhere.

Yesterday I said that we're going to create the AI in the cloud and run it on your PC. Who else can say that? It's exactly CUDA compatible. The container in the cloud, we can take it down and run it on your PC. The SDXL NIM, it's going to be fantastic. The FLUX NIM? Fantastic. Llama? Just take it from the cloud and run it on your PC. The same thing will happen in games.

Nvidia NIM (Nvidia inference microservices).

Question: There's no question about the demand for your products from hyperscalers. But can you elaborate on how much urgency you feel in broadening your revenue base to include enterprise, to include government, and building your own data centers? Especially when customers like Amazon are looking to build their own AI chips. Second, could you elaborate more for us on how much you're seeing from enterprise development?

Huang: Our urgency comes from serving customers. It's never weighed on me that some of my customers are also building other chips. I'm delighted that they're building in the cloud, and I think they're making excellent choices. Our technology rhythm, as you know, is incredibly fast. When we increase performance every year by a factor of two, say, we're essentially decreasing costs by a factor of two every year. That's way faster than Moore's Law at its best. We're going to respond to customers wherever they are.

With respect to enterprise, the important thing is that enterprises today are served by two industries: the software industry, ServiceNow and SAP and so forth, and the solution integrators that help them adapt that software into their business processes.
Our strategy is to work with those two ecosystems and help them build agentic AI. NeMo and blueprints are the toolkits for building agentic AI. The work we're doing with ServiceNow, for example, is just fantastic. They're going to have a whole family of agents that sit on top of ServiceNow and help do customer support. That's our basic strategy. With the solution integrators, we're working with Accenture and others. Accenture is doing critical work to help customers integrate and adopt agentic AI into their systems.

Step one is to help that whole ecosystem develop AI, which is different from developing software. They need a different toolkit. I think we've done a good job this last year of building up the agentic AI toolkit, and now it's about deployment and so on.

Question: It was exciting last night to see the 5070 and the price decrease. I know it's early, but what can we expect from the 60-series cards, especially in the sub-$400 range?

Huang: It's incredible that we announced four RTX Blackwells last night, and the lowest-performance one has the performance of the highest-end GPU in the world today. That puts in perspective the incredible capabilities of AI. Without AI, without the tensor cores and all of the innovation around DLSS 4, this capability wouldn't be possible. I don't have anything to announce. Is there a 60? I don't know. It is one of my favorite numbers, though.

Question: You talked about agentic AI. Lots of companies have talked about agentic AI now. How are you working with or competing with companies like AWS, Microsoft and Salesforce, who have platforms in which they're also telling customers to develop agents? How are you working with those guys?

Huang: We're not a direct-to-enterprise company. We're a technology platform company. We develop the toolkits, the libraries and the AI models for the ServiceNows. That's our primary focus.
Our primary focus is ServiceNow and SAP and Oracle and Synopsys and Cadence and Siemens, the companies that have a great deal of expertise, but for whom the library layer of AI is not an area that they want to focus on. We can create that for them.

It's complicated, because essentially we're talking about putting a ChatGPT in a container. That endpoint, that microservice, is very complicated. When they use ours, they can run it on any platform. We develop the technology, NIMs and NeMo, for them. Not to compete with them, but for them. If any of our CSPs would like to use them, they can, and many of our CSPs have used NeMo to train their large language models or their engine models. They have NIMs in their cloud stores. We created all of this technology layer for them.

The way to think about NIMs and NeMo is the way to think about CUDA and the CUDA-X libraries. The CUDA-X libraries are important to the adoption of the Nvidia platform. These are things like cuBLAS for linear algebra, cuDNN for the deep neural network processing engine that revolutionized deep learning, CUTLASS, all these fancy libraries that we've been talking about. We created those libraries for the industry so that they don't have to. We're creating NeMo and NIMs for the industry so that they don't have to.

Question: What do you think are some of the biggest unmet needs in the non-gaming PC market today?

Nvidia's Project Digits, based on GB110.

Huang: DIGITS stands for Deep Learning GPU Intelligence Training System. That's what it is. DIGITS is a platform for data scientists and machine learning engineers. Today they're using their PCs and workstations to do that work. For most people's PCs, doing machine learning and data science, running PyTorch and whatever it is, is not optimal. We now have this little device that sits on your desk. It's wireless. The way you talk to it is the way you talk to the cloud.
It's like your own private AI cloud.

The reason you want that is because if you're working on your machine, you're always on that machine. If you're working in the cloud, you're always in the cloud. The bill can be very high. We make it possible to have that personal development cloud. It's for data scientists and students and engineers who need to be on the system all the time. I think with DIGITS, there's a whole universe waiting for it. It's very sensible, because AI started in the cloud and ended up in the cloud, but it has left the world's computers behind. We just have to figure something out to serve that audience.

Question: You talked yesterday about how robots will soon be everywhere around us. Which side do you think robots will stand on, with humans or against them?

Huang: With humans, because we're going to build them that way. The idea of superintelligence is not unusual. As you know, I have a company with many people who are, to me, superintelligent in their field of work. I'm surrounded by superintelligence. I prefer to be surrounded by superintelligence rather than the alternative. I love the fact that my staff, the leaders and the scientists in our company, are superintelligent. I'm of average intelligence, but I'm surrounded by superintelligence.

That's the future. You're going to have superintelligent AIs that will help you write, analyze problems, do supply chain planning, write software, design chips and so on. They'll build marketing campaigns or help you do podcasts. You're going to have superintelligence helping you to do many things, and it will be there all the time. Of course, the technology can be used in many ways. It's humans that are harmful. Machines are machines.

Question: In 2017, Nvidia displayed a demo car at CES, a self-driving car. You partnered with Toyota that May. What's the difference between 2017 and 2025?
What were the issues in 2017, and what are the technological innovations being made in 2025?

Back in 2017: Toyota will use Nvidia chips for self-driving cars.

Huang: First of all, everything that moves in the future will be autonomous, or have autonomous capabilities. There will be no lawn mowers that you push. I want to see, in 20 years, someone pushing a lawn mower. That would be very fun to see. It makes no sense. In the future, you could still decide to drive, but all cars will have the ability to drive themselves. From where we are today, with 1 billion cars on the road and none of them driving by themselves, to, let's say, picking our favorite time, 20 years from now: I believe that cars will be able to drive themselves. Five years ago it was less certain how robust the technology was going to be. Now it's very certain that the sensor technology, the computer technology and the software technology are within reach. There's too much evidence now that in a new generation of cars, particularly electric cars, almost every one of them will be autonomous, or have autonomous capabilities.

If there are two drivers that really changed the minds of the traditional car companies, one of course is Tesla. They were very influential. But the single greatest impact is the incredible technology coming out of China. The neo-EVs, the new EV companies, BYD, Li Auto, XPeng, Xiaomi, NIO: their technology is so good. The autonomous vehicle capability is so good. It's now coming out to the rest of the world. It's set the bar. Every car manufacturer has to think about autonomous vehicles. The world is changing. It took a while for the technology to mature, and for our own sensibility to mature. I think now we're there. Waymo is a great partner of ours. Waymo is now all over the place in San Francisco.

Question: About the new models that were announced yesterday, Cosmos and NeMo and so on, are those going to be part of smart glasses?
Given the direction the industry is moving in, it seems like that's going to be a place where a lot of people experience AI agents in the future?

Cosmos generates synthetic driving data.

Huang: I'm so excited about smart glasses that are connected to AI in the cloud. What am I looking at? How should I get from here to there? You could be reading and it could help you read. The use of AI as it gets connected to wearables and virtual presence technology with glasses, all of that is very promising.

The way we use Cosmos: Cosmos in the cloud will give you visual penetration. If you want something in the glasses, you use Cosmos to distill a smaller model. Cosmos becomes a knowledge transfer engine. It transfers its knowledge into a much smaller AI model. The reason you're able to do that is because the smaller AI model becomes highly focused. It's less generalizable. That's why it's possible to narrowly transfer knowledge and distill it into a much tinier model. It's also the reason why we always start by building the foundation model. Then we can build a smaller one and a smaller one through that process of distillation: teacher and student models.

Question: The 5090 announced yesterday is a great card, but one of the challenges with getting neural rendering working is what will be done with Windows and DirectX. What kind of work are you looking to put forward to help teams minimize the friction of getting engines implemented, and also to incentivize Microsoft to work with you to make sure they improve DirectX?

Huang: Wherever new evolutions of the DirectX API are needed, Microsoft has been super collaborative throughout the years. We have a great relationship with the DirectX team, as you can imagine. As we're advancing our GPUs, if the API needs to change, they're very supportive. For most of the things we do with DLSS, the API doesn't have to change. It's actually the engine that has to change. Semantically, it needs to understand the scene.
The scene is much more inside Unreal or Frostbite, the engine of the developer. That's the reason why DLSS is integrated into a lot of the engines today. Once the DLSS plumbing has been put in, particularly starting with DLSS 2, 3, and 4, then when we update DLSS 4, even though the game was developed for 3, you'll have some of the benefits of 4 and so on. The plumbing for the scene-understanding AIs, the AIs that process based on semantic information in the scene, you really have to do in the engine.

Question: All these big tech transitions are never done by just one company. With AI, do you think there's anything missing that is holding us back, any part of the ecosystem?

Agility Robotics showed a robot that could take boxes and stack them on a conveyor belt.

Huang: I do. Let me break it down into two. In one case, the language case, the cognitive AI case, of course we're advancing the cognitive capability of the AI, the basic capability. It has to be multimodal. It has to be able to do its own reasoning and so on. But the second part is applying that technology into an AI system. AI is not a model. It's a system of models. Agentic AI is an integration of a system of models. There's a model for retrieval, for search, for generating images, for reasoning. It's a system of models.

The last couple of years, the industry has been innovating along the applied path, not only the fundamental AI path. The fundamental AI path is for multimodality, for reasoning and so on. Meanwhile, there is a hole, a missing thing that's necessary for the industry to accelerate its progress. That's physical AI. Physical AI needs the same concept of a foundation model, just as cognitive AI needed a classic foundation model. GPT-3 was the first foundation model that reached a level of capability that started off a whole bunch of capabilities.
We have to reach a foundation model capability for physical AI. That's why we're working on Cosmos: so we can reach that level of capability, put that model out in the world, and then all of a sudden a bunch of end use cases will start, downstream tasks, downstream skills that are activated as a result of having a foundation model. That foundation model could also be a teaching model, as we were talking about earlier. That foundation model is the reason we built Cosmos.

The second thing that is missing in the world is the work we're doing with Omniverse and Cosmos to connect the two systems together, so that it's physics-conditioned, physics-grounded, and we can use that grounding to control the generative process. What comes out of Cosmos is highly plausible, not just highly hallucinated. Cosmos plus Omniverse is the missing initial starting point for what is likely going to be a very large robotics industry in the future. That's the reason why we built it.

Question: How concerned are you about trade and tariffs and what that possibly represents for everyone?

Huang: I'm not concerned about it. I trust that the administration will make the right moves for their trade negotiations. Whatever settles out, we'll do the best we can to help our customers and the market.

[Follow-up question inaudible.]

Nvidia Nemotron model families.

Huang: We only work on things if the market needs us to, if there's a hole in the market that needs to be filled and we're destined to fill it. We'll tend to work on things that are far in advance of the market, where if we don't do something it won't get done. That's the Nvidia psychology. Don't do what other people do. We're not market caretakers. We're market makers. We tend not to go into a market that already exists and take our share. That's just not the psychology of our company. The psychology of our company is to find a market that doesn't exist. For example, there's no such thing as DIGITS in the world.
If we don't build DIGITS, no one in the world will build DIGITS. The software stack is too complicated. The computing capabilities are too significant. Unless we do it, nobody is going to do it. If we didn't advance neural graphics, nobody would have done it. We had to do it. We'll tend to do that.

Question: Do you think the way that AI is growing at this moment is sustainable?

Huang: Yes. There are no physical limits that I know of. As you know, one of the reasons we're able to advance AI capabilities so rapidly is that we have the ability to build and integrate our CPU, GPU, NVLink, networking, and all the software and systems at the same time. If that had to be done by 20 different companies and we had to integrate it all together, the timing would take too long. When we have everything integrated and software supported, we can advance that system very quickly. With Hopper, from H100 and H200 to the next and the next, we're going to be able to move every single year.

The second thing is, because we're able to optimize across the entire system, the performance we can achieve is much more than transistors alone would allow. Moore's Law has slowed. Transistor performance is not increasing that much from generation to generation. But our systems overall have increased in performance tremendously year over year. There's no physical limit that I know of.

There are 72 Blackwell chips on this wafer.

As we advance our computing, the models will keep on advancing. If we increase the computation capability, researchers can train larger models with more data. We can also increase their computing capability for the second scaling law: reinforcement learning and synthetic data generation. That's going to continue to scale. The third scaling law is test-time scaling: if we keep advancing the computing capability, the cost will keep coming down, and the scaling law of that will continue to grow as well. We have three scaling laws now. We have mountains of data we can process.
I don't see any physics reasons that we can't continue to advance computing. AI is going to progress very quickly.

Question: Will Nvidia still be building a new headquarters in Taiwan?

Huang: We have a lot of employees in Taiwan, and the building is too small. I have to find a solution for that. I may announce something at Computex. We're shopping for real estate. We work with MediaTek across several different areas. One of them is autonomous vehicles. We work with them so that together we can offer a fully software-defined and computerized car for the industry. Our collaboration with the automotive industry is very good.

With Grace Blackwell, the GB10, the Grace CPU is a collaboration with MediaTek. We architected it together. We put some Nvidia technology into MediaTek's design so we could have NVLink chip-to-chip. They designed the chip with us and they designed the chip for us. They did an excellent job. The silicon was perfect the first time. The performance is excellent. As you can imagine, MediaTek's reputation for very low power is absolutely deserved. We're delighted to work with them. The partnership is excellent. They're an excellent company.

Question: What advice would you give to students looking forward to the future?

A wafer full of Nvidia Blackwell chips.

Huang: My generation was the first generation that had to learn how to use computers to do its field of science. The generation before only used calculators and paper and pencils. My generation had to learn how to use computers to write software, to design chips, to simulate physics. My generation was the generation that used computers to do our jobs.

The next generation is the generation that will learn how to use AI to do their jobs. AI is the new computer. In every important field of science, the question in the future will be, "How will I use AI to help me do biology?" Or forestry or agriculture or chemistry or quantum physics. Every field of science. And of course there's still computer science.
How will I use AI to help advance AI? Every single field. Supply chain management. Operational research. How will I use AI to advance operational research? If you want to be a reporter: how will I use AI to help me be a better reporter?

How AI gets smarter

Every student in the future will have to learn how to use AI, just as the current generation had to learn how to use computers. That's the fundamental difference. That shows you very quickly how profound the AI revolution is. This is not just about a large language model. Those are very important, but AI will be part of everything in the future. It's the most transformative technology we've ever known. It's advancing incredibly fast.

For all of the gamers and the gaming industry, I appreciate that the industry is as excited as we are now. In the beginning we were using GPUs to advance AI, and now we're using AI to advance computer graphics. The work we did with RTX Blackwell and DLSS 4 is all because of the advances in AI. Now it's come back to advance graphics.

If you look at the Moore's Law curve of computer graphics, it was actually slowing down. AI came in and supercharged the curve. The framerates are now 200, 300, 400, and the images are completely ray-traced. They're beautiful. We have gone into an exponential curve of computer graphics. We've gone into an exponential curve in almost every field. That's why I think our industry is going to change very quickly, but every industry is going to change very quickly, very soon.
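The teacher-and-student distillation Huang describes (a large foundation model transferring its knowledge into a much smaller, more focused model) is commonly implemented as a loss that pushes the student's output distribution toward the teacher's. The sketch below is a generic, minimal version of that idea, not Nvidia's actual Cosmos distillation pipeline; the function names and the choice of temperature are illustrative.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits; higher T softens it."""
    z = [x / temperature for x in logits]
    m = max(z)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) between temperature-softened distributions.

    The student is trained to match the teacher's full output
    distribution rather than just its top prediction; this is the
    teacher-and-student knowledge transfer in its simplest form.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))
    return kl * temperature ** 2  # conventional T^2 scaling of the gradient

# A student that already matches the teacher incurs (near) zero loss;
# any mismatch makes the KL term strictly positive.
teacher = [4.0, 1.0, 0.5]
assert abs(distillation_loss(teacher, list(teacher))) < 1e-12
```

In practice this term is minimized with gradient descent over the student's parameters, often mixed with an ordinary cross-entropy loss on ground-truth labels; the point of the softened distribution is that the teacher's relative preferences among wrong answers carry information the student can learn from.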
  • Report: Godfall dev Counterplay Games has quietly closed down
    www.gamedeveloper.com
Justin Carter, Contributing Editor. January 13, 2025. 1 min read. Image via Counterplay Games/Gearbox Publishing.

At a glance: Counterplay was reportedly co-developing an unannounced project with Jackalyptic Games; it was "disbanded" sometime last year.

A developer at Jackalyptic Games claims Godfall creator Counterplay Games closed its doors over the holiday break.

PlayStation Lifestyle spotted a LinkedIn post from an anonymous employee, who revealed the two studios were at work on an unannounced project. This title was said to be "supercharged" by Counterplay's contribution, and they said it was "impossible to overstate [the team's] impact. From the very first day, that team put their shoulders to the wheel like it was their baby."

"Unfortunately," the Jackalyptic developer continued, "we were unable to continue our partnership into the new year, and CPG was disbanded." Game Developer can corroborate the post's existence; it was later edited to remove the mention of the studio's closure.

Godfall was Counterplay's last major release, originally a PlayStation 5 and PC title before it was ported to other consoles. The following year, the action-looter game received a single expansion, Fire and Darkness. Prior to that game, the studio made Duelyst, which went offline in 2020. Last March, Counterplay reportedly laid off several staff members.

At time of writing, Godfall is still purchasable on platforms like Steam and the Epic Games Store. Game Developer can confirm Counterplay's website is still active, but it lists no job openings and makes no mention of future projects.

The studio's alleged closure comes following last week's shutdowns of Toadman Interactive and Freejam. The former was a support studio for Warhammer: Vermintide, and the latter made the Robocraft series.
Jar of Sparks, an independent studio, also ceased operations last week as its leaders look for a partner to fund its debut project. Similarly, layoffs have recently befallen studios like Rocksteady, Splash Damage, and Piranha Games.
  • Mercedes-Benz's Virtual Assistant uses Google's conversational AI agent
    www.theverge.com
    Google Cloud's new Automotive AI Agent platform promises to continue conversations and reference information throughout users' drives, and the first car announced with it is the new Mercedes CLA. That car has the next-generation MB.OS operating system with an upgraded MBUX Virtual Assistant. When Mercedes revealed it at CES in 2024, it didn't say which company's LLM it was running on. Meanwhile, the existing MBUX Voice Assistant system, which could handle about 20 commands triggered with "Hey Mercedes," now includes results provided by OpenAI's ChatGPT and Microsoft Bing, but it's not a conversational platform. According to Mercedes, there's a plan to roll out this upgraded system to further models that run the older Voice Assistant, but it didn't specify which ones.

The new MBUX Virtual Assistant will feature four personality traits: natural, predictive, personal, and empathetic. It can also ask you questions for additional clarity to get you what you need.

Google's new AI agent is tailor-made for automotive uses, leveraging Google Maps data to find points of interest, look up restaurant reviews for you, give you recommendations, answer follow-up questions, and more. Google says MBUX Virtual Assistant users will get access to near-real-time Google Maps updates. It also says it can handle complex, multi-turn dialogue.

The agent uses Gemini and runs on Google Cloud's Vertex AI development platform, designed to help companies build out AI experiences. "This is just the beginning of how agentic capabilities can transform the automotive industry," Google CEO Sundar Pichai stated in a press release.
  • Meet Search-o1: An AI Framework that Integrates the Agentic Search Workflow into the o1-like Reasoning Process of LRMs for Achieving Autonomous Knowledge Supplementation
    www.marktechpost.com
    Large reasoning models are developed to solve difficult problems by breaking them down into smaller, manageable steps and solving each step individually. The models use reinforcement learning to enhance their reasoning abilities and develop very detailed and logical solutions. While this method is effective, it has its challenges: the extended reasoning process can lead to overthinking, and errors arise when knowledge is missing or insufficient. Gaps in understanding may disrupt the entire reasoning chain, making it harder to arrive at accurate conclusions.

Traditional approaches to large reasoning models aim to enhance performance by increasing model size or expanding training data during the training phase. While test-time scaling shows potential, current approaches rely heavily on static, parameterized models that cannot utilize external knowledge when internal understanding is insufficient. Techniques like policy-reward combinations with Monte Carlo Tree Search, deliberate error integration, and data distillation improve reasoning but fail to fully internalize or adapt reasoning abilities. Retrieval-augmented generation (RAG) systems address some limitations by incorporating external knowledge retrieval but struggle to integrate the strong reasoning capabilities seen in advanced models. These gaps limit the ability to solve complex, knowledge-intensive tasks effectively.

To address multi-step reasoning tasks that require external knowledge, researchers from Renmin University of China and Tsinghua University proposed the Search-o1 framework. The framework integrates task instructions, questions, and dynamically retrieved knowledge documents into a coherent reasoning chain to derive logical solutions and answers. Unlike traditional models that struggle with missing knowledge, Search-o1 extends the retrieval-augmented generation mechanism with a Reason-in-Documents module.
This module condenses lengthy retrieved documents into precise reasoning steps, ensuring a logical flow. The iterative process continues until a complete reasoning chain and final answer are formed.

The framework was compared with vanilla reasoning and basic retrieval-augmented methods. Vanilla reasoning often fails when knowledge gaps arise, while basic retrieval-augmented methods retrieve overly detailed and redundant documents that disrupt reasoning coherence. Search-o1 avoids both problems by generating search queries on the fly whenever they are needed, retrieving documents, and transforming them into clear, relevant reasoning steps. The agentic retrieval mechanism ensures knowledge is integrated only when required, and the Reason-in-Documents module keeps the reasoning chain coherent, accurate, and stable.

Researchers evaluated the framework on two categories of tasks: challenging reasoning tasks and open-domain question-answering (QA) tasks. The challenging reasoning tasks included GPQA, a PhD-level science multiple-choice QA dataset; mathematical benchmarks such as MATH500, AMC2023, and AIME2024; and LiveCodeBench to assess coding capabilities. The open-domain QA tasks were tested using datasets like Natural Questions (NQ), TriviaQA, HotpotQA, 2WikiMultihopQA, MuSiQue, and Bamboogle. The evaluation involved comparisons with baseline methods, including direct reasoning approaches, retrieval-augmented reasoning, and the proposed Search-o1 framework. Tests were conducted under varying conditions using a consistent setup, with the QwQ-32B-Preview model as the backbone and the Bing Web Search API for retrieval.

Results showed that QwQ-32B-Preview excelled across reasoning tasks, surpassing larger models like Qwen2.5-72B and Llama3.3-70B. Search-o1 outperformed retrieval-augmented approaches like RAgent-QwQ-32B, with notable gains in coherence and knowledge integration.
For example, on average, Search-o1 exceeded RAgent-QwQ-32B and QwQ-32B by 4.7% and 3.1%, respectively, and achieved a 44.7% improvement over smaller models like Qwen2.5-32B. Comparisons with human experts on the GPQA extended set revealed Search-o1's superiority in integrating reasoning strategies, particularly in science-related tasks.

In conclusion, the proposed framework addressed the problem of knowledge inadequacy in large reasoning models by combining retrieval-augmented generation with a Reason-in-Documents module to allow better use of external knowledge. The framework can serve as a baseline for future research to enhance retrieval systems, document analysis, and intelligent problem-solving across complex domains.

Check out the Paper and GitHub Page.
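The loop described above (reason, emit a search query when knowledge runs out, condense the retrieved documents, and continue until a final answer) can be sketched in a few lines. This is a minimal illustration under assumptions, not the paper's implementation: `generate`, `search`, and `condense` are hypothetical stand-ins for the reasoning LLM, the web-search API, and the Reason-in-Documents module, and the special token strings are invented for the sketch.

```python
# Illustrative special tokens; the real framework defines its own markers.
SEARCH_OPEN, SEARCH_CLOSE = "<|begin_search|>", "<|end_search|>"
ANSWER_TAG = "<|answer|>"

def search_o1(question, generate, search, condense, max_turns=5):
    """Interleave reasoning with on-demand retrieval, Search-o1 style.

    Whenever the model emits a query between SEARCH_OPEN and SEARCH_CLOSE,
    retrieve documents, condense them into concise reasoning steps (the
    Reason-in-Documents role), append them to the chain, and continue.
    Stop when the model emits a final answer or the turn budget runs out.
    """
    chain = question
    for _ in range(max_turns):
        step = generate(chain)          # one segment of chain-of-thought
        chain += step
        if ANSWER_TAG in step:          # reasoning chain is complete
            return step.split(ANSWER_TAG, 1)[1].strip()
        if SEARCH_OPEN in step and SEARCH_CLOSE in step:
            query = step.split(SEARCH_OPEN, 1)[1].split(SEARCH_CLOSE, 1)[0]
            docs = search(query)                    # raw retrieved documents
            chain += condense(docs, query, chain)   # condensed knowledge step
    return None  # no answer within the turn budget
```

The key design point the article highlights is that `condense` sits between retrieval and reasoning: raw documents never enter the chain directly, which is what keeps redundant or verbose retrieval from derailing the reasoning, in contrast to basic RAG.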
  • Dynasty Warriors: Origins - Here's What Comes in Each Edition
    www.ign.com
    Dynasty Warriors: Origins comes out for PS5, Xbox Series X|S, and PC on January 14, but only if you buy the more expensive digital deluxe edition. The standard edition is out January 17 (see it at Best Buy, where you'll also get a free $10 gift card with preorder). The latest in a series that dates back to the 1990s, Dynasty Warriors: Origins is a good starting point for newcomers, because it effectively reboots the series. It's also terrific; see our 9/10 Dynasty Warriors: Origins review for details. The game is available to preorder now in two editions, complete with different preorder bonuses. Read on for the breakdown of what comes with each edition.

Dynasty Warriors: Origins - Standard Edition ($69.99)

Out January 17. Free $10 gift card included with preorder at Best Buy.

PS5: Amazon, Best Buy (free $10 gift card), GameStop, Target, Walmart, or the PS Store (digital) - $69.99
Xbox Series X|S: Amazon, Best Buy (free $10 gift card), GameStop, Target, Walmart, or the Xbox Store (digital) - $69.99
PC: Steam - $69.99

Preorder the standard edition, and you'll receive the game itself, plus the preorder bonus DLC costumes (see below).

Dynasty Warriors: Origins - Digital Deluxe Edition ($89.99)

PS5: PS Store. Xbox Series X|S: Best Buy, GameStop, Walmart, or the Xbox Store. PC: Steam.

The digital deluxe edition gets you the following:

72 hours of early access (starting January 14)

Official Book & Original Soundtrack (Digital Edition): The Official Book & Original Soundtrack can be accessed in game.
The Official Book is full of original illustrations of major events in the Three Kingdoms period, along with never-before-seen information about characters and charts depicting their relationships to one another, all centered around the grand story of the new Dynasty Warriors, told from the perspective of a single protagonist. The Original Soundtrack includes the 20 original tracks that have been arranged for this title.

Letters: Letters provide gold for buying things like weapons and portable items, and pyroxene for creating gems. When you select "New Game" and play through the story, letters will be delivered to you at inns. The letters that provide pyroxene will only be delivered after you reach the point in the game where it becomes possible to create gems.

Dynasty Warriors: Origins Preorder Bonuses

The preorder bonus situation for Dynasty Warriors: Origins is a bit complicated, so bear with me here. You get different stuff depending on which edition you preorder, and whether you preorder a physical or digital copy. Here's the breakdown:

Preorder a physical copy of the game, and you'll receive the following protagonist DLC costumes: "Garb of the Azure Bird," "Garb of the Crimson Bird," "Garb of the Emerald Bird," and "Garb of the Violet Bird."

Preorder a digital copy of the standard edition, and you'll get the protagonist's costume "Nameless Warrior Garb," a Wo Long: Fallen Dynasty collaboration costume that can be worn in the game.

Preorder the digital deluxe edition, and you'll get the following:

72 hours of early access (starting January 14)

Early Works Soundtrack Collection (Digital Edition): A soundtrack featuring a total of 191 original music tracks from the series, including background music from titles spanning from Dynasty Warriors 2 to Dynasty Warriors 5 Empires, as well as Omega Force's first title, Dynasty Warriors.
The soundtrack can be played by accessing the main menu, then Special Content, then Music, and selecting "Early Works Soundtrack Collection."

Protagonist's costume "Nameless Warrior Garb," a Wo Long: Fallen Dynasty collaboration costume that can be worn in the game.

Additionally, Best Buy is offering a preorder bonus of its own: a $10 digital gift card. Phew.

Dynasty Warriors: Origins Demo

Not sure if you want to put your money down for this one? You can try a demo of the game for free.

What Is Dynasty Warriors: Origins?

Dynasty Warriors: Origins is a hack-and-slash game that features a new nameless hero protagonist, who is meant to act as an on-ramp for anyone who's never tried the series before. Billed as having the most exhilarating action in the series' history, Dynasty Warriors: Origins also harnesses the power of modern gaming hardware to pit your character against the most onscreen enemies ever.

And if you want to know why it's so good: in IGN's Dynasty Warriors: Origins review, critic Jada Griffin wrote:

"If Dynasty Warriors: Origins is meant to be a new beginning, it's one that gets off to a masterful start. It doesn't just have the largest amount of enemies the series has thrown on screen at once, it also deepens its combat, improves its storytelling without getting in the way of the action, and provides a healthy amount of replayability and postgame content, all while looking better than ever. Your amnesiac hero is a bit too much of a blank slate at times, but the impressive ensemble cast made this story sing as I grew to care about the characters around him, big and small. Origins is both a great entry point for newcomers and a triumphant return for veterans like me who felt the last few entries had become stale or missed the mark. It feels like the series I once loved is finally back."
  • The Biggest Movie Sequels and Prequels Coming in 2025
    www.denofgeek.com
    Electric Boogaloo. Die Harder. 2 Fast 2 Furious. Movie continuations are a wonderful thing, if only because they give us these wonderful titles. Sadly, 2025 doesn't offer anything quite so iconic in the way of nomenclature, but it does feature plenty of sequels and prequels that check in on our favorite movie characters. Whether it's Renée Zellweger's Bridget Jones back for another awkward outing or John Kramer continuing his unique form of self-help, 2025 is full of continuing stories and first chapters.

What started as a modern update on Pride and Prejudice has taken on a life of its own, as the never-not-nervous Bridget Jones continues to make the worst possible decisions. Nine years after the third installment, Bridget Jones's Baby, Renée Zellweger's beloved character now lives as a widowed mother of two and finds herself once again courted by two suitors. This time, the potentials include the much younger but also incredibly handsome Roxster (Leo Woodall) and the more age-appropriate and also incredibly handsome Mr. Wallaker (Chiwetel Ejiofor). Of course, her old interests Daniel Cleaver (Hugh Grant) and Mark Darcy (Colin Firth) are still around, although the latter in the form of a ghost.

Paddington in Peru (Feb 14)

Paddington the Bear taught us that if we're kind and polite, the world will be right. Well, the world is certainly not right, so it's a good thing that Paddington's back with more words of comfort and wisdom. Paddington in Peru sends Paddington (voiced with warmth and kindness by Ben Whishaw) and his adoptive family, the Browns, to Peru, where he comes to the aid of his wise Aunt Lucy. Paul King, who directed Paddington's first two outings, sits out this adventure, and music video veteran Dougal Wilson steps in.
UK viewers, who got to see Paddington in Peru in November 2024, didn't mind the change, as it currently sits at 92% on Rotten Tomatoes.

Final Destination: Bloodlines (May 16)

Unlike most series on this list, time doesn't diminish the power of the Final Destination franchise. After all, Death never ages. And that's a good thing too, since it's been 14 years since the last entry, the excellent Final Destination 5. Even better, Final Destination: Bloodlines features one last appearance by Tony Todd as knowing mortician William Bludworth, before the actor's untimely death last year. Outside of that, we don't know much about Bloodlines. Zach Lipovsky and Adam Stein step in as directors, having earned a Daytime Emmy nomination for the Disney XD series Mech-X4. Stargirl's Brec Bassinger takes the lead, but the real star of a Final Destination movie is always the invisible presence of Death and the outlandish ways it kills people.

Mission: Impossible - The Final Reckoning (May 23)

"Our lives are not defined by any one action," intones IMF computer whiz Luther (Ving Rhames) at the start of the trailer for Mission: Impossible - The Final Reckoning. It's a good thing that the Mission: Impossible movies have lots and lots of actions to choose from. Still, it's clear that The Final Reckoning intends to put a definitive note on the series. Originally titled Mission: Impossible - Dead Reckoning Part Two, The Final Reckoning isn't just about Ethan Hunt (Tom Cruise) facing off against an evil AI called the Entity. It also serves as a summation of Hunt's career, alongside old friends Luther and Benji (Simon Pegg), as well as new addition Grace (Hayley Atwell).

Karate Kid: Legends (May 30)

We know the reason that studios like legacy sequels, as the name recognition all but guarantees an audience.
At the same time, legacy sequels sometimes obscure the fact that, if a concept is good enough, people will show up regardless of who's involved. That's certainly the case with Karate Kid: Legends. Yes, the legacy sequel pairs original kid Daniel LaRusso (Ralph Macchio), most recently seen on Cobra Kai, with Jackie Chan as Mr. Han, the teacher in the 2010 reboot movie The Karate Kid. But the real appeal is always going to be the story of a youngster learning how to do cool martial arts, in this case Li Fong (Ben Wang).

The Accountant 2 (April 25)

Lots of movies have tried to present autism as some sort of superpower, but few have pulled it off like The Accountant (2016). Directed by Gavin O'Connor, The Accountant stars Ben Affleck as a man whose tendencies make him excellent as an accountant for criminal organizations, until he's forced to work for the Treasury Department by Director King (J.K. Simmons). The film has become something of a cult favorite since its initial release, leading the way to The Accountant 2, once again directed by O'Connor and starring Affleck. The Accountant 2 finds Affleck's Christian Wolff teaming with his brother Brax (Jon Bernthal) to investigate the death of a friend. Anna Kendrick, who played a woman in peril in the first film, does not return; she is replaced by Daniella Pineda.

From the World of John Wick: Ballerina (June 6)

Half the fun of the John Wick franchise has been the slow revelation of the byzantine assassin network that gets fleshed out with each film in the series. At least, that's what Lionsgate is counting on, given that they can't really keep making movies about Keanu Reeves's dog-lover/hired killer. From the World of John Wick: Ballerina (yes, that is the official title and therefore it must always be used) switches gears from Wick to Eve Macarro (Ana de Armas), a dancer learning the ways of the Ruska Roma killers.
Because it's set between Chapters 3 and 4 of John Wick, Reeves will appear as Wick, as will Ian McShane as Winston and the late, great Lance Reddick as Charon.

28 Years Later (June 20)

28 Years Later ranks high among the most anticipated movies on this list, in part because of the delay between its release and the original 28 Days Later in 2002 (no disrespect to the solid but less effective 2007 sequel 28 Weeks Later, directed by Juan Carlos Fresnadillo). It also ranks high because 28 Years Later marks a new collaboration between director Danny Boyle and screenwriter Alex Garland, the latter of whom had moved on to making his own features, such as last year's Civil War.

That said, one person who won't be returning for the sequel is Cillian Murphy, contrary to initial reports. Instead, 28 Years Later will follow a man played by Aaron Taylor-Johnson, who must leave his peaceful island home and brave the Rage Virus-infested mainland. Along the way, he'll meet fellow survivors played by Ralph Fiennes and Jodie Comer.

M3GAN 2.0 (June 27)

At this point, we still don't know anything about the plot of M3GAN 2.0, Blumhouse's follow-up to the surprise hit about an AI doll that gets a little murder-y when protecting orphan Cady (Violet McGraw), even if it means killing Cady's aunt and M3GAN's creator Gemma (Allison Williams).

Still, Blumhouse seems determined to stick to the formula of the first movie. Gerard Johnstone and Akela Cooper return to direct and write, respectively, and the addition of Jemaine Clement to the cast suggests that they'll keep the comedic tone. We'll also get a focus on practical effects, with dancer Amie Donald giving M3GAN her uncanny body language, while YouTuber Jenna Davis is back as the doll's voice.

Jurassic World: Rebirth (July 2)

Like Karate Kid: Legends, Jurassic World: Rebirth seems like a legacy sequel that misses the primary appeal of its franchise. If you make a cool dinosaur movie, we'll probably go and see it.
Dinosaurs are awesome (it does, however, have to be a cool dinosaur movie, something the makers of 65 forgot).

Jurassic World: Rebirth will certainly have cool dinosaurs, especially since it comes from director Gareth Edwards. Edwards got his start on the impressive-looking indie Monsters, which got him a gig directing the MonsterVerse Godzilla. However, it also has big-name stars such as Scarlett Johansson and Mahershala Ali, and it builds off the events of Jurassic World: Dominion, all of which might distract from the joy of just seeing thunder lizards on screen.

I Know What You Did Last Summer (July 18)

Okay, this one might not actually fit on this list, because we're not 100% sure whether I Know What You Did Last Summer is a sequel or a reboot. Heck, we're not even completely sure if the movie's going to be called I Know What You Did Last Summer. Initial reports suggested that the film would be a reboot/remake, with Jennifer Kaytin Robinson directing the story of teenagers, led by Madelyn Cline of Glass Onion, hunted by a killer who knows their dirty secret. But now, original stars Jennifer Love Hewitt and Freddie Prinze Jr. have joined the cast, which indicates a continuation of the first film, which already had multiple sequels. Just how many yellow-coated stragglers did these people kill?

Based on the kids' book by Aaron Blabey, The Bad Guys was an unexpected family hit for DreamWorks, thanks to its slick presentation and evergreen story about villains doing the right thing. That's enough to justify a sequel, coming later this summer.

The Bad Guys 2 reunites Mr. Wolf (Sam Rockwell), Mr. Snake (Marc Maron), Mr. Shark (Craig Robinson), Mr. Piranha (Anthony Ramos), and Ms. Tarantula (Awkwafina). This time, the team meets its match in the Bad Girls, a rival team voiced by Danielle Brooks, Natasha Lyonne, and Maria Bakalova.

Freakier Friday (August 8)

Jamie Lee Curtis takes a break from reprising her role as Laurie Strode to reprise a very different beloved character, Tess Coleman.
The mother of Anna Coleman (Lindsay Lohan), Tess got to know her daughter really well when the two of them swapped bodies in the 2003 comedy Freaky Friday.

Late Night and The High Note director Nisha Ganatra checks in on the Colemans 20 years later, when they swap bodies once again. Most of the original movie's cast returns, including Mark Harmon as Tess's husband and Chad Michael Murray as Anna's (ex?) boyfriend Jake. They'll be joined by Julia Butters (best known as the littlest acting judge in Once Upon a Time in Hollywood) and Manny Jacinto from The Good Place and The Acolyte.

Nobody 2 (August 15)

The strange late career of Bob Odenkirk continues with action movie Nobody 2. Unlike his slick loser Saul Goodman or his little-women-hugging dad, Odenkirk's Nobody character Hutch Mansell is a former government assassin who just wants a normal life in the suburbs, but must take up killing after a drug lord threatens his family.

Derek Kolstad, co-creator of John Wick and writer of the first film, returns as one of the writers for Nobody 2, which puts Hutch on a new mission. Indonesian action director Timo Tjahjanto takes over for this adventure, which adds Chris Pine, McKenna Grace, and John Ortiz to the cast.

Initially, it looked like Insidious: The Red Door would close the Insidious franchise in 2023. But that movie turned a profit that Blumhouse couldn't ignore, resulting in a return visit to the supernatural world that James Wan and Leigh Whannell created with Insidious in 2010.

As its title suggests, Thread: An Insidious Tale won't focus on the Lambert family nor on investigator Elise Rainier. Instead, Mandy Moore and Kumail Nanjiani will play new characters, in a film written and directed by Jeremy Slater, writer of Fantastic Four (2015) and Godzilla x Kong: The New Empire.

The Conjuring: Last Rites (September 5)

Insidious isn't the only James Wan movie to get a sequel in 2025.
The Conjuring universe continues with a fourth entry in the mainline series, The Conjuring: Last Rites.

Patrick Wilson and Vera Farmiga are back as real-world investigators Ed and Lorraine Warren. Wan has stated that Last Rites will put an end to the duo's adventures, which may or may not mean that the movie's adapting one of the final cases of the actual Warrens. Michael Chaves, who made lesser Conjuring universe entries The Curse of La Llorona and The Nun II, as well as The Conjuring: The Devil Made Me Do It, returns to direct.

Downton Abbey 3 (September 12)

The Downton Abbey movies from 2019 and 2022 felt like delightful bonuses, one last chance to visit the Crawleys and the people who work in the titular house. 2022's Downton Abbey: A New Era felt like the end of the story, with matriarch Lady Violet dying at the end of the film, before actor Maggie Smith's own passing in 2024.

But this fall sees a new Downton Abbey film, one that brings back creator Julian Fellowes and director Simon Curtis, as well as the principal cast, including Hugh Bonneville and Elizabeth McGovern as Robert and Cora Crawley, Michelle Dockery and Laura Carmichael as their daughters Mary and Edith, and Jim Carter and Phyllis Logan as chief servants Mr. Carson and Mrs. Hughes. Paul Giamatti returns as Cora's brother Harold Levinson.

Saw XI (September 26)

Another month, another sequel to a James Wan and Leigh Whannell movie! When Saw X proved an unlikely hit in 2023, production began on Saw XI, with Kevin Greutert directing and Patrick Melton and Marcus Dunstan writing.

Yet we don't have any idea what Saw XI will be about, or even when it will take place. Before you get snotty and point out that Saw movies are just about gory kills, keep in mind that the series has an insanely layered narrative, one that keeps bringing Tobin Bell back, even though his character John Kramer died at the end of the third film. Will Saw XI be another prequel, like its immediate predecessor?
Or will we keep moving forward with new revelations? As long as Chris Rock's not involved, everyone will certainly be happy.

Tron: Ares (October 10)

Tron and Tron: Legacy have a fairly straightforward premise: a human goes inside of a computer and discovers a world ruled by cool neon costumes. Tron: Ares seeks to flip the series on its head with an inverted story.

Directed by Joachim Rønning and written by Jesse Wigutow and Jack Thorne, Tron: Ares stars Jared Leto as Ares, a program who leaves the system and joins the real world. The reversal gives the film a chance to look at the way AI has bled into modern society, and hopefully it won't lose any of the franchise's distinctive visuals.

The Black Phone 2 (October 17)

Extraneous and unlikely sequels are nothing new to the horror genre (see: the Saw XI entry above). But we've been scratching our heads ever since director Scott Derrickson announced The Black Phone 2. After all, the 2021 original (co-written by Derrickson and C. Robert Cargill and based on a Joe Hill story) ended kind of definitively. Finney (Mason Thames) escaped the basement of the Grabber (Ethan Hawke), thanks to his psychic sister (Madeleine McGraw) and a phone that allowed him to communicate with past victims.

How the heck is Finney going to use another black phone? You know what, as long as Derrickson directs The Black Phone 2 with as much style as he did the original, and as long as we get more fantastic performances from the cast, we can probably stop worrying about the plot contrivances, just like we do for every other horror sequel.

Mortal Kombat 2 (October 24)

Here's the thing about the 2021 Mortal Kombat movie directed by Simon McQuoid: it had a lot of good things going for it, including cool takes on classic characters Kano (Josh Lawson), Scorpion (Hiroyuki Sanada), and Sub-Zero (Joe Taslim).
And it had a compelling lead in new character Cole Young (Lewis Tan).

But it didn't actually feature the titular Mortal Kombat tournament, which is the whole point of the story. So hopefully Mortal Kombat 2, once again directed by McQuoid but now written by Jeremy Slater, can rectify the problem. They're off to a good start with the addition of Karl Urban as Johnny Cage. If we can get some justice for Goro, too, then we might have the first good Mortal Kombat movie on our hands.

Predator: Badlands (November 7)

Director Dan Trachtenberg shocked everyone with Prey, a Predator sequel that pitted a Comanche woman against an alien hunter in 1719. Hopefully, Trachtenberg can do the same with the follow-up Predator: Badlands.

Badlands leaps from the past to the far future, where Elle Fanning plays twin sisters trying to survive an apocalyptic wasteland. Early reports have suggested that the wasteland serves as a training ground for Predators, which sounds an awful lot like 2010's underrated Predators. But it's hard to believe that Trachtenberg doesn't have something more unexpected in mind for this latest entry.

Now You See Me 3 (November 14)

With its third entry, Now You See Me is establishing itself as the mid-level blockbuster franchise that we rarely get these days, something that everyone sees, everyone enjoys, and then everyone seems to forget about, at least until the next one comes around.

Now You See Me 3 brings in Zombieland director Ruben Fleischer as the latest slick filmmaker to drive the series, taking over from Louis Leterrier on the first one and Jon M. Chu on the sequel. Fleischer reunites with mainstays Jesse Eisenberg and Woody Harrelson who, along with Mark Ruffalo, Isla Fisher, and Morgan Freeman, play magicians who pull off unlikely heists.

Wicked: For Good (November 21)

Speaking of Jon M. Chu, what's he up to these days? Oh yeah, he made the third-biggest movie of 2024, Wicked.
And he's back for the second part of the Broadway musical, now titled Wicked: For Good.

Wicked: For Good picks up after Elphaba's (Cynthia Erivo) gravity-defying refusal to help the duplicitous Madame Morrible (Michelle Yeoh) and the Wizard (Jeff Goldblum as himself), leaving behind her popular friend Galinda (Ariana Grande). Musical fans have warned that the second act of Wicked pales in comparison to the first, but Chu's convinced he's got enough tricks to make For Good just as good as its predecessor.

Zootopia 2 (November 26)

Honestly, it feels a bit like tempting fate for Disney to return to the world of Zootopia. The first movie's colorful visuals and excellent voice acting distracted from a creaky central metaphor that falls apart under the slightest scrutiny.

Still, there's no denying that Ginnifer Goodwin and Jason Bateman created two of Disney's best new characters, so we can't get too grouchy about them returning for a second go. This time, they'll be joined by Ke Huy Quan, the latest quarry for Judy Hopps and Nick Wilde.

Five Nights at Freddy's 2 (December 5)

To anyone over the age of 25, Five Nights at Freddy's seems like a pretty basic premise, one that was exhausted in the first movie. There's a scary pizza place, animatronics come to life, they spook the new watchman. We get it.

Anyone under the age of 25 can tell you that creator Scott Cawthon built a rich, spiraling mythology for the Five Nights at Freddy's games. Five Nights at Freddy's 2, which will feature Matthew Lillard again, is just bringing that mythology to life, adapting the second game in the long-running series.

The SpongeBob Movie: Search for SquarePants (December 19)

Given that The SpongeBob SquarePants Movie (2004) has one of the funniest moments in cinematic history (Goofy Goober Rock), it's hard to begrudge a fourth film in the franchise.

In this latest entry, SpongeBob (Tom Kenny) faces off with the mystical force known as the Flying Dutchman (Mark Hamill). What more do you need?
There will be lots of absurdity, Patrick Star (Bill Fagerbakke) will be dumb, Squidward (Rodger Bumpass) will be grouchy, and you'll still laugh like you did when you were six years old.

Avatar: Fire and Ash (December 19)

First, he gave you trees. Then, he gave you water. Now, Academy Award winner James Cameron gives you fire! Yes, it's easy to mock the Avatar series, which has been called "Dances With Smurfs" since its first outing in 2009. But as he proved then, and then again with 2022's Avatar: The Way of Water, and really throughout his career, no one does spectacle like James Cameron.

So we'll stop pretending we're too good for Avatar: Fire and Ash, and instead we'll get excited to see how Jake Sully (Sam Worthington), Neytiri (Zoe Saldaña), and Kiri (Sigourney Weaver) deal with Colonel Quaritch (Stephen Lang) this time. And we'll get even more excited about how Cameron will push cinema forward by finding new ways to capture the allure of fire.

Dirty Dancing 2 (TBA)

For decades, Dirty Dancing's just been sitting in the corner, languishing while an NPR host's script about the Cuban revolution got retrofitted and stripped of all character to become the disastrous Dirty Dancing: Havana Nights (seriously, it's a crazy story). Well, no more!

Jennifer Grey is back as Baby Houseman, the young woman who came out of her shell in 1963, thanks to teacher Johnny Castle (Patrick Swayze). Jonathan Levine, director of Long Shot and 50/50, will guide the new project, which is in production for Lionsgate.

The Strangers: Chapter 2 (TBA)

Among the many terrifying parts of The Strangers (2008) was the moment when a victim (Liv Tyler) asked the masked trio why they were torturing her and her boyfriend. "Because you were home," comes the answer, chilling for its lack of meaning.
Somehow, neither a sequel nor the reimagining The Strangers: Chapter 1 has ruined that moment by telling us too much about the killers.

And yet, despite the risk the series runs (and, honestly, despite the lukewarm response to The Strangers: Chapter 1), The Strangers: Chapter 2 continues siccing Scarecrow, Dollface, and Pin-Up Girl on unsuspecting victims. Veteran Renny Harlin is back to continue the story, as is the last film's survivor, played by Madelaine Petsch.

Happy Gilmore 2 (TBA)

Like the great Carl Weathers, who played him, Chubbs Peterson might be up in heaven with an alligator and Abraham Lincoln, but Happy Gilmore lives on. Adam Sandler returns to the character he last played in 1996 for Happy Gilmore 2, alongside Julie Bowen as his love interest Virginia and Christopher McDonald as the pompous Shooter McGavin.

Back in 1996, Sandler's absurd look at the world of professional golf seemed fresh and exciting. It will be interesting to see if Sandler, who's dulled his comic persona with endless middling comedies, can find the fire needed to play a hockey player turned golf pro.

Even the most cynical moviegoer, the type of person who grouches about the endless sequels and dearth of originality in Hollywood, has to like the Knives Out movies. After all, they're less continuing stories and more new mysteries each time, the sole connecting tissue being the Southern-fried detective Benoit Blanc, played with droll charisma by Daniel Craig.

As with the previous outings Knives Out and Glass Onion, Rian Johnson has assembled a fantastic cast to play the suspects in Blanc's latest whodunit. Newcomers include Josh O'Connor, Glenn Close, Josh Brolin, and more. And surely, Johnson's regulars Noah Segan and Joseph Gordon-Levitt will show up in some capacity. As long as Johnson keeps getting wonderful performances out of Craig, he's free to make these movies until the end of time.
  • Massive data breach exposes precise locations for users of popular apps
    9to5mac.com
A huge data breach involving Gravy Analytics appears to have exposed precise location data for millions of users of popular smartphone apps like Candy Crush, Tinder, MyFitnessPal, and more. Here's what you should know about the unfolding breach.

Gravy Analytics breach impacts users of many top smartphone apps

Gravy Analytics, a location data broker that holds data from millions of iPhone and Android users, has been hacked. Last week, a hacker claimed to have pulled off the breach, as was first reported by 404 Media. But now, data has started being released that confirms the assertion, and shows just how bad it is.

Millions of pieces of precise location data have been released, showing users' most-visited locations, such as their home, workplace, and more. This data reportedly originates in an ad bidding process called real-time bidding, which determines the ads that get shown to users. Zack Whittaker at TechCrunch explains:

During that near-instant auction, all of the bidding advertisers can see some information about your device, such as the maker and model type, its IP addresses (which can be used to infer a person's approximate location), and in some cases, more precise location data if granted by the app user, along with other technical factors that help determine which ad a user will be displayed. But as a byproduct of this process, any advertiser that bids, or anyone closely monitoring these auctions, can also access that trove of so-called bidstream data containing device information.
Data brokers, including those who sell to governments, can combine that collected information with other data about those individuals from other sources to paint a detailed picture of someone's life and whereabouts.

Gravy Analytics is one such data broker, and now its data has been breached and has begun leaking publicly online. Users of many popular ad-serving apps have been impacted. Joseph Cox at WIRED writes:

The list includes dating sites Tinder and Grindr; massive games such as Candy Crush, Temple Run, Subway Surfers, and Harry Potter: Puzzles & Spells; transit app Moovit; My Period Calendar & Tracker, a period-tracking app with more than 10 million downloads; popular fitness app MyFitnessPal; social network Tumblr; Yahoo's email client; Microsoft's 365 office app; and flight tracker Flightradar24. The list also mentions multiple religious-focused apps such as Muslim prayer and Christian Bible apps, various pregnancy trackers, and many VPN apps, which some users may download, ironically, in an attempt to protect their privacy.

You can find a full list that someone has compiled here.

Good news for iPhone users?

Information on the breach is still emerging, but there's one early sign of good news for iPhone users in particular. Baptiste Robert, CEO of digital security firm Predicta Lab, told TechCrunch that if you rejected an app's request to track you, your data has not been shared by that app. Robert's referring to the "Ask App Not to Track" permission prompt Apple has built into iOS.

In a post on X, Robert further encourages users to go to Settings > Privacy & Security > Tracking and disable apps from even being allowed to ask to track you.
You'll also see on that screen whether you've ever previously granted tracking permission. There's been no official statement from Apple to this point, but if Robert is correct, then far fewer iPhone users should be impacted by the Gravy Analytics breach as a result.

We'll keep you posted on key developments in the Gravy Analytics breach as more information is revealed.
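The profiling risk described in the article is easy to illustrate. The sketch below is a toy Python example, with an entirely hypothetical `infer_home` helper and made-up coordinates (not real breach data): given bidstream-style records of (timestamp, latitude, longitude) for one device, rounding coordinates into roughly kilometer-sized cells and taking the most frequent nighttime cell is enough to guess where the device's owner sleeps.

```python
from collections import Counter
from datetime import datetime

def infer_home(pings, night_hours=range(0, 6)):
    """Guess a device's home as its most frequent nighttime location.

    pings: list of (iso_timestamp, lat, lon) tuples.
    Coordinates are rounded to two decimal places (~1 km cells)
    so repeated visits to the same place cluster together.
    """
    cells = Counter(
        (round(lat, 2), round(lon, 2))
        for ts, lat, lon in pings
        if datetime.fromisoformat(ts).hour in night_hours
    )
    return cells.most_common(1)[0][0] if cells else None

# Hypothetical bidstream records for a single device.
pings = [
    ("2025-01-06T01:12:00", 38.9051, -77.0364),  # night ping near "home"
    ("2025-01-06T14:30:00", 38.8895, -77.0501),  # daytime ping, "workplace"
    ("2025-01-07T03:05:00", 38.9052, -77.0367),  # night ping near "home"
]

print(infer_home(pings))  # → (38.91, -77.04)
```

Real brokers combine far richer signals than this, but even the toy heuristic shows why raw location pings are sensitive on their own, with no other data sources needed.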
  • Tech Guy Doing Bizarre Things to Live Forever Says He Now Suffers From Endless Hunger
    futurism.com
Image by Patricia de Melo Moreira / AFP via Getty / Futurism

In his quest to cheat death, 47-year-old tech founder Bryan Johnson says he's become ravenous for something besides everlasting life. Johnson admitted during the course of filming "Don't Die: The Man Who Wants To Live Forever," a new Netflix documentary about his biohacking travails, that the strict diet he follows leaves him seriously craving more.

The Kernel and Braintree founder (who was also an early investor in Futurism, but has had no involvement with the site for years) follows an extreme diet that consists mostly of veggie bowls and fruity, nutty puddings. Back in 2023, when Johnson's bizarre anti-aging "blueprint protocol" started to hit the mainstream, social media users were startled to discover that he eats all his meals between 6 and 11 AM.

During an interaction with a curious visitor to his home in "Don't Die," Johnson was asked if he was "ever hungry." "I'm pretty hungry," the millionaire confessed. "The saddest part of my day is the last bite."

Though there's some evidence that short-term "fasting-mimicking diets" can reduce some signs of aging, recent research has found links between the practice and an increased risk of fatal cardiovascular disease, and Johnson's restrictive eating habits sometimes sound more like a very expensive eating disorder than anything else.

Nevertheless, Johnson insists in "Don't Die" that his diet (which accompanies the many other bizarre methods he uses to try to reduce his biological age back to 18, including siphoning his own son's blood into his body and injecting Botox into his penis) makes him feel amazing. "I have found more relief in demoting my mind and elevating my body than I have in my entire life," he explained.
"It feels so liberating to me because my entire life, I was desperate to be free from myself."As the New York Post reports, Johnson also divulged after filming that one of the 54 pills he takes every morning as he tries to turn back the hands of time may actually have been accelerating his aging.After boasting about the potential "longevity benefits" of rapamycin a cancer drug that was shown to have some anti-aging effects in mice trials that Johnson took for years despite it not being FDA-approved for such usage during the filming of "Don't Die," the tech guru admitted after the documentary wrapped that he now believes it was doing the opposite."Despite the immense potential from pre-clinical trials," he wrote in a post on X-formerly-Twitter in September, "my team and I came to the conclusion that the benefits of lifelong dosing of rapamycin do not justify the hefty side-effects."Among those adverse effects were, as Johnson expounded, "intermittent skin/soft tissue infections, lipid abnormalities, glucose elevations, and increased resting heart rate." Translation: something in his massive pharmaceutical cocktail was inducing physical effects that seem a lot like those that happen as humans age."With no other underlying causes identified," he continued in the lengthy post, "we suspected Rapamycin, and since dosage adjustments had no effect, we decided to discontinue it entirely."Between being hungry all the time and taking medications off-label that made him age faster, Johnson seems to believe that spending tons of money to reverse the natural processes of life is worthwhile. Far be it from us to criticize the way he chooses to spend his money and time, but that does seem like a great way to waste both wealth and middle age.More on Johnson's aging "hacks": Anti-Aging CEO Injects Face With Strange Treatment, Experiences Bizarre ReactionShare This Article
  • LatHire: UI/UX Designer
    weworkremotely.com
Time zones: EST (UTC -5), MST (UTC -7), ART (UTC -3), UTC -4, UTC -4:30, UTC -3, UTC -2

We are looking for a dynamic UI/UX designer who will be responsible for the user experience (UX) and user interface (UI) design of our various digital assets. You will ensure that all elements of the online user experience are optimized for improved usability, usefulness, and exceptional visual design.

The successful candidate will evidence a passion for delivering adaptive and creative solutions to UI/UX design problems by staying up to date with best practices and emerging trends in user experience design and user interface technology.

Job Responsibilities

- Investigating user experience design requirements for our suite of digital assets.
- Developing and conceptualizing a comprehensive UI/UX design strategy for the brand.
- Producing high-quality UX design solutions through wireframes, visual and graphic designs, flow diagrams, storyboards, site maps, and prototypes.
- Designing UI elements and tools such as navigation menus, search boxes, tabs, and widgets for our digital assets.
- Testing UI elements such as CTAs, banners, page layouts, page designs, page flows, and target links for landing pages.

Job Requirements

- 2+ years of proven experience as a UI Designer or in a similar role.
- A strong portfolio showcasing exceptional UI designs that reflect contemporary US design trends and standards.
- Proficiency in Figma and a solid understanding of design principles, typography, color theory, and layout.