VENTUREBEAT.COM
Nvidia unveils AI foundation models running on RTX AI PCs

Nvidia today announced foundation models running locally on Nvidia RTX AI PCs that supercharge digital humans, content creation, productivity and development.

GeForce has long been a vital platform for AI developers. The first GPU-accelerated deep learning network, AlexNet, was trained on the GeForce GTX 580 in 2012, and last year over 30% of published AI research papers cited the use of GeForce RTX. Jensen Huang, CEO of Nvidia, made the announcement during his CES 2025 opening keynote.

Now, with generative AI and RTX AI PCs, anyone can be a developer. A new wave of low-code and no-code tools, such as AnythingLLM, ComfyUI, Langflow and LM Studio, enables enthusiasts to use AI models in complex workflows via simple graphical user interfaces. NIM microservices connected to these GUIs will make it effortless to access and deploy the latest generative AI models. Nvidia AI Blueprints, built on NIM microservices, provide easy-to-use, preconfigured reference workflows for digital humans, content creation and more. To meet the growing demand from AI developers and enthusiasts, every top PC manufacturer and system builder is launching NIM-ready RTX AI PCs.

"AI is advancing at light speed, from perception AI to generative AI and now agentic AI," said Huang. "NIM microservices and AI Blueprints give PC developers and enthusiasts the building blocks to explore the magic of AI."

The NIM microservices will also be available with Nvidia Digits, a personal AI supercomputer that gives AI researchers, data scientists and students worldwide access to the power of Nvidia Grace Blackwell. Project Digits features the new Nvidia GB10 Grace Blackwell Superchip, offering a petaflop of AI computing performance for prototyping, fine-tuning and running large AI models.

Making AI NIMble

Foundation models, neural networks trained on immense amounts of raw data, are the building blocks for generative AI. Nvidia will release a pipeline of NIM microservices for RTX AI PCs from top model developers such as Black Forest Labs, Meta, Mistral and Stability AI. Use cases span large language models (LLMs), vision language models, image generation, speech, embedding models for retrieval-augmented generation (RAG), PDF extraction and computer vision.

"Making FLUX an Nvidia NIM microservice increases the rate at which AI can be deployed and experienced by more users, while delivering incredible performance," said Robin Rombach, CEO of Black Forest Labs, in a statement.

Nvidia today also announced the Llama Nemotron family of open models that provide high accuracy on a wide range of agentic tasks. The Llama Nemotron Nano model will be offered as a NIM microservice for RTX AI PCs and workstations, and excels at agentic AI tasks like instruction following, function calling, chat, coding and math.

NIM microservices include the key components for running AI on PCs and are optimized for deployment across Nvidia GPUs, whether in RTX PCs and workstations or in the cloud. Developers and enthusiasts will be able to quickly download, set up and run these NIM microservices on Windows 11 PCs with Windows Subsystem for Linux (WSL).

"AI is driving Windows 11 PC innovation at a rapid rate, and Windows Subsystem for Linux (WSL) offers a great cross-platform environment for AI development on Windows 11 alongside Windows Copilot Runtime," said Pavan Davuluri, corporate vice president of Windows at Microsoft, in a statement. "Nvidia NIM microservices, optimized for Windows PCs, give developers and enthusiasts ready-to-integrate AI models for their Windows apps, further accelerating deployment of AI capabilities to Windows users."

The NIM microservices, running on RTX AI PCs, will be compatible with top AI development and agent frameworks, including AI Toolkit for VSCode, AnythingLLM, ComfyUI, CrewAI, Flowise AI, LangChain, Langflow and LM Studio. Developers can connect applications and workflows built on these frameworks to AI models running on NIM microservices through industry-standard endpoints, enabling them to use the latest technology with a unified interface across the cloud, data centers, workstations and PCs. Enthusiasts will also be able to experience a range of NIM microservices using an upcoming release of the Nvidia ChatRTX tech demo.

Putting a face on agentic AI

To demonstrate how enthusiasts and developers can use NIM to build AI agents and assistants, Nvidia today previewed Project R2X, a vision-enabled PC avatar that can put information at a user's fingertips, assist with desktop apps and video conference calls, read and summarize documents, and more. The avatar is rendered using Nvidia RTX Neural Faces, a new generative AI algorithm that augments traditional rasterization with entirely generated pixels. The face is then animated by a new diffusion-based Nvidia Audio2Face-3D model that improves lip and tongue movement. R2X can be connected to cloud AI services such as OpenAI's GPT-4o and xAI's Grok, and to NIM microservices and AI Blueprints, such as PDF retrievers or alternative LLMs, via developer frameworks such as CrewAI, Flowise AI and Langflow.

AI Blueprints coming to PC

A wafer full of Nvidia Blackwell chips.

NIM microservices are also available to PC users through AI Blueprints, reference AI workflows that can run locally on RTX PCs. With these blueprints, developers can create podcasts from PDF documents, generate stunning images guided by 3D scenes and more.

The blueprint for PDF to podcast extracts text, images and tables from a PDF to create a podcast script that users can edit. It can also generate a full audio recording from the script using voices available in the blueprint or based on a user's voice sample. In addition, users can have a real-time conversation with the AI podcast host to learn more. The blueprint uses NIM microservices like Mistral-Nemo-12B-Instruct for language, Nvidia Riva for text-to-speech and automatic speech recognition, and the NeMo Retriever collection of microservices for PDF extraction.

The AI Blueprint for 3D-guided generative AI gives artists finer control over image generation. While AI can generate amazing images from simple text prompts, controlling image composition using only words can be challenging. With this blueprint, creators can use simple 3D objects laid out in a 3D renderer like Blender to guide AI image generation. The artist can create 3D assets by hand or generate them using AI, place them in the scene and set the 3D viewport camera. Then a prepackaged workflow powered by the FLUX NIM microservice uses the current composition to generate high-quality images that match the 3D scene.

Nvidia NIM microservices and AI Blueprints will be available starting in February. NIM-ready RTX AI PCs will be available from Acer, ASUS, Dell, GIGABYTE, HP, Lenovo, MSI, Razer and Samsung, and from local system builders Corsair, Falcon Northwest, LDLC, Maingear, Mifcon, Origin PC, PCS and Scan.
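The article says developers connect their frameworks to NIM microservices "through industry-standard endpoints." In practice NIM services expose an OpenAI-compatible REST API; the sketch below builds such a chat-completions request for a locally running microservice. The host, port, and model name are illustrative assumptions, not values from the article, and actually sending the request requires a running NIM container.

```python
import json
import urllib.request

# Hypothetical local NIM endpoint; the URL and model name below are
# placeholders for whatever microservice you have deployed locally.
NIM_URL = "http://localhost:8000/v1/chat/completions"


def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request for a local NIM service."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }
    return urllib.request.Request(
        NIM_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


if __name__ == "__main__":
    req = build_chat_request("llama-nemotron-nano", "Summarize this PDF in one line.")
    # With a NIM container listening on the port above, you would send it like:
    # with urllib.request.urlopen(req) as resp:
    #     print(json.load(resp)["choices"][0]["message"]["content"])
    print(req.full_url)
```

Because the endpoint follows the OpenAI wire format, the same request works unchanged against a cloud deployment by swapping the URL, which is the "unified interface across the cloud, data centers, workstations and PCs" the article describes.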
WWW.THEVERGE.COM
Asus just announced the world's first Thunderbolt 5 eGPU

This smoky black translucent box isn't a gaming PC; instead, it might be the most powerful single-cable portable docking station ever conceived. When you plug your laptop or handheld into the just-announced 2025 Asus XG Mobile, it promises to add the power of Nvidia's top-flight GeForce RTX 5090 mobile chip, up to 140 watts of electricity, two monitors, a USB and SD-card-reading hub, and 5Gbps ethernet, all simultaneously. That's because it's the world's first* Thunderbolt 5 external graphics card and one of the first Thunderbolt 5 docks, using the new 80 gigabit per second bidirectional link to do more things with a single cable than we've ever seen before.

The 2025 XG Mobile's ports and a standard AC power connector, because the power supply lives inside. Photo by Antonio G. Di Benedetto / The Verge

And if you're keeping score, I'm pretty sure it's also the first standards-based portable eGPU with an Nvidia graphics chip. While Asus' last-gen XG Mobile also boasted up to an Nvidia 4090, you could only tap into that power with a proprietary port found only on a few Asus devices. (Its USB4 and Oculink rivals have mostly featured the AMD Radeon 7600M XT.) None of that makes it the most powerful eGPU out there, as I currently have no performance figures from Asus, and you can definitely go further with bigger docks that can fit desktop graphics cards rather than mobile GPUs. But Asus rep Anthony Spence tells me that the Thunderbolt 5 link does give you up to 64Gbps of bandwidth for its Nvidia graphics, more than USB4 and tied with Oculink, and I'm wowed that Asus managed to fit all this and a 350W power supply (no external brick!) into a sub-2.2-pound package with a fold-out kickstand. Asus says it's even 25 percent lighter and 18 percent smaller than the previous proprietary model. It's got HDMI 2.1 and DisplayPort 2.1 for video output and a pair of 10Gbps USB-A ports, in case you're wondering.

Note that it comes with a little vertical stand, too. Image: Asus

When it arrives later in Q1, it won't come cheap. Spence says the top-tier XG Mobile with an RTX 5090 laptop chip will cost $2,199.99, meaning you could almost certainly cobble together a more powerful (but stationary) solution yourself. That said, Asus does plan to sell a lower-end $1,199.99 version with Nvidia's mobile RTX 5070 Ti. Again, you're paying for compact power here rather than maximum bang for the buck.

Yes, that Asus ROG logo is light-up, programmable RGB using the company's Aura Sync. You can also make out the top-mounted SD card receptacle. Photo by Antonio G. Di Benedetto / The Verge

While it should work with any Thunderbolt 4 or USB4 laptop or handheld, including Asus' own ROG Ally X, you'll likely want the still-rare Thunderbolt 5 to get the full GPU bandwidth here. Finding a Thunderbolt 5 computer that doesn't already have a powerful discrete GPU might be tough, but perhaps some of 2025's thin-and-light laptops will seize this opportunity to double as potent travel desktops.

*We are aware of one possible Thunderbolt 5 eGPU enclosure, designed to house a desktop graphics card, but that WinStar has barely even been detailed yet.
WWW.THEVERGE.COM
The ROG Strix Scar 16 and 18 come with a lid that lights up and more RGB

Following a teaser last month, Asus' latest ROG Strix Scar gaming laptops have arrived, and they're leaning all the way into the gamer aesthetic. The 2025 Scar 16 and 18 come with RGB lights all the way around the bottom of the chassis, as well as a user-programmable LED dot-matrix display on the lid, as seen on other ROG devices like Asus' gaming phones.

Beneath the flashy exterior, the Scar 16 and 18 can be maxed out with an Intel Core Ultra 9 275HX processor and an Nvidia GeForce RTX 5090 GPU. They can also be configured with up to 64GB of DDR5-5600 RAM and a 2TB PCIe Gen4 SSD. The ROG Nebula HDR display comprises a 16:10 2.5K Mini LED panel with a peak brightness of 1,200 nits and a 240Hz refresh rate. There are two Thunderbolt 5 ports included, and the design allows for easy access to the bottom panel for component upgrades.

Photo by Antonio G. Di Benedetto / The Verge

The Strix Scar 16 and 18 have all the cooling tech you'd expect from a gaming laptop of this caliber, including an end-to-end vapor chamber and a sandwiched heatsink. Combined with the Conductonaut Extreme liquid metal treatment on the GPU and CPU, Asus claims it can keep fan noise to a library-like 45dB, even during extended gaming sessions.

On top of all that, the ROG Strix Scar comes with the aforementioned light show. Asus calls it AniMe Vision, and you can customize it to display personalized animations and sync it with any other AniMe Vision devices you own. Download some prebaked artwork or cook up your own using Asus' pixel editor; the choice is yours.

The ROG Strix Scar starts at $2,599; Asus says its new gaming laptops will begin shipping in February.
WWW.MARKTECHPOST.COM
NVIDIA AI Introduces Cosmos World Foundation Model (WFM) Platform to Advance Physical AI Development

The development of Physical AI (AI systems designed to simulate, predict, and optimize real-world physics) has long been constrained by significant challenges. Building accurate models often demands extensive computational resources and time, with simulations sometimes requiring days or weeks to produce actionable results. Additionally, the complexity of scaling these systems for practical use across industries such as manufacturing, healthcare, and robotics has further hindered their widespread adoption. These challenges underscore the need for tools that simplify model development while delivering efficiency and precision.

NVIDIA has introduced the Cosmos World Foundation Model Platform to address these challenges head-on. The platform offers a unified framework that integrates advanced AI models, computational tools, and user-friendly features, all designed to streamline the development, simulation, and deployment of physical AI systems. It is fully optimized to work within NVIDIA's existing AI and GPU ecosystem, ensuring compatibility and scalability.

Cosmos features pre-trained foundation models capable of simulating intricate physical processes, leveraging NVIDIA's state-of-the-art GPUs for high-performance computing. The platform is designed with accessibility in mind, providing tools for researchers and developers to build and test models efficiently. It supports critical applications across fields such as climate modeling, autonomous systems, and materials science, bridging the gap between research advancements and practical implementation.

Technical Details and Benefits of the Cosmos Platform

At its core, Cosmos utilizes pre-trained models trained on extensive datasets encompassing diverse physical phenomena. These models incorporate NVIDIA's latest advancements in transformer architectures and large-scale training, enabling them to generalize across various domains with high accuracy. The platform integrates with NVIDIA's proprietary tools, such as CUDA-X AI and Omniverse, ensuring seamless workflow compatibility.

One of Cosmos' key features is its real-time simulation capability, powered by NVIDIA's GPUs. This significantly reduces the time required for iterative design and testing, making the platform especially valuable for industries such as automotive engineering. The modular architecture of Cosmos allows it to be integrated into existing workflows without requiring extensive modifications, further enhancing its usability.

The platform also prioritizes model transparency and reliability. Through visualization tools, users can better understand and validate predictions, fostering trust in the results. Collaborative features enable multidisciplinary teams to work together effectively, an essential capability for addressing complex, cross-disciplinary challenges.

Conclusion

NVIDIA's Cosmos World Foundation Model Platform offers a practical and robust solution to many of the challenges faced in physical AI development. By combining advanced technology with a user-focused design, Cosmos supports efficient and accurate model development, fostering innovation across various fields. The platform's ability to deliver real-world results, such as improved energy efficiency and faster simulation times, highlights its potential to transform industries. With Cosmos, NVIDIA is advancing the capabilities of physical AI, making it more accessible and impactful for researchers and practitioners alike.

Aswin AK is a consulting intern at MarkTechPost. He is pursuing his Dual Degree at the Indian Institute of Technology, Kharagpur. He is passionate about data science and machine learning, bringing a strong academic background and hands-on experience in solving real-life cross-domain challenges.
TOWARDSAI.NET
My 6 Secret Tips for Getting an ML Job in 2025

January 7, 2025. Author(s): Boris Meinardus. Originally published on Towards AI.

Getting a machine learning job in 2025 feels almost impossible, at least if you don't know what you are doing!

Nowadays, I somehow managed to become an AI researcher at one of the best AI startups in the world! But to get there, I repeated the same common mistakes over and over until I learned that there are many techniques no one really talks about. So, in this blog post, I will share my six secret tips, which I hope will show you that in 2025 there are many different ways to get a job in ML.

Ask yourself this: How do you show your skill? How do you show that you can actually provide value to a project?

A friend of mine was working on a personal project using the Hugging Face transformers library and was wondering why his code was so slow and why it was taking up so much memory. He dug deeper into the Hugging Face code, and he actually found something that looked like a bug! So he did a lot of testing to see if he was just mistaking this for a bug or if it really was a bug. When he found out that his fix actually improved the performance, he knew he had something to share.

Read the full blog on Medium.
TOWARDSAI.NET
Building Multimodal RAG Application #7: Multimodal RAG with Multimodal LangChain

January 6, 2025 (last updated January 7, 2025). Author(s): Youssef Hosni. Originally published on Towards AI.

Multimodal retrieval-augmented generation (RAG) is transforming how AI applications handle complex information by merging retrieval and generation capabilities across diverse data types, such as text, images, and video. Unlike traditional RAG, which typically focuses on text-based retrieval and generation, multimodal RAG systems can pull in relevant content from both text and visual sources to generate more contextually rich, comprehensive responses.

This article, the seventh installment in our Building Multimodal RAG Applications series, dives into building multimodal RAG systems with LangChain. We will wrap all the modules created in the previous articles in LangChain chains using the RunnableParallel, RunnablePassthrough, and RunnableLambda classes from LangChain.

The series so far:
1. Introduction to Multimodal RAG Applications (published)
2. Multimodal Embeddings (published)
3. Multimodal RAG Application Architecture (published)
4. Processing Videos for Multimodal RAG (published)
5. Multimodal Retrieval from Vector Stores (published)
6. Large Vision Language Models (LVLMs) (published)
7. Multimodal RAG with Multimodal LangChain (you are here!)
8. Putting It All Together! Building Multimodal RAG Application (coming soon!)

You can find the code and datasets used in this series in this GitHub repo. The article covers setting up the working environment, invoking the multimodal RAG system with a query, and having the multimodal RAG system show the retrieved image/frame.

Read the full blog on Medium.
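The composition pattern the article describes, fanning a query out to a retriever with RunnableParallel while RunnablePassthrough forwards it unchanged, then piping the combined dict into an LVLM step, can be approximated without any dependencies. This is a conceptual sketch of that wiring, not the article's actual code: the stub retriever, the stub LVLM, and every function name here are illustrative placeholders.

```python
# Dependency-free sketch of the RunnableParallel / RunnablePassthrough /
# RunnableLambda composition style used in LangChain chains.

def passthrough(x):
    # Mimics RunnablePassthrough: forward the input unchanged.
    return x

def parallel(**branches):
    # Mimics RunnableParallel: run every branch on the same input
    # and collect the results into a dict keyed by branch name.
    return lambda x: {name: fn(x) for name, fn in branches.items()}

def pipe(*steps):
    # Mimics the `|` chaining operator: each step feeds the next.
    def chain(x):
        for step in steps:
            x = step(x)
        return x
    return chain

# Stubs standing in for the vector-store retriever and LVLM modules
# built earlier in the series (placeholders, not the article's code).
def retrieve_frame(query):
    return {"frame": "frame_042.png", "transcript": "a cat jumps onto a table"}

def lvlm_answer(inputs):
    ctx = inputs["context"]
    return f"Q: {inputs['query']} | evidence: {ctx['transcript']} ({ctx['frame']})"

# The chain: retrieve context and pass the raw query through in parallel,
# then hand both to the LVLM step.
rag_chain = pipe(
    parallel(context=retrieve_frame, query=passthrough),
    lvlm_answer,
)

print(rag_chain("What does the cat do?"))
```

In real LangChain code, `parallel(...)` would be `RunnableParallel(context=retriever, query=RunnablePassthrough())`, `lvlm_answer` would be wrapped in a `RunnableLambda`, and `pipe` would be the `|` operator; the data flow is the same.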
WWW.IGN.COM
Horizon Zero Dawn Film Confirmed

Sony has announced a movie adaptation of Horizon Zero Dawn. The collaboration between PlayStation Studios and Columbia Pictures was announced during Sony's CES 2025 press conference. Columbia Pictures produced the successful 2022 Uncharted movie starring Tom Holland as Nathan Drake and Mark Wahlberg as Victor Sullivan.

Horizon Zero Dawn is Guerrilla Games' hugely popular post-apocalyptic adventure starring machine hunter Aloy. No timeframe for the film's release was announced.

Speaking on-stage at CES, Asad Qizilbash, head of PlayStation Productions, said: "Columbia Pictures and PlayStation Productions are at the early stages of developing a film adaptation of the award-winning Horizon Zero Dawn. Just imagine: Aloy's beloved origin story, set in a vibrant far-future world filled with giant machines, brought to you for the first time on the big screen."

Horizon Zero Dawn is yet another PlayStation game to get a movie adaptation. During the same press conference, Sony announced a film adaptation of Helldivers 2 and an anime series adaptation of Ghost of Tsushima.

Developing...

Wesley is the UK News Editor for IGN. Find him on Twitter at @wyp100. You can reach Wesley at wesley_yinpoole@ign.com or confidentially at wyp100@proton.me.
WWW.IGN.COM
Ghost of Tsushima Anime Coming Exclusively to Crunchyroll in 2027

A Ghost of Tsushima anime is in the works and set to launch exclusively on Sony-owned Crunchyroll in 2027. The new series, the first anime adaptation of a PlayStation Studios game, is not only based generally on Sucker Punch's Ghost of Tsushima, but specifically on the Legends co-op multiplayer portion.

It's produced in collaboration with Aniplex, the studio behind the likes of Demon Slayer: Kimetsu no Yaiba, Solo Leveling, and Sword Art Online, and is directed by Takanobu Mizuno, with Gen Urobuchi (NITRO PLUS) on story composition and animation by Kamikaze Douga. Sony Music will serve as the strategic music and soundtrack partner for the series.

Ghost of Tsushima is an open-world samurai action-adventure game set in feudal Japan. It saw enormous success upon its launch on PlayStation 4 in 2020, then on PlayStation 5 a year later and on PC in 2024. Across all platforms, Ghost of Tsushima has sold over 13 million copies.

"The Ghost of Tsushima anime will offer fans an exciting new way to experience the game in an anime style that will be bold and groundbreaking," commented Rahul Purini, president of Crunchyroll.

"Having already proven the immense quality and versatility of our gaming properties across multiple successful film and television projects, we couldn't be more excited to announce our first ever anime adaptation," added Asad Qizilbash, head of PlayStation Productions. "Ghost of Tsushima's rich, immersive world and its fantastical Legends mode based on Japanese mythology provide the perfect canvas for this project, and Aniplex is the perfect partner to translate Sucker Punch Productions' hit video game into a stunning new anime series."

That's all we have for now. Expect more details, including the creative team and cast, at some point in the future. Meanwhile, a Ghost of Tsushima movie is said to be in the works with John Wick director Chad Stahelski at the helm, although we haven't had an update in quite some time. As for Sucker Punch, it's working on the sequel Ghost of Yotei for release on PS5 at some point in 2025.

The Ghost of Tsushima anime is the latest adaptation effort from PlayStation Productions, which was founded in 2019 to take PlayStation games into the worlds of film and TV. Since then, films based on Uncharted and Gran Turismo, and TV series based on Twisted Metal and The Last of Us, have all been released to varying degrees of success. The Last of Us Season 2 is due out later this year.

Wesley is the UK News Editor for IGN. Find him on Twitter at @wyp100. You can reach Wesley at wesley_yinpoole@ign.com or confidentially at wyp100@proton.me.
WWW.CNET.COM
Alienware Rediscovers Its Enthusiast Roots With Area-51 Desktop and Laptops

I still remember the first time I saw an Alienware Area-51 gaming desktop in person back in 2004. This huge, black plastic tower -- with giant vents at the front shaped like the eyes on its alien head logo -- was tremendous, and I wanted one desperately (even if it would've eaten up at least a quarter of the floor space of my New York apartment at the time). The Area-51 desktop was discontinued in 2017, but Alienware decided the time was right to reintroduce the line with a full-size 80L tower (though not the giant plastic alien head chassis). And I still want one.

Announced alongside new Nvidia GeForce RTX 50-series graphics cards at CES 2025, the redesign is squarely aimed at enthusiasts who want a gaming PC with a future. Alienware says it can handle more than 600 watts of dedicated graphics power and up to 280 watts of dedicated processing power (up to an Intel Core Ultra 9 285K). It uses standard components so everything can be serviced and replaced. Plus, its size can accommodate the largest components available today with room to spare.

Josh Goldman/CNET

Part of the reason you'll be able to easily upgrade in the future is its positive-pressure airflow system. It uses a combination of fans of different sizes, along with gaskets inside the chassis, to create greater air pressure inside the case than outside, which forces hot air out through the rear vents without the use of exhaust fans. Liquid cooling is also an option.

The new Alienware Area-51 desktop will be available later in the first quarter of 2025 with an initial high-end configuration priced at $4,500. Lower-end configurations will be available later in the year.

Josh Goldman/CNET

Alienware also introduced new Area-51 gaming laptops. Available in 16- and 18-inch models, they'll feature next-gen Nvidia graphics and up to an Intel Core Ultra 9 275HX CPU. The anodized aluminum bodies have a Liquid Teal color-shifting iridescent finish. The rear exhaust shelf is translucent and has "lighting animations that imitate the unpredictable motions of the Aurora Borealis."

Josh Goldman/CNET

On the bottoms are clear Gorilla Glass windows so you can see the components, and Alienware also used fans with RGB lights that shine through the glass as well as up through the keyboard. The Area-51 laptops support the highest total power ceiling in a gaming laptop, Alienware says: up to 175 watts of total graphics power and up to 105 watts of thermal design power for the processor, simultaneously.

Like the desktop, the laptops will arrive later in the first quarter, with a high-end configuration at launch going for $3,200. Entry-level configs will be available later, starting at $2,000.