• scriptstdstring setlocale multithread issue (crashes) and proposed fix
    gamedev.net
    Hi, I have noticed that the string-to-float conversion makes multithreaded programs crash when used intensively from multiple threads in apps that use a non-C locale. This is due to the fact that setlocale is not thread-safe at all. Please find below a proposed patch to overcome this issue (using uselocale instead, with a custom implementation for Windows, as it does not exist there): Index: scriptstdstring/scriptstdstring.cpp==============
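    For readers hitting the same crash, here is a minimal sketch of the locale-isolation idea the patch describes: converting a string to a double in the "C" locale without touching the process-wide locale. It assumes newlocale/uselocale on POSIX (on some platforms these live in <xlocale.h>) and falls back to MSVC's _create_locale/_strtod_l on Windows; the function name ParseDoubleCLocale is illustrative and not taken from the actual patch.

```cpp
// Sketch: parse a double in the "C" locale without calling setlocale(),
// so conversions running concurrently in other threads are unaffected.
#include <cstdlib>   // strtod, _strtod_l (MSVC)
#include <locale.h>  // newlocale/uselocale (POSIX) or _create_locale (MSVC)

double ParseDoubleCLocale(const char* text, char** end)
{
#ifdef _WIN32
    // MSVC has no uselocale(), but provides per-call locale variants.
    static _locale_t cLocale = _create_locale(LC_ALL, "C");
    return _strtod_l(text, end, cLocale);
#else
    // Switch only the calling thread to the "C" numeric locale, convert,
    // then restore whatever locale the thread was using before.
    // The static locale object is intentionally never freed.
    static locale_t cLocale = newlocale(LC_NUMERIC_MASK, "C", (locale_t)0);
    locale_t previous = uselocale(cLocale);
    double value = std::strtod(text, end);
    uselocale(previous);
    return value;
#endif
}
```

    The posted patch applies this same idea inside scriptstdstring.cpp's string-to-number conversions; the snippet above only illustrates the technique, not the patch itself.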
  • 8th Wall Now Works Across All Major iOS Apps Including Instagram, Snapchat and More
    gamedev.net
    After that, they also added AI to it, and now they're testing with Trivver to see how they can more effectively squeeze in innovative advertising.
  • How do you give purpose to a game?
    gamedev.net
    I've worked on my game for about a month in public, only to realize that it has no purpose. There's no actual meat to the burger, if that makes sense; it's just all the other toppings you would put on a burger, and while good, that isn't really what you want from a burger. So I come to ask: how do you give purpose to a game? Unlike a burger, games are very complex and require a lot more meat than just beef. I'm open to anything that could help me.
  • Tidy Multiplayer Design
    gamedev.net
    Hello everyone, I've been working on a game project: a multiplayer fighter with client-server (authoritative) networking. It works, but it was written in a bit of salad code (worsened by the fact that it was migrated from Unity). I'm trying to tidy things up a bit, and given the client-server nature, the main flow I have goes like this (I'm not writing classes here, but operations to perform regarding players, in order of execution): Players Local: read input
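    For anyone tidying up a similar codebase, here is a minimal sketch of one common shape for the authoritative side of that flow, assuming a fixed-rate server tick that applies buffered client inputs, simulates, and then broadcasts a snapshot. All type and function names here are hypothetical and not taken from the poster's project.

```cpp
// Sketch of an authoritative server tick: clients only send inputs,
// the server owns the simulation and broadcasts the resulting state.
#include <cstdint>
#include <unordered_map>
#include <vector>

struct PlayerInput { std::uint32_t tick; float moveX; bool attack; };
struct PlayerState { float x, y; int health; };
struct WorldState  { std::uint32_t tick; std::unordered_map<int, PlayerState> players; };

class AuthoritativeServer {
public:
    // Called by the networking layer whenever an input packet arrives.
    void OnInputReceived(int playerId, const PlayerInput& input) {
        pendingInputs_[playerId].push_back(input);
    }

    // Fixed-rate tick: apply inputs, simulate, then broadcast the result.
    void Tick(float dt) {
        for (auto& [playerId, inputs] : pendingInputs_) {
            for (const PlayerInput& input : inputs)
                ApplyInput(world_.players[playerId], input, dt);
            inputs.clear();
        }
        SimulateWorld(world_, dt);   // physics, hit detection, game rules
        ++world_.tick;
        BroadcastState(world_);      // clients render/reconcile this snapshot
    }

private:
    void ApplyInput(PlayerState& p, const PlayerInput& in, float dt) {
        p.x += in.moveX * 5.0f * dt; // placeholder movement rule
    }
    void SimulateWorld(WorldState&, float) { /* game rules go here */ }
    void BroadcastState(const WorldState&) { /* send snapshot to clients */ }

    WorldState world_{};
    std::unordered_map<int, std::vector<PlayerInput>> pendingInputs_;
};
```

    Keeping the ordering in one place (collect inputs, simulate, broadcast) tends to make it easier to bolt client-side prediction and reconciliation on later.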
  • Unleash the Dragonborn: Elder Scrolls V: Skyrim Special Edition Joins GeForce NOW
    blogs.nvidia.com
    Hey, you. You're finally awake. It's the summer of Elder Scrolls: whether a seasoned Dragonborn or a new adventurer, dive into the legendary world of Tamriel this GFN Thursday as The Elder Scrolls V: Skyrim Special Edition joins the cloud.
    Epic adventures await, along with nine new games joining the GeForce NOW library this week. Plus, make sure to catch the GeForce NOW Summer Sale for 50% off new Ultimate and Priority memberships.
    Unleash the Dragonborn
    Taking an arrow to the knee won't stop gamers from questing in the cloud.
    Experience the legendary adventures, breathtaking landscapes and immersive storytelling of the iconic role-playing game The Elder Scrolls V: Skyrim Special Edition from Bethesda Game Studios, now accessible on any device from the cloud. Become the Dragonborn and defeat Alduin the World-Eater, a dragon prophesied to destroy the world.
    Explore a vast landscape, complete quests and improve skills to develop characters in the open world of Skyrim. The Special Edition includes add-ons with all-new features, including remastered art and effects. It also brings the adventure of Bethesda Game Studios' creations, including new quests, environments, characters, dialogue, armor and weapons.
    Get ready to embark on unforgettable quests, battle fearsome foes and uncover the rich lore of the Elder Scrolls universe, all with the power and convenience of GeForce NOW. Fus Ro Dah with an Ultimate membership to stream at up to 4K resolution and 120 frames per second with up to eight-hour gaming sessions for the ultimate immersive experience throughout the realms of Tamriel.
    All Hands on Deck
    Get those sea legs ready for a reward. Wargaming is bringing back an in-game event exclusively for GeForce NOW members this week.
    Through Tuesday, July 30, members who complete the quest while streaming World of Warships can earn up to five GeForce NOW one-day Priority codes, one for each day of the challenge. Aspiring admirals can learn more on the World of Warships blog and social channels.
    Shiny and New
    Rendezvous with death. Take on classic survival horror in CONSCRIPT from Jordan Mochi and Team17. Inspired by legendary games in the genre, the game is set in 1916 during the Great War. CONSCRIPT blends all the punishing mechanics of older horror games into a cohesive, tense and unique experience. Play as a French soldier searching for his missing-in-action brother during the Battle of Verdun. Search through twisted trenches, navigate overrun forts and cross no-man's-land to find him.
    Here's the full list of new games this week:
    • Cataclismo (New release on Steam, July 22)
    • CONSCRIPT (New release on Steam, July 23)
    • F1 Manager 2024 (New release on Steam, July 23)
    • EARTH DEFENSE FORCE 6 (New release on Steam, July 25)
    • The Elder Scrolls V: Skyrim (Steam)
    • The Elder Scrolls V: Skyrim Special Edition (Steam, Epic Games Store and Xbox, available on PC Game Pass)
    • Gang Beasts (Steam and Xbox, available on PC Game Pass)
    • Kingdoms and Castles (Steam)
    • The Settlers: New Allies (Steam)
    What are you planning to play this weekend? Let us know on X or in the comments below. NVIDIA GeForce NOW (@NVIDIAGFN), July 24, 2024
  • Demystifying AI-Assisted Artistry With Adobe Apps Using NVIDIA RTX
    blogs.nvidia.com
    Editor's note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for RTX PC users.
    Adobe Creative Cloud applications, which tap NVIDIA RTX GPUs, are designed to enhance the creativity of users, empowering them to work faster and focus on their craft. These tools seamlessly integrate into existing creator workflows, enabling greater productivity and delivering power and precision.
    Look to the Light
    Generative AI creates new data in forms such as images or text by learning from existing data. It effectively visualizes and generates content to match what a user describes and helps open up fresh avenues for creativity.
    Adobe Firefly is Adobe's family of creative generative AI models that offer new ways to ideate and create while assisting creative workflows using generative AI. They're designed to be safe for commercial use and were trained, using NVIDIA GPUs, on licensed content, like Adobe Stock Images, and public domain content where copyright has expired.
    Firefly features are integrated in Adobe's most popular creative apps.
    Adobe Photoshop features the Generative Fill tool, which uses simple description prompts to easily add content from images. With the latest Reference Image feature, currently in beta, users can also upload a sample image to get image results closer to their desired output. Use Generative Fill to add content and Reference Image to refine it.
    Generative Expand allows artists to extend the border of their image with the Crop tool, filling in bigger canvases with new content that automatically blends in with the existing image. Bigger canvas? Not a problem.
    RTX-accelerated Neural Filters, such as Photo Restoration, enable complex adjustments such as colorizing black-and-white photos and performing style transfers using AI. The Smart Portrait filter, which allows non-destructive editing with filters, is based on work from NVIDIA Research.
    The brand-new Generative Shape Fill (beta) in Adobe Illustrator, powered by the latest Adobe Firefly Vector Model, allows users to accelerate design workflows by quickly filling shapes with detail and color in their own styles. With Generative Shape Fill, designers can easily match the style and color of their own artwork to create a wide variety of editable and scalable vector graphic options.
    Adobe Illustrator's Generative Recolor feature lets creators type in a text prompt to explore custom color palettes and themes for their vector artwork in seconds. Color us impressed.
    NVIDIA will continue working with Adobe to support advanced generative AI models, with a focus on deep integration into the apps the world's leading creators use.
    Making Moves on Video
    Adobe Premiere Pro is one of the most popular and powerful video editing solutions. Its Enhance Speech tool, accelerated by RTX, uses AI to remove unwanted noise and improve the quality of dialogue clips so they sound professionally recorded. It's up to 4.5x faster on RTX PCs.
    Adobe Premiere Pro's AI-powered Enhance Speech tool removes unwanted noise and improves dialogue quality.
    Auto Reframe, another Adobe Premiere feature, uses GPU acceleration to identify and track the most relevant elements in a video, and intelligently reframes video content for different aspect ratios.
    Scene Edit Detection automatically finds the original edit points in a video, a necessary step before the video editing stage begins.
    Visual Effects
    Separating a foreground object from a background is a crucial step in many visual effects and compositing workflows. Adobe After Effects has a new feature that uses a matte to isolate an object, enabling capabilities including background replacement and the selective application of effects to the foreground.
    Using the Roto Brush tool, artists can draw strokes on representative areas of the foreground and background elements. After Effects uses that information to create a segmentation boundary between the foreground and background elements, delivering cleaner cutouts with fewer clicks.
    Creating 3D Product Shots
    The Substance 3D Collection is Adobe's solution for 3D material authoring, texturing and rendering, enabling users to rapidly create stunningly photorealistic 3D content, including models, materials and lighting.
    Visualizing products and designs in the context of a space is compelling, but it can be time-consuming to find the right environment for the objects to live in. Substance 3D Stager's Generative Background feature, powered by Adobe Firefly, solves this issue by letting artists quickly explore generated backgrounds to composite 3D models. Once an environment is selected, Stager can automatically match the perspective and lighting to the generated background.
    Material Authoring With AI
    Adobe Substance 3D Sampler, also part of the Substance 3D Collection, is designed to transform images of surfaces and objects into photorealistic physically based rendering (PBR) materials, 3D models and high-dynamic range environment lights. With the recent introduction of new generative workflows powered by Adobe Firefly, Sampler is making it easier than ever for artists to explore variations when creating materials for everything from product visualization projects to the latest AAA games.
    Sampler's Text-to-Texture feature allows users to generate tiled images from detailed text prompts. These generated images can then be edited and transformed into photorealistic PBR materials using the machine learning-powered Image-to-Material feature or any Sampler filter.
    Image-to-Texture similarly enables the creation of tiled textures from reference images, providing an alternate way to prompt and generate variations from existing visual content. Adobe 3D Sampler's Image-to-Texture feature.
    Sampler's Text-to-Pattern feature uses text prompts to generate tiling patterns, which can be used as base colors or inputs for various filters, such as the Cloth Weave filter for creating original fabric materials.
    All of these generative AI features in the Substance 3D Collection, supercharged with RTX GPUs, are designed to help 3D creators ideate and create faster.
    Photo-tastic Features
    Adobe Lightroom's AI-powered Raw Details feature produces crisp detail and more accurate renditions of edges, improves color rendering and reduces artifacts, enhancing the image without changing its original resolution. This feature is handy for large displays and prints, where fine details are visible. Enhance, enhance, enhance.
    Super Resolution helps create an enhanced image with similar results as Raw Details but with 2x the linear resolution. This means that the enhanced image will have 2x the width and height of the original image, or 4x the total pixel count.
    This is especially useful for increasing the resolution of cropped imagery.
    For faster editing, AI-powered, RTX-accelerated masking tools like Select Subject, which isolates people from an image, and Select Sky, which captures skies, enable users to create complex masks with the click of a button.
    Visit Adobe's AI features page for a complete list of AI features using RTX. Looking for more AI-powered content creation apps? Consider NVIDIA Broadcast, which transforms any room into a home studio, free for RTX GPU owners.
    Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what's new and what's next by subscribing to the AI Decoded newsletter.
  • How NVIDIA AI Foundry Lets Enterprises Forge Custom Generative AI Models
    blogs.nvidia.com
    Businesses seeking to harness the power of AI need customized models tailored to their specific industry needs. NVIDIA AI Foundry is a service that enables enterprises to use data, accelerated computing and software tools to create and deploy custom models that can supercharge their generative AI initiatives.
    Just as TSMC manufactures chips designed by other companies, NVIDIA AI Foundry provides the infrastructure and tools for other companies to develop and customize AI models using DGX Cloud, foundation models, NVIDIA NeMo software, NVIDIA expertise, as well as ecosystem tools and support. The key difference is the product: TSMC produces physical semiconductor chips, while NVIDIA AI Foundry helps create custom models. Both enable innovation and connect to a vast ecosystem of tools and partners.
    Enterprises can use AI Foundry to customize NVIDIA and open community models, including the new Llama 3.1 collection, as well as NVIDIA Nemotron, CodeGemma by Google DeepMind, CodeLlama, Gemma by Google DeepMind, Mistral, Mixtral, Phi-3, StarCoder2 and others.
    Industry Pioneers Drive AI Innovation
    Industry leaders Amdocs, Capital One, Getty Images, KT, Hyundai Motor Company, SAP, ServiceNow and Snowflake are among the first using NVIDIA AI Foundry. These pioneers are setting the stage for a new era of AI-driven innovation in enterprise software, technology, communications and media.
    "Organizations deploying AI can gain a competitive edge with custom models that incorporate industry and business knowledge," said Jeremy Barnes, vice president of AI Product at ServiceNow. "ServiceNow is using NVIDIA AI Foundry to fine-tune and deploy models that can integrate easily within customers' existing workflows."
    The Pillars of NVIDIA AI Foundry
    NVIDIA AI Foundry is supported by the key pillars of foundation models, enterprise software, accelerated computing, expert support and a broad partner ecosystem. Its software includes AI foundation models from NVIDIA and the AI community as well as the complete NVIDIA NeMo software platform for fast-tracking model development.
    The computing muscle of NVIDIA AI Foundry is NVIDIA DGX Cloud, a network of accelerated compute resources co-engineered with the world's leading public clouds: Amazon Web Services, Google Cloud and Oracle Cloud Infrastructure. With DGX Cloud, AI Foundry customers can develop and fine-tune custom generative AI applications with unprecedented ease and efficiency, and scale their AI initiatives as needed without significant upfront investments in hardware. This flexibility is crucial for businesses looking to stay agile in a rapidly changing market.
    If an NVIDIA AI Foundry customer needs assistance, NVIDIA AI Enterprise experts are on hand to help. NVIDIA experts can walk customers through each of the steps required to build, fine-tune and deploy their models with proprietary data, ensuring the models tightly align with their business requirements.
    NVIDIA AI Foundry customers have access to a global ecosystem of partners that can provide a full range of support. Accenture, Deloitte, Infosys, Tata Consultancy Services and Wipro are among the NVIDIA partners that offer AI Foundry consulting services that encompass design, implementation and management of AI-driven digital transformation projects.
    Accenture is first to offer its own AI Foundry-based offering for custom model development, the Accenture AI Refinery framework. Additionally, service delivery partners such as Data Monsters, Quantiphi, Slalom and SoftServe help enterprises navigate the complexities of integrating AI into their existing IT landscapes, ensuring that AI applications are scalable, secure and aligned with business objectives.
    Customers can develop NVIDIA AI Foundry models for production using AIOps and MLOps platforms from NVIDIA partners, including ActiveFence, AutoAlign, Cleanlab, DataDog, Dataiku, Dataloop, DataRobot, Deepchecks, Domino Data Lab, Fiddler AI, Giskard, New Relic, Scale, Tumeryk and Weights & Biases.
    Customers can output their AI Foundry models as NVIDIA NIM inference microservices, which include the custom model, optimized engines and a standard API, to run on their preferred accelerated infrastructure.
    Inferencing solutions like NVIDIA TensorRT-LLM deliver improved efficiency for Llama 3.1 models to minimize latency and maximize throughput. This enables enterprises to generate tokens faster while reducing the total cost of running the models in production. Enterprise-grade support and security is provided by the NVIDIA AI Enterprise software suite.
    NVIDIA NIM and TensorRT-LLM minimize inference latency and maximize throughput for Llama 3.1 models to generate tokens faster.
    The broad range of deployment options includes NVIDIA-Certified Systems from global server manufacturing partners including Cisco, Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro, as well as cloud instances from Amazon Web Services, Google Cloud and Oracle Cloud Infrastructure.
    Additionally, Together AI, a leading AI acceleration cloud, today announced it will enable its ecosystem of over 100,000 developers and enterprises to use its NVIDIA GPU-accelerated inference stack to deploy Llama 3.1 endpoints and other open models on DGX Cloud.
    "Every enterprise running generative AI applications wants a faster user experience, with greater efficiency and lower cost," said Vipul Ved Prakash, founder and CEO of Together AI. "Now, developers and enterprises using the Together Inference Engine can maximize performance, scalability and security on NVIDIA DGX Cloud."
    NVIDIA NeMo Speeds and Simplifies Custom Model Development
    With NVIDIA NeMo integrated into AI Foundry, developers have at their fingertips the tools needed to curate data, customize foundation models and evaluate performance. NeMo technologies include:
    • NeMo Curator is a GPU-accelerated data-curation library that improves generative AI model performance by preparing large-scale, high-quality datasets for pretraining and fine-tuning.
    • NeMo Customizer is a high-performance, scalable microservice that simplifies fine-tuning and alignment of LLMs for domain-specific use cases.
    • NeMo Evaluator provides automatic assessment of generative AI models across academic and custom benchmarks on any accelerated cloud or data center.
    • NeMo Guardrails orchestrates dialog management, supporting accuracy, appropriateness and security in smart applications with large language models to provide safeguards for generative AI applications.
    Using the NeMo platform in NVIDIA AI Foundry, businesses can create custom AI models that are precisely tailored to their needs. This customization allows for better alignment with strategic objectives, improved accuracy in decision-making and enhanced operational efficiency.
    For instance, companies can develop models that understand industry-specific jargon, comply with regulatory requirements and integrate seamlessly with existing workflows.
    "As a next step of our partnership, SAP plans to use NVIDIA's NeMo platform to help businesses to accelerate AI-driven productivity powered by SAP Business AI," said Philipp Herzig, chief AI officer at SAP.
    Enterprises can deploy their custom AI models in production with NVIDIA NeMo Retriever NIM inference microservices. These help developers fetch proprietary data to generate knowledgeable responses for their AI applications with retrieval-augmented generation (RAG).
    "Safe, trustworthy AI is a non-negotiable for enterprises harnessing generative AI, with retrieval accuracy directly impacting the relevance and quality of generated responses in RAG systems," said Baris Gultekin, Head of AI, Snowflake. "Snowflake Cortex AI leverages NeMo Retriever, a component of NVIDIA AI Foundry, to further provide enterprises with easy, efficient, and trusted answers using their custom data."
    Custom Models Drive Competitive Advantage
    One of the key advantages of NVIDIA AI Foundry is its ability to address the unique challenges faced by enterprises in adopting AI. Generic AI models can fall short of meeting specific business needs and data security requirements. Custom AI models, on the other hand, offer superior flexibility, adaptability and performance, making them ideal for enterprises seeking to gain a competitive edge.
    Learn more about how NVIDIA AI Foundry allows enterprises to boost productivity and innovation.
  • AI, Go Fetch! New NVIDIA NeMo Retriever Microservices Boost LLM Accuracy and Throughput
    blogs.nvidia.com
    Generative AI applications have little, or sometimes negative, value without accuracy, and accuracy is rooted in data.
    To help developers efficiently fetch the best proprietary data to generate knowledgeable responses for their AI applications, NVIDIA today announced four new NVIDIA NeMo Retriever NIM inference microservices.
    Combined with NVIDIA NIM inference microservices for the Llama 3.1 model collection, also announced today, NeMo Retriever NIM microservices enable enterprises to scale to agentic AI workflows, where AI applications operate accurately with minimal intervention or supervision, while delivering the highest-accuracy retrieval-augmented generation, or RAG.
    NeMo Retriever allows organizations to seamlessly connect custom models to diverse business data and deliver highly accurate responses for AI applications using RAG. In essence, the production-ready microservices enable highly accurate information retrieval for building highly accurate AI applications.
    For example, NeMo Retriever can boost model accuracy and throughput for developers creating AI agents and customer service chatbots, analyzing security vulnerabilities or extracting insights from complex supply chain information.
    NIM inference microservices enable high-performance, easy-to-use, enterprise-grade inferencing. And with NeMo Retriever NIM microservices, developers can benefit from all of this, superpowered by their data.
    These new NeMo Retriever embedding and reranking NIM microservices are now generally available:
    • NV-EmbedQA-E5-v5, a popular community base embedding model optimized for text question-answering retrieval
    • NV-EmbedQA-Mistral7B-v2, a popular multilingual community base model fine-tuned for text embedding for high-accuracy question answering
    • Snowflake-Arctic-Embed-L, an optimized community model, and
    • NV-RerankQA-Mistral4B-v3, a popular community base model fine-tuned for text reranking for high-accuracy question answering.
    They join the collection of NIM microservices easily accessible through the NVIDIA API catalog.
    Embedding and Reranking Models
    NeMo Retriever NIM microservices comprise two model types, embedding and reranking, with open and commercial offerings that ensure transparency and reliability.
    Example RAG pipeline using NVIDIA NIM microservices for Llama 3.1 and NeMo Retriever embedding and reranking NIM microservices for a customer service AI chatbot application.
    An embedding model transforms diverse data such as text, images, charts and video into numerical vectors, stored in a vector database, while capturing their meaning and nuance. Embedding models are fast and computationally less expensive than traditional large language models, or LLMs.
    A reranking model ingests data and a query, then scores the data according to its relevance to the query. Such models offer significant accuracy improvements while being computationally complex and slower than embedding models.
    NeMo Retriever provides the best of both worlds. By casting a wide net of data to be retrieved with an embedding NIM, then using a reranking NIM to trim the results for relevancy, developers tapping NeMo Retriever can build a pipeline that ensures the most helpful, accurate results for their enterprise.
    With NeMo Retriever, developers get access to state-of-the-art open, commercial models for building text Q&A retrieval pipelines that provide the highest accuracy.
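    To make the two-stage idea concrete, here is a minimal sketch of the retrieve-then-rerank pattern in generic C++. EmbedQuery and RerankScore are deliberately crude placeholders standing in for calls to an embedding service and a reranking service; they are not NVIDIA APIs, and the helper names are hypothetical.

```cpp
// Sketch: cast a wide net with a cheap embedding-similarity search,
// then rerank only the shortlisted candidates with a costlier scorer.
#include <algorithm>
#include <cctype>
#include <cmath>
#include <functional>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

struct Document { std::string text; std::vector<float> embedding; };

// Placeholder: a real pipeline would call the embedding model/service here.
std::vector<float> EmbedQuery(const std::string& text) {
    std::vector<float> v(26, 0.f);               // crude letter-frequency "embedding"
    for (char c : text)
        if (std::isalpha(static_cast<unsigned char>(c)))
            v[std::tolower(static_cast<unsigned char>(c)) - 'a'] += 1.f;
    return v;
}

// Placeholder: a real pipeline would call the reranking model/service here.
float RerankScore(const std::string& query, const std::string& doc) {
    float score = 0.f;                           // crude character-overlap "reranker"
    for (char c : query)
        if (doc.find(c) != std::string::npos) score += 1.f;
    return score;
}

float CosineSimilarity(const std::vector<float>& a, const std::vector<float>& b) {
    float dot = 0.f, na = 0.f, nb = 0.f;
    for (size_t i = 0; i < a.size() && i < b.size(); ++i) {
        dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i];
    }
    return dot / (std::sqrt(na) * std::sqrt(nb) + 1e-8f);
}

std::vector<Document> Retrieve(const std::string& query,
                               const std::vector<Document>& corpus,
                               size_t wideK, size_t finalK) {
    // Stage 1 (embedding): score every document cheaply, keep the top wideK.
    const std::vector<float> q = EmbedQuery(query);
    std::vector<std::pair<float, size_t>> candidates;
    for (size_t i = 0; i < corpus.size(); ++i)
        candidates.emplace_back(CosineSimilarity(q, corpus[i].embedding), i);
    wideK = std::min(wideK, candidates.size());
    std::partial_sort(candidates.begin(), candidates.begin() + wideK,
                      candidates.end(), std::greater<>());
    candidates.resize(wideK);

    // Stage 2 (reranking): re-score only the shortlist, keep the top finalK.
    for (auto& [score, index] : candidates)
        score = RerankScore(query, corpus[index].text);
    std::sort(candidates.begin(), candidates.end(), std::greater<>());

    std::vector<Document> results;
    for (size_t i = 0; i < std::min(finalK, candidates.size()); ++i)
        results.push_back(corpus[candidates[i].second]);
    return results;
}

int main() {
    std::vector<Document> corpus = {
        {"GPU programming basics", {}},
        {"Cooking pasta at home", {}},
        {"CUDA kernel tuning tips", {}},
    };
    for (Document& d : corpus) d.embedding = EmbedQuery(d.text);
    for (const Document& d : Retrieve("gpu tuning", corpus, /*wideK=*/2, /*finalK=*/1))
        std::cout << d.text << "\n";
}
```

    The design point is that the cheap embedding pass touches the whole corpus while the expensive reranking pass only ever sees wideK candidates, which is what lets such a pipeline stay fast without giving up the reranker's accuracy on the final results.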
    When compared with alternate models, NeMo Retriever NIM microservices provided 30% fewer inaccurate answers for enterprise question answering.
    Comparison of NeMo Retriever embedding NIM and embedding plus reranking NIM microservices' performance versus lexical search and an alternative embedder.
    Top Use Cases
    From RAG and AI agent solutions to data-driven analytics and more, NeMo Retriever powers a wide range of AI applications.
    The microservices can be used to build intelligent chatbots that provide accurate, context-aware responses. They can help analyze vast amounts of data to identify security vulnerabilities. They can assist in extracting insights from complex supply chain information. And they can boost AI-enabled retail shopping advisors that offer natural, personalized shopping experiences, among other tasks.
    NVIDIA AI workflows for these use cases provide an easy, supported starting point for developing generative AI-powered technologies.
    Dozens of NVIDIA data platform partners are working with NeMo Retriever NIM microservices to boost their AI models' accuracy and throughput.
    DataStax has integrated NeMo Retriever embedding NIM microservices in its Astra DB and Hyper-Converged platforms, enabling the company to bring accurate, generative AI-enhanced RAG capabilities to customers with faster time to market.
    Cohesity will integrate NVIDIA NeMo Retriever microservices with its AI product, Cohesity Gaia, to help customers put their data to work to power insightful, transformative generative AI applications through RAG.
    Kinetica will use NVIDIA NeMo Retriever to develop LLM agents that can interact with complex networks in natural language to respond more quickly to outages or breaches, turning insights into immediate action.
    NetApp is collaborating with NVIDIA to connect NeMo Retriever microservices to exabytes of data on its intelligent data infrastructure. Every NetApp ONTAP customer will be able to seamlessly talk to their data to access proprietary business insights without having to compromise the security or privacy of their data.
    NVIDIA global system integrator partners including Accenture, Deloitte, Infosys, LTTS, Tata Consultancy Services, Tech Mahindra and Wipro, as well as service delivery partners Data Monsters, EXLService (Ireland) Limited, Latentview, Quantiphi, Slalom, SoftServe and Tredence, are developing services to help enterprises add NeMo Retriever NIM microservices into their AI pipelines.
    Use With Other NIM Microservices
    NeMo Retriever NIM microservices can be used with NVIDIA Riva NIM microservices, which supercharge speech AI applications across industries, enhancing customer service and enlivening digital humans.
    New models that will soon be available as Riva NIM microservices include: FastPitch and HiFi-GAN for text-to-speech applications; Megatron for multilingual neural machine translation; and the record-breaking NVIDIA Parakeet family of models for automatic speech recognition.
    NVIDIA NIM microservices can be used all together or separately, offering developers a modular approach to building AI applications. In addition, the microservices can be integrated with community models, NVIDIA models or users' custom models in the cloud, on premises or in hybrid environments, providing developers with further flexibility.
    NVIDIA NIM microservices are available at ai.nvidia.com.
    Enterprises can deploy AI applications in production with NIM through the NVIDIA AI Enterprise software platform.
    NIM microservices can run on customers' preferred accelerated infrastructure, including cloud instances from Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure, as well as NVIDIA-Certified Systems from global server manufacturing partners including Cisco, Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro.
    NVIDIA Developer Program members will soon be able to access NIM for free for research, development and testing on their preferred infrastructure.
    Learn more about the latest in generative AI and accelerated computing by joining NVIDIA at SIGGRAPH, the premier computer graphics conference, running July 28-Aug. 1 in Denver.
    See notice regarding software product information.