Sustainable Strides: How AI and Accelerated Computing Are Driving Energy Efficiency
blogs.nvidia.com

AI and accelerated computing, twin engines NVIDIA continuously improves, are delivering energy efficiency for many industries, and the wider community is starting to acknowledge that progress.

"Even if the predictions that data centers will soon account for 4% of global energy consumption become a reality, AI is having a major impact on reducing the remaining 96% of energy consumption," said a report from Lisbon Council Research, a nonprofit formed in 2003 that studies economic and social issues.

The article from the Brussels-based research group is among a handful of big-picture AI policy studies starting to emerge. It uses Italy's Leonardo supercomputer, accelerated with nearly 14,000 NVIDIA GPUs, as an example of a system advancing work in fields from automobile design and drug discovery to weather forecasting.

Energy-efficiency gains over time for the most efficient supercomputer on the TOP500 list. Source: TOP500.org

Why Accelerated Computing Is Sustainable Computing

Accelerated computing uses the parallel processing of NVIDIA GPUs to do more work in less time. As a result, it consumes less energy than general-purpose servers that employ CPUs built to handle one task at a time. That's why accelerated computing is sustainable computing.

Accelerated systems use parallel processing on GPUs to do more work in less time, consuming less energy than CPUs.

The gains are even greater when accelerated systems apply AI, an inherently parallel form of computing that's the most transformative technology of our time. "When it comes to frontier applications like machine learning or deep learning, the performance of GPUs is an order of magnitude better than that of CPUs," the report said.

By transitioning from CPU-only operations to GPU-accelerated systems, HPC and AI workloads can save over 40 terawatt-hours of energy annually, equivalent to the electricity needs of nearly 5 million U.S. homes. NVIDIA offers a combination of GPUs, CPUs and DPUs tailored to maximize energy efficiency with accelerated computing.

User Experiences With Accelerated AI

Users worldwide are documenting energy-efficiency gains with AI and accelerated computing.

In financial services, Murex, a Paris-based company with a trading and risk-management platform used daily by more than 60,000 people, tested the NVIDIA Grace Hopper Superchip. On its workloads, the CPU-GPU combo delivered a 4x reduction in energy consumption and a 7x reduction in time to completion compared with CPU-only systems (see chart below).

"On risk calculations, Grace is not only the fastest processor, but also far more power-efficient, making green IT a reality in the trading world," said Pierre Spatz, head of quantitative research at Murex.

In manufacturing, Taiwan-based Wistron built a digital copy of a room where NVIDIA DGX systems undergo thermal stress tests to improve operations at the site. It used NVIDIA Omniverse, a platform for industrial digitization, with a surrogate model, a version of AI that emulates simulations. The digital twin, linked to thousands of networked sensors, enabled Wistron to increase the facility's overall energy efficiency by up to 10%. That amounts to reducing electricity consumption by 120,000 kWh per year and carbon emissions by 60,000 kilograms.

Up to 80% Fewer Carbon Emissions

The RAPIDS Accelerator for Apache Spark can reduce the carbon footprint for data analytics, a widely used form of machine learning, by as much as 80%, while delivering 5x average speedups and 4x reductions in computing costs, according to a recent benchmark. Thousands of companies, about 80% of the Fortune 500, use Apache Spark to analyze their growing mountains of data. Companies using NVIDIA's Spark accelerator include Adobe, AT&T and the U.S. Internal Revenue Service.

In healthcare, Insilico Medicine discovered, and put into phase 2 clinical trials, a drug candidate for a relatively rare respiratory disease, thanks to its NVIDIA-powered AI platform. Using traditional methods, the work would have cost more than $400 million and taken up to six years. With generative AI, Insilico hit the milestone for one-tenth of the cost in one-third of the time.

"This is a significant milestone not only for us, but for everyone in the field of AI-accelerated drug discovery," said Alex Zhavoronkov, CEO of Insilico Medicine.

This is just a sampling of the results that users of accelerated computing and AI are pursuing at companies such as Amgen, BMW, Foxconn, PayPal and many more.

Speeding Science With Accelerated AI

In basic research, the National Energy Research Scientific Computing Center (NERSC), the U.S. Department of Energy's lead facility for open science, measured results on a server with four NVIDIA A100 Tensor Core GPUs compared with dual-socket x86 CPU servers across four of its key high-performance computing and AI applications. Researchers found that the apps, when accelerated with the NVIDIA A100 GPUs, saw energy efficiency rise 5x on average (see below). One application, for weather forecasting, logged gains of nearly 10x.

Scientists and researchers worldwide depend on AI and accelerated computing to achieve high performance and efficiency. In a recent ranking of the world's most energy-efficient supercomputers, known as the Green500, NVIDIA-powered systems swept the top six spots and took 40 of the top 50.

Underestimated Energy Savings

The many gains across industries and science are sometimes overlooked in forecasts that extrapolate only the energy consumption of training the largest AI models. That misses the benefits delivered over most of an AI model's life, when it's consuming relatively little energy while achieving the kinds of efficiencies users described above.

In an analysis citing dozens of sources, a recent study debunked projections based only on training models as misleading and inflated. "Just as the early predictions about the energy footprints of e-commerce and video streaming ultimately proved to be exaggerated, so too will those estimates about AI likely be wrong," said the report from the Information Technology and Innovation Foundation (ITIF), a Washington-based think tank. The report notes that as much as 90% of the cost, and all the efficiency gains, of running an AI model come from deploying it in applications after it's trained.

"Given the enormous opportunities to use AI to benefit the economy and society, including transitioning to a low-carbon future, it is imperative that policymakers and the media do a better job of vetting the claims they entertain about AI's environmental impact," said the report's author, who described his findings in a recent podcast.

Others Cite AI's Energy Benefits

Policy analysts from the R Street Institute, also in Washington, D.C., agreed. "Rather than a pause, policymakers need to help realize the potential for gains from AI," the group wrote in a 1,200-word article. "Accelerated computing and the rise of AI hold great promise for the future, with significant societal benefits in terms of economic growth and social welfare," it said, citing demonstrated benefits of AI in drug discovery, banking, stock trading and insurance. AI can make the electric grid, manufacturing and transportation sectors more efficient, it added.

AI Supports Sustainability Efforts

The reports also cited the potential of accelerated AI to fight climate change and promote sustainability. "AI can enhance the accuracy of weather modeling to improve public safety as well as generate more accurate predictions of crop yields. The power of AI can also contribute to developing more precise climate models," R Street said. The Lisbon report added that AI plays a crucial role in the innovation needed to address climate change, for work such as discovering more efficient battery materials.

How AI Can Help the Environment

ITIF called on governments to adopt AI as a tool in efforts to decarbonize their operations. Public and private organizations are already applying NVIDIA AI to protect coral reefs, improve tracking of wildfires and extreme weather, and enhance sustainable agriculture. For its part, NVIDIA is working with hundreds of startups employing AI to address climate issues. NVIDIA also announced plans for Earth-2, expected to be the world's most powerful AI supercomputer dedicated to climate science.

Enhancing Energy Efficiency Across the Stack

Since its founding in 1993, NVIDIA has worked on energy efficiency across all its products: GPUs, CPUs, DPUs, networks, systems and software, as well as platforms such as Omniverse.

In AI, the brunt of a model's life is spent in inference, delivering insights that help users achieve new efficiencies. The NVIDIA GB200 Grace Blackwell Superchip has demonstrated 25x energy efficiency over the prior NVIDIA Hopper GPU generation in AI inference. Over the last eight years, NVIDIA GPUs have advanced 45,000x in their energy efficiency running large language models (see chart below). Recent software innovations include TensorRT-LLM, which can help GPUs cut the energy consumption of LLM inference by 3x.

Here's an eye-popping stat: if the efficiency of cars improved as much as NVIDIA has advanced the efficiency of AI on its accelerated computing platform, cars would get 280,000 miles per gallon. That means you could drive to the moon on less than a gallon of gas. The analysis applies NVIDIA's 10,000x efficiency gain in AI training and inference from 2016 to 2025 to the fuel efficiency of cars (see chart below).

How the big AI efficiency leap from the NVIDIA P100 GPU to the NVIDIA Grace Blackwell compares to car fuel-efficiency gains.

Driving Data Center Efficiency

NVIDIA delivers many optimizations through system-level innovations. For example, NVIDIA BlueField-3 DPUs can reduce power consumption by up to 30% by offloading essential data center networking and infrastructure functions from less efficient CPUs.

Last year, NVIDIA received a $5 million grant from the U.S. Department of Energy, the largest of 15 grants from a pool of more than 100 applications, to design a new liquid-cooling technology for data centers. It will run 20% more efficiently than today's air-cooled approaches and has a smaller carbon footprint. These are just some of the ways NVIDIA contributes to the energy efficiency of data centers.

Data centers are among the most efficient users of energy and one of the largest consumers of renewable energy. The ITIF report notes that between 2010 and 2018, global data centers experienced a 550% increase in compute instances and a 2,400% increase in storage capacity, but only a 6% increase in energy use, thanks to improvements across hardware and software.

NVIDIA continues to drive energy efficiency for accelerated AI, helping users in science, government and industry accelerate their journeys toward sustainable computing. Try NVIDIA's energy-efficiency calculator to find ways to improve energy efficiency. And check out NVIDIA's sustainable computing site and corporate sustainability report for more information.
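The arithmetic behind the car analogy is easy to check. A minimal sketch, assuming a 28 mpg baseline (roughly a typical car) and an average Earth-to-moon distance of about 238,900 miles; only the 10,000x gain and the 280,000 mpg figure come from the article:

```python
# Hypothetical check of the fuel-efficiency analogy above.
BASELINE_MPG = 28           # assumed typical car fuel economy
EFFICIENCY_GAIN = 10_000    # NVIDIA's stated AI efficiency gain, 2016-2025
MOON_DISTANCE_MI = 238_900  # approximate average Earth-moon distance

improved_mpg = BASELINE_MPG * EFFICIENCY_GAIN      # 280,000 mpg, as in the article
gallons_to_moon = MOON_DISTANCE_MI / improved_mpg  # comes out under one gallon

print(improved_mpg, round(gallons_to_moon, 2))  # 280000 0.85
```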
Byte-Sized Courses: NVIDIA Offers Self-Paced Career Development in AI and Data Science
blogs.nvidia.com

AI has seen unprecedented growth, spurring the need for new training and education resources for students and industry professionals. NVIDIA's latest on-demand webinar, Essential Training and Tips to Accelerate Your Career in AI, featured a panel discussion with industry experts on fostering career growth and learning in AI and other advanced technologies. Over 1,800 attendees gained insights on how to kick-start their careers and use NVIDIA's technologies and resources to accelerate their professional development.

Opportunities in AI

AI's impact is touching nearly every industry, presenting new career opportunities for professionals of all backgrounds. Lauren Silveira, a university recruiting program manager at NVIDIA, challenged attendees to take their unique education and experience and apply it in the AI field.

"You don't have to work directly in AI to impact the industry," said Silveira. "I knew I wouldn't be a doctor or engineer, that wasn't in my career path, but I could create opportunities for those that wanted to pursue those dreams."

Kevin McFall, a principal instructor for the NVIDIA Deep Learning Institute, offered some advice for those looking to navigate a career in AI and advanced technologies but finding themselves overwhelmed or unsure of where to start. "Don't try to do it all by yourself," he said. "Don't get focused on building everything from scratch. The best skill that you can have is being able to take pieces of code or inspiration from different resources and plug them together to make a whole."

A main takeaway from the panelists was that students and industry professionals can significantly enhance their capabilities by leveraging tools and resources in addition to their networks. Every individual can access a variety of free software development kits, community resources and specialized courses in areas like robotics, CUDA and OpenUSD through the NVIDIA Developer Program. Additionally, they can kick off projects with the CUDA code sample library and explore specialized guides such as A Simple Guide to Deploying Generative AI With NVIDIA NIM.

Spinning a Network

Staying up to date in the rapidly expanding technology industry involves more than just keeping up with the latest education and certifications. Sabrina Koumoin, a senior software engineer at NVIDIA, spoke on the importance of networking. She believes people can find like-minded peers and mentors to gain inspiration from by sharing their personal learning journeys or projects on social platforms like LinkedIn.

A self-taught coder, Koumoin also advocates for active engagement and education accessibility. Outside of work, she has hosted multiple coding bootcamps for people looking to break into tech. "It's a way to show that learning technical skills can be engaging, not intimidating," she said.

David Ajoku, founder and CEO at Demystifyd and Aware.ai, also emphasized the importance of using LinkedIn to build connections, demonstrate key accomplishments and show passion. He outlined a three-step strategy to enhance your LinkedIn presence, designed to help you stand out, gain deeper insights into your preferred companies and boldly share your aspirations and interests:

Think about a company you'd like to work for and what draws you to it.
Research it thoroughly, focusing on its main activities, mission and goals.
Be bold: create a series of posts informing your network about your career journey and what advancements interest you at the chosen company.

One attendee asked how AI might evolve over the next decade and what skills professionals should focus on to stay relevant. Louis Stewart, head of strategic initiatives at NVIDIA, replied that crafting a personal narrative and growth journey is just as important as ensuring certifications and skills are up to date. "Be intentional and purposeful. Have an end in mind," he said. "That's how you connect with future potential companies and people. It's a skill you have to develop to stay ahead."

Deep Dive Into Learning

NVIDIA offers a variety of programs and resources to equip the next generation of AI professionals with the skills and training needed to excel in a career in AI. NVIDIA's AI Learning Essentials is designed to give individuals the knowledge, skills and certifications they need to be prepared for the workforce and the fast-moving field of AI. It includes free access to self-paced introductory courses and webinars on topics such as generative AI, retrieval-augmented generation (RAG) and CUDA.

The NVIDIA Deep Learning Institute (DLI) provides a diverse range of resources, including learning materials, self-paced and live trainings, and educator programs spanning AI, accelerated computing and data science, graphics simulation and more. It also offers technical workshops for students currently enrolled in universities. DLI provides comprehensive training for generative AI, RAG, NVIDIA NIM inference microservices and large language models. Offerings also include certifications for generative AI LLMs and generative AI multimodal that help learners showcase their expertise and stand out from the crowd.

Get started with AI Learning Essentials, the NVIDIA Deep Learning Institute and on-demand resources.
Magnetic Marvels: NVIDIA's Supercomputers Spin a Quantum Tale
blogs.nvidia.com

Research published earlier this month in the science journal Nature used NVIDIA-powered supercomputers to validate a pathway toward the commercialization of quantum computing. The research, led by Nobel laureate Giorgio Parisi and Massimo Bernaschi, director of technology at the National Research Council of Italy and a CUDA Fellow, focuses on quantum annealing, a method that may one day tackle complex optimization problems that are extraordinarily challenging for conventional computers.

To conduct their research, the team used 2 million GPU computing hours at the Leonardo facility (Cineca, in Bologna, Italy), nearly 160,000 GPU computing hours on the Meluxina-GPU cluster in Luxembourg, and 10,000 GPU hours from the Spanish Supercomputing Network. Additionally, they accessed the Dariah cluster in Lecce, Italy. They used these state-of-the-art resources to simulate the behavior of a certain kind of quantum computing system known as a quantum annealer.

Quantum computers fundamentally rethink how information is computed to enable entirely new solutions. Unlike classical computers, which process information in binary 0s and 1s, quantum computers use quantum bits, or qubits, that allow information to be processed in entirely new ways. Quantum annealers are a special type of quantum computer that, though not universally useful, may have advantages for solving certain types of optimization problems.

The paper, "The Quantum Transition of the Two-Dimensional Ising Spin Glass," represents a significant step in understanding the phase transition (a change in the properties of a quantum system) of the Ising spin glass, a disordered magnetic material in a two-dimensional plane, a critical problem in computational physics. The paper addresses how the properties of magnetic particles arranged in a two-dimensional plane can abruptly change their behavior.

The study also shows how GPU-powered systems play a key role in developing approaches to quantum computing. GPU-accelerated simulations allow researchers to understand the behavior of the complex systems involved in developing quantum computers, illuminating the most promising paths forward.

Quantum annealers, like the systems developed by the pioneering quantum computing company D-Wave, operate by methodically decreasing a magnetic field that is applied to a set of magnetically susceptible particles. When strong enough, the applied field will align the magnetic orientation of the particles, similar to how iron filings uniformly stand to attention near a bar magnet. If the strength of the field is varied slowly enough, the magnetic particles will arrange themselves to minimize the energy of the final arrangement.

Finding this stable, minimum-energy state is crucial in a particularly complex and disordered magnetic system known as a spin glass, since quantum annealers can encode certain kinds of problems into the spin glass's minimum-energy configuration. Finding the stable arrangement of the spin glass then solves the problem. Understanding these systems helps scientists develop better algorithms for solving difficult problems by mimicking how nature deals with complexity and disorder. That's crucial for advancing quantum annealing and its applications in solving extremely difficult computational problems that currently have no known efficient solution, problems that are pervasive in fields ranging from logistics to cryptography.

Unlike gate-model quantum computers, which operate by applying a sequence of quantum gates, quantum annealers allow a quantum system to evolve freely in time. A quantum annealer is not a universal computer, a device capable of performing any computation given sufficient time and resources, but it may have advantages for solving particular sets of optimization problems in application areas such as vehicle routing, portfolio optimization and protein folding.

Through extensive simulations performed on NVIDIA GPUs, the researchers learned how key parameters of the spin glasses making up quantum annealers change during their operation, allowing a better understanding of how to use these systems to achieve a quantum speedup on important problems.

Much of the work for this groundbreaking paper was first presented at NVIDIA's GTC 2024 technology conference. Read the full paper and learn more about NVIDIA's work in quantum computing.
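The energy-minimization process described above has a classical cousin, simulated annealing, that can be sketched in a few lines. The sketch below is an illustration only: a thermal Metropolis anneal of a tiny 2D spin glass with random ±1 couplings (an Edwards-Anderson-style model). It is not the quantum simulation the paper performs, and the lattice size and temperature schedule are arbitrary choices:

```python
import math
import random

random.seed(1)
L = 8  # side of the L x L lattice, with periodic boundary conditions

# Spin glass disorder: random +/-1 couplings on nearest-neighbor bonds.
J_right = [[random.choice([-1, 1]) for _ in range(L)] for _ in range(L)]
J_down  = [[random.choice([-1, 1]) for _ in range(L)] for _ in range(L)]
spins   = [[random.choice([-1, 1]) for _ in range(L)] for _ in range(L)]

def local_field(i, j):
    """Sum of coupling * neighbor spin around site (i, j)."""
    return (J_right[i][j] * spins[i][(j + 1) % L]
            + J_right[i][(j - 1) % L] * spins[i][(j - 1) % L]
            + J_down[i][j] * spins[(i + 1) % L][j]
            + J_down[(i - 1) % L][j] * spins[(i - 1) % L][j])

def energy():
    """Total energy H = -sum over bonds of J_ij * s_i * s_j."""
    return -sum(spins[i][j] * (J_right[i][j] * spins[i][(j + 1) % L]
                               + J_down[i][j] * spins[(i + 1) % L][j])
                for i in range(L) for j in range(L))

start_energy = energy()
T = 3.0
while T > 0.05:  # slowly lower the temperature, the annealing schedule
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        dE = 2 * spins[i][j] * local_field(i, j)  # energy cost of flipping s_ij
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[i][j] *= -1  # Metropolis acceptance rule
    T *= 0.95

print(energy() < start_energy)  # the schedule settles into a low-energy state
```

A real quantum annealer replaces the temperature schedule with a slowly decreasing transverse field, which is exactly the regime whose phase transition the paper studies.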
Mistral AI and NVIDIA Unveil Mistral NeMo 12B, a Cutting-Edge Enterprise AI Model
blogs.nvidia.com

Mistral AI and NVIDIA today released a new state-of-the-art language model, Mistral NeMo 12B, that developers can easily customize and deploy for enterprise applications supporting chatbots, multilingual tasks, coding and summarization. By combining Mistral AI's expertise in training data with NVIDIA's optimized hardware and software ecosystem, the Mistral NeMo model offers high performance for diverse applications.

"We are fortunate to collaborate with the NVIDIA team, leveraging their top-tier hardware and software," said Guillaume Lample, cofounder and chief scientist of Mistral AI. "Together, we have developed a model with unprecedented accuracy, flexibility, high efficiency and enterprise-grade support and security thanks to NVIDIA AI Enterprise deployment."

Mistral NeMo was trained on the NVIDIA DGX Cloud AI platform, which offers dedicated, scalable access to the latest NVIDIA architecture. NVIDIA TensorRT-LLM, for accelerated inference performance on large language models, and the NVIDIA NeMo development platform, for building custom generative AI models, were also used to advance and optimize the process. This collaboration underscores NVIDIA's commitment to supporting the model-builder ecosystem.

Delivering Unprecedented Accuracy, Flexibility and Efficiency

Excelling in multi-turn conversations, math, common-sense reasoning, world knowledge and coding, this enterprise-grade AI model delivers precise, reliable performance across diverse tasks. With a 128K context length, Mistral NeMo processes extensive and complex information more coherently and accurately, ensuring contextually relevant outputs. That means the model learns tasks better and handles diverse scenarios more effectively, making it ideal for enterprise use cases.

Mistral NeMo is a 12-billion-parameter model released under the Apache 2.0 license, which fosters innovation and supports the broader AI community. Additionally, the model uses the FP8 data format for inference, which reduces memory size and speeds deployment without degrading accuracy.

Mistral NeMo comes packaged as an NVIDIA NIM inference microservice, offering performance-optimized inference with NVIDIA TensorRT-LLM engines. This containerized format allows for easy deployment anywhere, providing enhanced flexibility for various applications. As a result, models can be deployed anywhere in minutes, rather than several days.

NIM features enterprise-grade software that's part of NVIDIA AI Enterprise, with dedicated feature branches, rigorous validation processes, and enterprise-grade security and support. It includes comprehensive support, direct access to an NVIDIA AI expert and defined service-level agreements, delivering reliable and consistent performance. The open model license allows enterprises to integrate Mistral NeMo into commercial applications seamlessly.

Designed to fit in the memory of a single NVIDIA L40S, NVIDIA GeForce RTX 4090 or NVIDIA RTX 4500 GPU, the Mistral NeMo NIM offers high efficiency, low compute cost, and enhanced security and privacy.

Advanced Model Development and Customization

The combined expertise of Mistral AI and NVIDIA engineers has optimized training and inference for Mistral NeMo. Trained with Mistral AI's expertise, especially in multilinguality, code and multi-turn content, the model benefits from accelerated training on NVIDIA's full stack. It's designed for optimal performance, utilizing efficient model-parallelism techniques, scalability and mixed precision with Megatron-LM. The model was trained using Megatron-LM, part of NVIDIA NeMo, with 3,072 H100 80GB Tensor Core GPUs on DGX Cloud, which is composed of NVIDIA AI architecture, including accelerated computing, network fabric and software to increase training efficiency.

Availability and Deployment

With the flexibility to run anywhere (cloud, data center or RTX workstation), Mistral NeMo is ready to revolutionize AI applications across various platforms. Experience Mistral NeMo as an NVIDIA NIM today via ai.nvidia.com, with a downloadable NIM coming soon.

See notice regarding software product information.
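A rough back-of-the-envelope suggests why FP8 matters for fitting a 12B-parameter model on a single GPU. This sketch is illustrative only: it counts weight memory alone, ignoring the KV cache, activations and engine overhead, and the 24 GiB comparison figure is the RTX 4090's published VRAM size, not something stated in the article:

```python
PARAMS = 12e9  # Mistral NeMo parameter count

def weight_gib(bytes_per_param):
    """Approximate weight memory in GiB for a given numeric format."""
    return PARAMS * bytes_per_param / 2**30

fp16_gib = weight_gib(2)  # 16-bit weights: ~22.4 GiB, barely fits in 24 GiB
fp8_gib  = weight_gib(1)  # FP8 weights: ~11.2 GiB, leaves headroom for the KV cache

print(round(fp16_gib, 1), round(fp8_gib, 1))  # 22.4 11.2
```

Halving bytes per parameter roughly halves the resident weight memory, which is why FP8 inference is what makes the 24 GiB class of GPUs practical for a model of this size.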
Hot Deal, Cool Prices: GeForce NOW Summer Sale Offers Priority and Ultimate Memberships Half Off
blogs.nvidia.com

It's time for a sweet treat: the GeForce NOW Summer Sale offers high-performance cloud gaming at half off for a limited time. And starting today, gamers can directly access supported PC games on GeForce NOW via Xbox.com game pages, enabling them to get into their favorite Xbox PC games even faster. It all comes with nine new games joining the cloud this week.

We Halve a Deal

Unlock the power of cloud gaming with GeForce NOW's sizzling summer sale. Take advantage of a special new discount: one-month and six-month GeForce NOW Priority or Ultimate memberships are now 50% off until Aug. 18. It's perfect for members wanting to level up their gaming experience, or for those looking to try GeForce NOW for the first time, to access and stream an ever-growing library of over 1,900 games with top-notch performance.

Priority members enjoy more benefits over free users, including faster access to gaming servers and gaming sessions of up to six hours. They can also stream beautifully ray-traced graphics across multiple devices with RTX ON for the most immersive experience in supported games.

For those looking for top-notch performance, the Ultimate tier provides members with exclusive access to servers and the ability to stream at up to 4K resolution and 120 frames per second, or up to 240 fps, even without upgraded hardware. Ultimate members get all the same benefits as GeForce RTX 40 Series GPU owners, including NVIDIA DLSS 3 for the smoothest frame rates and NVIDIA Reflex for the lowest-latency streaming from the cloud.

Strike while it's hot: this scorching summer sale ends soon.

Path of the Goddess

Rinse and repeat. Capcom's latest release, Kunitsu-Gami: Path of the Goddess, is a unique Japanese-inspired, single-player Kagura Action Strategy game. The game takes place on a mountain covered in defilement. During the day, purify the villages and prepare for sundown. During the night, protect the Maiden against the hordes of the Seethe. Repeat the day-and-night cycle until the mountain has been cleansed of defilement and peace has returned to the land.

Walk the path of the goddess in the cloud with extended gaming sessions for Ultimate and Priority members. Ultimate members can also enjoy seeing supernatural and human worlds collide in ultrawide resolutions for an even more immersive experience.

Slay New Games

Having a holiday in Hinterberg. In Dungeons of Hinterberg from Microbird Games, play as Luisa, a burnt-out law trainee taking a break from her fast-paced corporate life. Explore the beautiful alpine village of Hinterberg armed with just a sword and a tourist guide, and uncover the magic hidden within its dungeons. Master magic, solve puzzles and slay monsters, all from the cloud.

Check out the list of new games this week:

The Crust (New release on Steam, July 15)
Gestalt: Steam & Cinder (New release on Steam, July 16)
Nobody Wants to Die (New release on Steam, July 17)
Dungeons of Hinterberg (New release on Steam and Xbox, available on PC Game Pass, July 18)
Flintlock: The Siege of Dawn (New release on Steam and Xbox, available on PC Game Pass, July 18)
Norland (New release on Steam, July 18)
Kunitsu-Gami: Path of the Goddess (New release on Steam, July 19)
Content Warning (Steam)
Crime Boss: Rockay City (Steam)

What are you planning to play this weekend? Let us know on X or in the comments below.

"Come sale away this summer." NVIDIA GeForce NOW (@NVIDIAGFN), July 17, 2024
TONAL SHIFT BRINGS A MORE CINEMATIC LOOK TO HALO SEASON 2www.vfxvoice.comBy TREVOR HOGGImages courtesy of Paramount+There is an influx of video game adaptations, with Paramount+ entering into the fray with the second season of Halo, where the United Nations Space Command battles an alliance of alien races determined to eliminate the human race known as the Covenant. Joining the military sci-fi series, based on the first-person shooter developed by Bungie Studios and 343 Industries, is Showrunner David Wiener, who provided a different narrative focus for the eight episodes.Concept art of Aleria, which is an impoverished Outer Colony located in the Elduros System.You would not believe how much attention was paid to the color of the visors. It poses many challenges. For starters, visors are reflective, so every close-up for us means painting out the camera crew. But at the same time, with the tint and coating that the visors got, quite often the actors couldnt wear the helmets with the visors on. later on in post we had to add them.Wojciech Zielinski, Visual Effects SupervisorAn iconic visual element is the gold-colored visor worn by the UNSC Marine Corps led by the Master Chief. You would not believe how much attention was paid to the color of the visors, states Visual Effects Supervisor Wojciech Zielinski. It poses many challenges. For starters, visors are reflective, so every close-up for us means painting out the camera crew. At the same time, with the tint and coating that the visors got, quite often the actors couldnt wear the helmets with the visors on. 
They are performing in different environments and lighting conditions, and quite often for safety the visors had to be removed; later on in post we had to add them.Concept art of the landing bay at Camp Currahee, which became an entirely CG build.A side view of a Corvette.The Flood Seed is teased at the end of Season 2.The Light Bridge located at Forerunner City.Atmospherics such as smoke and mist play a heavy role in battle sequences, which means that there has to be tight relationship between stunts, special effects and visual effects. A big proponent of the show is having practical elements as much as possible; however, there is a fine line, Zielinski notes. We always have to figure out what is the right level for the visual effects. We prefer to add more atmosphere or smoke in post. I like to have a base level that we match into; it was quite an organic process. It is tricky to maintain the same level throughout the shots, so later on we had to make sure that the level was consistent in post. The density of smoke heightened the dramatic tension. Quite often, our goal was to have the Jackals [a grouping of the Kig-Yar race] covered in smoke to add more character and something lurking in the smoke. Sometimes, the story literally calls for just a silhouette, but still we believe that the audience will understand, see and read the intention from the storyline. The Jackals make use of lethal energy shields. The shields almost look the same as the ones from Season 1. This is something that we didnt want to play with in Season 2. The look of the shields is close to what they look like in the game. We want to make sure that the gamers appreciate how much has been retained from the game, Zielinski remarks.Conceptualizing the look of a Sanctuary village.A big proponent of show is having practical elements as much as possible; however, there is a fine line. We always have to figure out what is the right level for the visual effects. 
We prefer to add more atmosphere or smoke in post. I like to have a base level that we match into; it was quite an organic process. It is tricky to maintain the same level throughout the shots, so later on we had to make sure that the level was consistent in post.
Wojciech Zielinski, Visual Effects Supervisor
Season 2 commences six months after the narrative of Season 1. You wouldn't see that much of a technological change; however, because of the approach of new Showrunner David Wiener, Season 2 had a slight tonal shift, Zielinski notes. The story is more grounded and centered around the characters. You want to make sure that all of the assets that we inherited from Season 1 will work with what David wanted to achieve. Therefore, we did play with the look development of the creatures. Along with the Jackals are the parasitic alien lifeforms known as the Flood. It is a tease because this is a big character in the game, but literally towards the end of Season 2 we are introducing the Flood, which will hopefully be further developed in Season 3. It's like a fungus that takes over your body and transforms into a combat form. We sourced the design of the Flood from the game, then modified it a little bit to make it feel more alien and organic, Zielinski describes.
A key visual element is the virtual companions.
The undercity of Reach City, which becomes the target of a Covenant attack.
Significantly involved with the production of Halo is Microsoft, under the leadership of Kiki Wolfkill, who serves as an Executive Producer, and 343 Industries, which provided game references. In general, all of the environments and creatures were either conceptualized by the Production Designer [James Foster] or the visual effects team, Zielinski states. Because of the new tonal style of the show, we're trying to employ a more cinematic look.
It is important that every time we're with the actors, the modern, dynamic camera moves found in the games never end up being utilized, because they don't fit the aesthetic of the show. A look book was created by the art department. For every visual effect, we created our own concept art as well. In addition, all of those dynamic sequences were previsualized, and creature battles were rehearsed by the stunt team and directors before we even got to the camera. Fight choreography was assisted by stuntvis. It's a collaboration between the director, the stunt coordinator, whoever was the second unit director at the time, and the visual effects department. There are so many moving pieces, but it all starts with the story. It's a heavy collaboration within a limited time, Zielinski states.
A different approach was utilized for Cortana, which relied upon plate photography to get the desired interaction and effect.
The [Jackals' lethal energy] shields look almost the same as the ones from Season 1. This is something that we didn't want to play with in Season 2. The look of the shields is close to what they look like in the game. We want to make sure that the gamers appreciate how much has been retained from the game.
Wojciech Zielinski, Visual Effects Supervisor
More than 2,600 visual effects shots were created over a period of 14 months for the eight episodes. We used some of the vendors working on Season 1, but at the same time we had to find a few others with a skillset that works well for the requirements of the sequence, Zielinski observes. Ultimately, we had 18 vendors, and that kept us busy! The heavy lifters were Luma Pictures, El Ranchito, Monkey Rave, Image Engine, Cúbica VFX and Rocket Science VFX. We had a few sequences where we had shared shots, which required significant coordination between the different vendors. Episode 204, which features the Battle of Reach, is the most action-packed.
We follow our heroes battling the Covenant forces attacking Reach City and trying to join the UNSC troops. What is interesting about the sequence are the so-called oners. We had multiple shots that are seamlessly stitched. Those had to be well-rehearsed and prepared to make sure that all of the stitches are working well. Sometimes, the story requires a complex stitch, meaning that we have to make sure that the end of one take matches as closely as possible to the beginning of the other take. It literally took us multiple takes to find the right stitch points.
An iconic element is the golden visors, which at times had to be replaced digitally.
Atmospherics had an important role in making the Jackals even more menacing adversaries during the fight sequences.
An effort was made to have partial sets built so that the actors could get the proper environmental interaction.
Part of the responsibility of the visual effects team was designing and executing the holographic UI for the spacecraft.
World-building consisted of hybrid and entirely CG environments, with 75% of Aleria being a practical set while Camp Currahee was fully digital. The Sanctuary sequence was shot in Iceland on a bluescreen stage, Zielinski explains. We're trying to source as much from the Iceland location and capture as much data as well, knowing later on in post we will be building or augmenting that environment. As far as Reach City goes, we created CG assets for all of the establishing shots. The asset had to be rebuilt from the ground up to accommodate the destruction of the city in Season 2. It all comes down to overplanning the design of the sequences so we actually know which part of the city will be destroyed and by what means; that makes our life much easier in post and ensures that we focus on the right parts of the asset.
A significant number of digital doubles had to be produced.
It was quite heavy for digital doubles, especially in the action moments in which we see the Spartans [mechanically and biologically enhanced elite soldiers] moving fast, Zielinski reveals. It was quite difficult for the actors and stunt team to perform those actions in those heavy suits. Quite often, when you see Spartans running, jumping and climbing mountains, those are digital doubles. With the crowd duplication, we can't always afford to have 3,000 extras, so you design the scene or shot with a limited number of extras. You place them close to the camera, then populate either the midground or background to set the right number for the shot.
More than 2,600 visual effects shots were created by 18 vendors over a period of 14 months for the eight episodes of Season 2.
[The Flood] is a big character in the game, but literally towards the end of Season 2 we are introducing the Flood, which will hopefully be further developed in Season 3. It's like a fungus that takes over your body and transforms into a combat form. We sourced the design of the Flood from the game, then modified it a little bit to make it feel more alien and organic.
Wojciech Zielinski, Visual Effects Supervisor
Then there were the virtual companions. The virtual companions employed the same technology that we used this season for Cortana, Zielinski states. We decided to change the approach from Season 1, where Cortana was a full CG character. This time, we applied a hybrid approach; in our case, the character was based on plate photography. We shot an actress interacting or performing with other actors. From the plate, we used the face of the actress, then augmented that with a digital body and CG hair.
For Cortana shots, we also built a special LED rig, which was integrated into her practical costume and gave us reactive light on the actor, the other actors and the sets.
Given the limited number of extras, digital crowd replication was critical in getting the necessary size and scope for shots.
The mandate was to go for a more grounded tone for the series.
Skies play a pivotal role in establishing the proper visual aesthetic for shots.
Bluescreen stages were mixed with real locations to create the various environments required for Season 2 of Halo.
There were times when full CG shots were required.
Space battles are a staple of sci-fi. It was quite exciting for all of us, Zielinski acknowledges. We were happy to finally see some space action. As you can imagine, space battles and space jumps are 99% a full CG creation. For the space battles, we had to redesign all of the fleets of the UNSC and Covenant forces. It has been quite a long process to figure out the choreography and geography of the space battles. On the screen it may seem straightforward, but actually it's quite complex. I wanted to make sure that the audience is not disorientated and understands the flow of the story in those shots.
CHECKING INTO HAZBIN HOTEL TO CHECK OUT THE ANIMATION
www.vfxvoice.com
By TREVOR HOGG
Images courtesy of Prime Video and A24.
Collaborating with a group of freelance animators and aided by financial support provided by Patreon, American animator, writer, director and producer Vivienne Medrano released a pilot episode for Hazbin Hotel, which revolves around Charlie Morningstar, the Princess of Hell, setting up a rehabilitation establishment for demons to avoid the yearly extermination imposed by Heaven, via her VivziePop YouTube channel. Contributing to the 109 million views over the past four years was A24, the independent entertainment company responsible for the Oscar-winning Everything Everywhere All at Once, which in turn got Amazon MGM Studios interested in producing a new pilot and seven more episodes to stream on Prime Video.
A challenging aspect of getting Vaggie to emote properly is the X placed over the left eye.
Hazbin Hotel is different in the sense that it came from a proof of concept that went viral and also had the benefit of a company like A24 that is risk-taking. It's definitely not easy to put into a box that exists in the adult animation world, and I'm excited because the adult animation world is starting to bloom into something different, and we're seeing more diversity in the shows.
Vivienne Medrano, Creator, Executive Producer, Showrunner and Writer, Hazbin Hotel
I had spent most of my career as a freelancer, so I haven't experienced the nitty-gritty of productions, states Vivienne Medrano, Creator, Executive Producer, Showrunner and Writer of Hazbin Hotel. I've mostly done visual development and things that never got to see the light of day. Hazbin Hotel is different in the sense that it came from a proof of concept that went viral and also had the benefit of a company like A24 that is risk-taking.
It's definitely not easy to put into a box that exists in the adult animation world, and I'm excited because the adult animation world is starting to bloom into something different, and we're seeing more diversity in the shows. Putting together a new pilot was complicated. We only had eight episodes and a tight timeframe to tell the story. I didn't want to waste an episode on redoing the original pilot. We had to get the same information across, re-establish the world and characters, but also tell a new story with new characters and villains, Medrano says.
Catering to Creator/Showrunner Vivienne Medrano's attachment to zanier, sillier characters is Niffty.
How Bruce Timm draws women, like Harley Quinn and Poison Ivy, the exaggerated shape language of Tim Burton, cartoonist Jhonen Vasquez, and classic Warner Bros. Animation and Disney were major influences. It was fun to explore how to find an identity for the show and the style, Medrano remarks. The other aspect of my style is that it's detailed with a lot of stripes; that's a Tim Burtonism! I like specific outfits and accessories. It's catered to my sensibilities. Angel Dust was simplified, while a different idea for Charlie was introduced, especially with her braids. Because it's a hand-drawn show, I wanted to take off some of the superfluous details that were in the [original] pilot. However, I wanted to maintain those iconic, striking designs. Also, there were changes I always wanted to make to begin with from the [original] pilot, but we were too far in to do that. Black and red dominate the color palette. The biggest challenge of the show for my art director, Sam Miller, is that it's a lot of red characters on numerous red backgrounds. It was a challenge to find the right balance and the tones of the reds that we use.
I'm trying to lean more into contrast in the second season, Medrano explains.
Given that Vox is, in essence, a TV screen, side profiles of his head were avoided.
Concept art for a billboard promoting politeness.
A Morningstar Family portrait painting.
I would red-line the change that I want; that's usually the easiest because they can take that and finalize it. I do that with character designs and every so often with backgrounds. For storyboards, I also direct, so I do thumbnails. I use a Wacom Cintiq tablet that you can draw on directly. I have a bigger one that I've worked on since college and a smaller version that I use often. You can have it on your lap and work. It's much more mobile.
Vivienne Medrano, Creator, Executive Producer, Showrunner and Writer, Hazbin Hotel
Revisions are articulated by drawing over the work of artists. I would red-line the change that I want; that's usually the easiest because they can take that and finalize it, Medrano states. I do that with character designs and every so often with backgrounds. For storyboards, I also direct, so I do thumbnails. I use a Wacom Cintiq tablet that you can draw on directly. I have a bigger one that I've worked on since college and a smaller version that I use often. You can have it on your lap and work. It's much more mobile. Adult animation has provided more of an opportunity to blend 2D and 3D techniques. The combination of 3D and 2D is cool. A great example is Arcane, where they could have the 3D style, but it still has an artistic layering of texture. However, all of the effects are done in 2D, like fire and smoke. 2D effects are the coolest looking. For Hazbin Hotel, we don't utilize a ton of 3D because it's outside the process and pipeline of being a 2D show. On my other series, Helluva Boss, we utilized 3D a couple of times, and it's fun to stylize the 3D to match the 2D world.
Logo designed for 666 News.
Collaborating on her third project with Medrano is Skye Henwood.
It's an across-the-ocean story of an online relationship working out, states Skye Henwood, Animation Director at SpindleHorse Toons. I needed a job, she had a job opening, and I joined her studio. We ended up in a studio chatroom, and it was like an instant click. Now, it's an Amazon show, and everybody knows the name. It's crazy! SpindleHorse Toons animated Episode 108, while Princess Bento was responsible for the rest of the series. It's all done digitally. Even the storyboards. We use Wacoms and work in Toon Boom Harmony. I've made like a million guides on how to do a basic shot. The stylization is easier in a way because the policy that Viv, Sam Miller and I have is, as long as it looks cool or appealing, that's it. It doesn't necessarily have to look like the model sheet. You can stretch Charlie, make her eyes big, and if it's cute or funny enough it's getting in. There is a lot of creative freedom. This show is for artists by artists, Henwood states.
A poster of Charlie with her showman father Lucifer performing in the background.
A color key for the Pentious Airship.
A color concept for the rebuild of the Hazbin Hotel.
It's all done digitally. Even the storyboards. We use Wacoms and work in Toon Boom Harmony. I've made like a million guides on how to do a basic shot. The stylization is easier in a way because the policy that Viv, Sam Miller and I have is, as long as it looks cool or appealing, that's it. It doesn't necessarily have to look like the model sheet. You can stretch Charlie, make her eyes big, and if it's cute or funny enough it's getting in. There is a lot of creative freedom. This show is for artists by artists.
Skye Henwood, Animation Director, SpindleHorse Toons
Storyboards and animatics are an essential part of the creative process. The strength of the boards is the strength of the series, Medrano notes.
I'm excited because for the second season we have some incredible artists joining and have more action, so there's more of a uniqueness to the technique of the boards. The poses of the characters are figured out in the storyboards. We don't usually invent many new ones from the boards. Thankfully, for Season 1 we had some fantastic board artists, in particular for the song sequences. Then my studio [SpindleHorse Toons] was able to do a little of the animation plus the boards. I'm glad that we got to do that because it heightened those moments. The songs are vital during the writing stage. The challenge for Season 1 was that I wanted the songs to be the length that they needed to be. Because we have a tight 22-minute run time, we had to work closely with the songwriters and figure out what part of the script was going to be music and what genre of music. What I love about musicals is that they go hand-in-hand with animation. The characters get to be really expressive, and the songs get to be bombastic and out there, Medrano observes.
A pentagram city map of Hell.
We have the music from the start, so we don't have to go back and change it, Henwood remarks. Dealing with a musical in animation is hard because musicals are beautiful and so out there and over-the-top. We have to get that Wow! feeling, and there are reasons why things are done differently in live-action and animation. We have to compromise and find how we can fit the wonders of a musical into 2D TV animation. We find a way to make the reason why the character sings believable. It's important for those musical moments to feel seamless when they start. Henwood animated the first shots for Season 1.
I wanted to make it simple enough that any studio could do it, yet also capture Viv's emotions and style of drawing, which is the hardest thing because we draw every frame ourselves, he says.
Deciding upon the color palette for the Gates of Heaven.
One of the many key props that had to be designed was Valentino's favorite gun.
Maintaining the proper poses is more difficult for some characters compared to others. There is a spider demon, and he has four arms, so when you're having someone cross their arms, what are his bottom arms doing? Henwood comments. You have to come up with that. If he is angry or sassy, maybe on his hips. You have to make sure that looks appealing. We don't want to hide his arms behind him. It's thought out. My favorite trivia is Vox. We have a rule that he can't be seen sideways because he's a flat-screen TV. You'll notice that even if Vox should turn around to leave a shot, his head will stay forward. The one who has changed the most and for the better is Angel Dust. In older iterations, he was more monstrous and had a poison skull on his chest. But as Viv was realizing this character, Angel Dust became this cutesy guy. My favorite part of all of the design changes is, right before we started animating, Viv let me have my opinion on how to simplify them. Jeremy Jordan has made clever vocal choices for Lucifer Morningstar. Jeremy gets into that booth and does the silliest little takes we have ever heard. My favorite thing he does is what we call the Lucifer wheeze. In Episode 105, Jeremy is coughing the name Charlie. It's really fun because that wasn't in the script, Henwood reveals.
A lot of detail went into not only the characters but also the background elements.
A significant visual challenge was having red characters placed against red backgrounds.
The strength of the boards is the strength of the series.
I'm excited because for the second season we have some incredible artists joining and have more action, so there's more of a uniqueness to the technique of the boards.
Vivienne Medrano, Creator, Executive Producer, Showrunner and Writer, Hazbin Hotel
The world-building for Hazbin Hotel has been a natural evolution. Something that I've started working on is a bible of all the information that has been established about this world and the rules that need to be maintained, Medrano says. It's challenging because I have my other series, Helluva Boss, which is a different side of this expansive world. We have to make sure that there is a consistency because our audience will notice and care. Adding life to the world is the background action. It's easier for the animators and less distracting to have a silhouetted character in the background; that is something stylistically I do a lot, Medrano adds. One of the changes from the original pilot is having an entirely different voice cast. I have the characters figured out and need to find the right voice. I have specific voices in mind. With Hazbin Hotel, we had the pilot where we established some voices that the audience was attached to. It was important to maintain a sense of cohesion between the two casts. When it came to casting for the new series, it was a re-audition for the original cast and new people. The actors we ended up going with were good at maintaining that original sound and vibe of the characters that I wanted and was attached to, but also brought with them the musicality and singing talent that was needed for the show. Once we had the final cast, everything locked together, and it was exactly what I had envisioned, but I am picky about casting.
A pastel color palette was devised for Heaven.
A number of characters are exaggerated representations of Medrano. Angel Dust is nothing like me, but I put a lot of personal trauma and experiences in him, Medrano states.
Charlie is a lot like me in the sense that she is a driven and determined person in a world that can be hard and naysaying. There is part of me in Vaggie when it comes to being more practical and reserved; she's more of a worrier and feisty. Like a lot of creators, I try to put in a little piece of the real me, but it also came from things that I feel add to the story or character tropes that I enjoy. Niffty and Lucifer are catered to my sensibilities because I like the zanier, sillier characters as well. Showcasing the most action and characters was Episode 108.
It was important to seamlessly integrate the musical numbers into the narrative.
There is no shortage of characters and action in the battle sequence that takes place in Episode 108.
Jeremy Jordan makes clever vocal choices for Lucifer Morningstar, such as having him wheeze.
The character designs were influenced by Bruce Timm's portrayal of women and the exaggerated shapes of Tim Burton.
A film that greatly influenced the animation of Hazbin Hotel was Cats Don't Dance.
A character that needed to be constantly checked by Animation Director Skye Henwood was Vox.
Niffty stars in A24's first venture into adult animation with Hazbin Hotel, which started off as a proof of concept on YouTube.
There is a shot where the camera is following characters through the battle, and it was composed of two different shots that seamlessly come together as one, Medrano remarks. We start with Angel Dust and Cherri Bomb, who jumps up and throws a bomb. Then the bomb explosion transitions to another character. It was a challenge in the sense that the hookup had to be specific. There's a lot of characters. There is a giant shield that had effects on it, so we had to make sure that the background was tracking and the camera was working with it while the effect was going on. That was a technical shot, but it turned out fantastic.