• UE5 Layered Material Workflow Breakdown
    www.facebook.com
    Daniel Cormino demonstrates his workflow using layered materials in Unreal Engine 5. https://adapt.one/editorial/link/188/UE5+Layered+Material+Workflow+Breakdown/
  • Disney – Coming in 2025
    www.facebook.com
    A preview of what to expect from Star Wars, Pixar, Marvel and Disney in 2025. https://adapt.one/editorial/link/187/Disney+%E2%80%93+Coming+in+2025/
  • Scene Group releases Cavalry 2.2
    www.cgchannel.com
    Monday, November 11th, 2024. Posted by Jim Thacker.

    https://www.cgchannel.com/wp-content/uploads/2024/02/240206_Cavalry2_particles.mp4

    Originally posted on 6 February 2024. Scroll down for news of the Cavalry 2.2 update.

    Scene Group has begun the next big series of releases for Cavalry, its motion design software. Cavalry 2.0 adds animatable scene cameras, making it possible to create 2.5D effects, plus an experimental new particle system, and increases scene playback speed by around 200%.

    A next-gen 2D motion graphics tool inspired by 3D software

    Originally released in 2020, Cavalry is a procedural animation app combining the power and flexibility of 3D with the ease of use of 2D. Although currently a pure 2D animation tool, it supports workflows that will be familiar to 3D animators, including keyframing, curve editing, deformation, rigging, scattering and instancing. Scene Group's background is also in 3D motion graphics: the firm is a spin-off from Mainframe North, which developed MASH, Maya's motion graphics toolset.

    Once created, images may be exported in a range of file formats, including as JPEG, PNG or SVG sequences, as animated PNGs, as WEBM or QuickTime movies, or in Lottie format.

    https://www.cgchannel.com/wp-content/uploads/2024/02/240206_Cavalry2_cameras.mp4

    Add a Camera to a scene to create 2.5D animations

    Major changes in Cavalry 2.0 include support for Cameras, making it possible to create 2.5D effects like the one above. Users can create Freeform or Look At cameras, with the option to offset the position of the camera and look-at target to create secondary motion, and to set view distance limits for layers.

    Experimental new particle system creates 2D particle effects

    Cavalry 2.0 also introduces an experimental new particle system for creating particle effects. It's still a tech preview, but it already includes a range of standard basic features, including settings for particle shape, and a range of emitter types and modifiers. Particles can be emitted from points, shapes, paths or Distributions, and it is possible to direct particle motion with paths, goals, forces or turbulence.

    Other new features and performance improvements

    Other new features in Cavalry 2.0 include a new Auto-Animate behavior for animating Shapes with fewer keyframes, and support for tapered strokes along Shapes. Workflow improvements include the option to set up overrides for Pre-Comps, making it easier to create variants of a composition. Users can also now group Layers into simplified custom containers called Components, controlling which Attributes are exposed in the UI.

    Performance improvements include boosts of 10-600% in playback speed: the improvement is greater in complex scenes, but Scene Group says that the average is around 200%. Cavalry also now supports background rendering, making it possible to continue working while a scene is rendering.

    https://www.cgchannel.com/wp-content/uploads/2024/02/240524_Cavalry21_tw.mp4

    Updated 23 May 2024: Scene Group has released Cavalry 2.1. The update focuses on the audio tools, adding support for multi-track audio playback, and the option to export audio from Cavalry. Audio projects can be exported as AAC files, or in MP4, QuickTime or WebM files. It is also possible to import audio files in more formats, now including AAC, MP3 and CAF.

    Updated 11 November 2024: Scene Group has released Cavalry 2.2. The biggest change in the update is support for OpenType fonts in the Text Shape, with the option to control OpenType features like ligatures and superscript procedurally. It is also possible to create color gradients along Strokes, and to add multiple Strokes to paths.

    Other changes include the option to fill closed paths with Stitches, new Sweep and Shape Falloff patterns, a new Quick Mask mode, and proportional easing when scaling keyframes. Users of the paid Pro edition also get a new Knot behavior, which automatically adds gaps to paths where they self-intersect, and a new Stroke Duplicator feature.

    Price and system requirements

    Cavalry 2.2 is available for Windows 10+ and macOS 12.0+. The full software is available rental-only, with subscriptions costing £192/year (around $245/year). The free edition caps renders at full HD resolution and lacks a number of advanced features.

    Read a full list of new features in Cavalry in the online release notes. Read an overview of the original Cavalry 2.0 update on Scene Group's blog. Have your say on this story by following CG Channel on Facebook, Instagram and X (formerly Twitter).
  • Voyager 2 Measured a Rare Anomaly When It Flew Past Uranus, Skewing Our Knowledge of the Planet for 40 Years, Study Suggests
    www.smithsonianmag.com
    In 1986, Voyager 2 took this image of Uranus during its flyby. NASA / JPL-Caltech

    In 1986, when NASA's Voyager 2 flew by the mysterious Uranus, it gave scientists their first close-up peek at the solar system's seventh planet. The discoveries from that singular visit still provide much of astronomers' modern understanding of the strange ice giant world. But now, a new study reveals Uranus was experiencing a rare solar wind event at the time of the flyby, suggesting the understanding that came from the Voyager 2 visit may have been skewed.

    In a paper published on Monday in Nature Astronomy, researchers argue that if the spacecraft had arrived at Uranus just a few days earlier, it would have discovered something else. "The spacecraft saw Uranus in conditions that only occur about 4 percent of the time," says Jamie Jasinski, a physicist at NASA's Jet Propulsion Laboratory and lead author of the study, in a statement from NASA.

    Those unusual conditions have to do with Uranus' magnetosphere, a planet's protective magnetic bubble that shields it from the solar wind. That 1986 visit encountered an empty magnetosphere around Uranus, oddly devoid of plasma. Astronomers concluded the planet was different from others in the solar system, but the new findings suggest its magnetosphere was just being squashed by a solar wind event that sent a stream of plasma and charged particles toward the planet.

    After traveling some 1.8 billion miles to reach Uranus 38 years ago, Voyager 2 gathered its data on the planet in less than six hours, discovering ten new moons and two rings alongside the void magnetosphere.

    This James Webb Space Telescope image of Uranus, released last December, shows nine of the planet's moons, which are named after characters in the works of William Shakespeare and Alexander Pope and often known as the "literary moons." NASA, ESA, CSA, STScI

    When Jasinski and his colleagues presented the new research this past summer, it was a surprise for Fran Bagenal, an astrophysicist at the University of Colorado, Boulder, who worked with the Voyager plasma science team, reports the New York Times' Jonathan O'Callaghan. "Why didn't we see this?" Bagenal tells the outlet. "I was kicking myself. It was completely out of the blue."

    Jasinski had always wondered about the results of the flyby, because it provided only a small peek into the planet, he told the Washington Post's Rachel Pannett in an email. Jasinski has experience with missions that orbited planets and observed changes over much longer periods of time, which led him to believe the conclusions about Uranus may have been flawed. "The extreme type of measurements Voyager 2 took always made me wonder if we just caught Uranus at a very specific moment in time," he tells the Washington Post.

    For scientists, learning more about magnetospheres helps reveal how different planets function. Using the knowledge from the 1986 flyby, astronomers had concluded that the missing plasma around Uranus also meant its moons were inactive. But the new research shows that might not be the case. If the missing plasma was indeed due to solar wind, which would have compressed the planet's magnetic bubble and driven plasma out, it allows for the possibility that Uranus' five major moons might indeed be geologically active.

    The solar wind event might also have affected the planet's radiation belts, regions with lots of energetic and charged particles, by infusing them with even more electrons. This would explain why Voyager 2's observations showed Uranus' radiation belts as some of the most intense in our solar system, second only to Jupiter's.

    An artist's conception of Uranus' usual magnetosphere (left) compared to how it behaved during the Voyager 2 flyby (right). NASA / JPL-Caltech

    Linda Spilker, a planetary scientist at NASA who was not involved in the new study, remembers being glued to the images from the 1986 flyby with anticipation and excitement. "The flyby was packed with surprises, and we were searching for an explanation of its unusual behavior. The magnetosphere Voyager 2 measured was only a snapshot in time," she says in the statement.

    NASA might soon expand its knowledge about Uranus with a mission to the planet, marked as a priority by scientists as part of the most recent Planetary Science and Astrobiology Decadal Survey. They recommended that NASA put a spacecraft into orbit around the mysterious planet and release a probe into its atmosphere to better understand the solar system's origin and evolution. "The Uranus system is one of the big blank spots that are left on our map," Francis Nimmo, a planetary scientist at the University of California, Santa Cruz, told Scientific American's Shannon Hall last year.

    "For now, this new work explains some of the apparent contradictions from the Voyager 2 flyby," Spilker adds in the NASA statement, "and it will change our view of Uranus once again."

    Filed Under: Astronomy, Astrophysics, James Webb Space Telescope, NASA, New Research, Outer Space, Planets, Solar System, Sun, Uranus
  • You can now run the most powerful open source AI models locally on Mac M4 computers, thanks to Exo Labs
    venturebeat.com
    To further support adoption of local AI solutions, Exo Labs is preparing to launch a free benchmarking website next week.
  • Microsoft interested in more M&A, Spencer says
    www.gamesindustry.biz
    "We're not going to grow the market with $1,000 consoles," Microsoft Gaming boss says as the firm looks at mobile. News by Marie Dealessandri, Deputy Editor. Published on Nov. 13, 2024.

    Microsoft still has an eye on potential games-related acquisitions, particularly in mobile or in places that could add "geographic diversity" to Xbox's portfolio. That's according to a Phil Spencer interview with Bloomberg, with the Microsoft Gaming boss adding: "We definitely want to be in the market, and when we can find teams and technology and capability that add to what we're trying to do in gaming at Microsoft, absolutely we will keep our heads up."

    He reportedly added that there aren't any "imminent" deals, though, and that large companies are likely not what the firm would be targeting, as it's still in transition following the ABK acquisition.

    Spencer also said that Microsoft is particularly interested in China, especially following its successful launch of Age of Empires Mobile in partnership with Tencent. "It's been a good area for us to learn from creative teams that have real unique capability," he said. "The real opportunity is to partner with creative teams in China for global."

    Still on the topic of mobile, Spencer said he's feeling "pretty good about where this industry is going," adding: "To reach new players, we need to be creative and adaptive of new business models, new devices, new ways of access. We're not going to grow the market with $1,000 consoles."

    He added that Xbox's mobile storefront, which was due to launch this summer, has been delayed to an unspecified date as the company further researches the market. Spencer also shared that Microsoft is looking at handhelds and working on hardware prototypes on that front, though anything more concrete is "a few years out" for Xbox.

    He touched upon the topic of Xbox games on Sony's and Nintendo's platforms as well, saying we should expect Microsoft to do more cross-platform launches of its IP going forward. "I do not see sort of red lines in our portfolio that say 'thou must not'," he told Bloomberg.

    Microsoft recently released its financial results for Q1 of its fiscal year 2025, showing strong growth in its gaming segment following the Activision acquisition, though Xbox hardware revenue is down.
  • Pixel phones will be able to detect and report malicious apps in real time
    www.theverge.com
    The new Play Protect feature is available on Pixel phones now and is rolling out to additional OEMs soon. By Allison Johnson. Nov 13, 2024, 9:13 PM UTC. Illustration: Alex Castro / The Verge

    Google is beefing up its malware detection with new protections designed to suss out ever-sneakier bad actors. Android's Google Play Protect service is getting an update called "live threat detection," which seeks out potentially harmful apps on your phone by analyzing app behavior and alerts you in real time if something looks fishy. The update was first announced at Google I/O earlier this year and is available now on Pixel 6 and newer phones. It should come to additional non-Pixel Android phones from Lenovo, OnePlus, Nothing, and Oppo, among others, in the coming months.

    Live threat detection targets particularly hard-to-spot malware apps that hide their intentions well. Rather than just scanning apps for malicious code when you download them, Play Protect will keep looking for signs of suspicious app behavior even after they're on your phone. This can help it spot malware that remains dormant at first and only later starts engaging in malicious activity. This detection takes place on-device, using an Android privacy infrastructure called Private Compute Core to help keep user data secure, and users will get real-time alerts to take action if needed.

    Google is rolling out another security feature today, too: scam call detection. Also announced at I/O, this feature uses on-device AI to analyze phone calls and look for signs that the caller is a scammer. If it spots suspicious conversational patterns or requests typical of scam attempts, it will flag the user and encourage them to end the call. It's only available to members of the Phone by Google app's beta program with a Pixel 6 or later (as of this morning, that program appears to be full) and will roll out to more Android phones in the future.
  • The Wall Street Journal is testing AI article summaries
    www.theverge.com
    The Wall Street Journal is experimenting with AI-generated article summaries that appear at the top of its news stories. The summaries appear as a "Key Points" box with bullets summarizing the piece. The Verge spotted the test on a story about Trump's plans for the Department of Education, and the Journal confirmed it's trialing the feature to see how readers respond.

    The Key Points box has a message explaining that an artificial intelligence tool created the summary and that the summary was checked by an editor. The box also points to a page about how the WSJ and Dow Jones Newswires use AI tools.

    The AI-generated Key Points from this WSJ article. Screenshot by Jay Peters / The Verge

    "We are always assessing new technologies and methods of storytelling to provide more value to our subscribers," Taneth Evans, head of digital at the WSJ, says in a statement to The Verge. "To that end, we are currently running a series of A/B tests to understand our users' needs with regards to summarization. The newsroom does this hand-in-hand with colleagues in technology and while speaking with readers at every step of the way. We also disclose how we leverage artificial intelligence tools to support our journalism whenever it's used."

    AI summaries have been spreading across news sites and platforms. USA Today owner Gannett has also experimented with adding AI-generated summaries to its articles; it's even using a similar Key Points format. Apps like Particle summarize articles using AI, too. Personally, I'd recommend reading full articles when you can, in case the AI tool hallucinates something that's incorrect.
  • Meta AI Researchers Introduce Mixture-of-Transformers (MoT): A Sparse Multi-Modal Transformer Architecture that Significantly Reduces Pretraining Computational Costs
    www.marktechpost.com
    Advancements in AI have paved the way for multi-modal foundation models that simultaneously process text, images, and speech under a unified framework. These models can potentially transform various applications, from content creation to seamless translation across media types, as they enable the generation and interpretation of complex data. However, achieving this requires immense computational resources, which creates a barrier to scaling and operational efficiency. Training these multi-modal systems is complex, as each modality, whether text, image, or audio, introduces unique challenges, requiring customized handling while maintaining cohesion within the model's framework. Balancing this level of diversity in data types has proven difficult in terms of both processing power and training efficiency.

    A primary issue in multi-modal AI research is that traditional language models are optimized for text, and extending them to incorporate images and audio requires substantial computational power. Large language models (LLMs) designed specifically for text-based tasks do not naturally integrate other modalities, due to the inherent differences in how each modality needs to be processed. For instance, a text model trained on trillions of tokens can only be extended to image and speech data at the cost of conflicts in the training dynamics. Consequently, the computational load escalates, with these models requiring up to five times the data and processing power of text-only models. Researchers therefore aim to find architectures that can accommodate these requirements without a proportional increase in resources.

    Various strategies currently address this need for computational efficiency in multi-modal models. One prominent approach is using sparse architectures, such as Mixture-of-Experts (MoE), which activates only specific parts of the model as needed. MoE operates by routing tokens to expert sub-networks that handle different aspects of the data, reducing the model's workload at any given moment. However, MoE has limitations, including instability caused by unbalanced expert utilization and difficulty managing training dynamics at scale. Furthermore, MoE's routing mechanism tends to focus on specific aspects of the data, often leading to an imbalance in training different modalities, thus requiring additional techniques to stabilize the process and maintain efficiency.

    Researchers at FAIR at Meta and Stanford University introduced a new architecture called Mixture-of-Transformers (MoT). MoT, built as a sparse, multi-modal transformer, reduces computational demands by incorporating modality-specific parameters. Unlike traditional dense models that rely on uniform processing, MoT uses distinct components for each modality (text, image, and speech), allowing for modality-specific optimization without requiring additional model components. For example, MoT assigns unique feed-forward networks, attention matrices, and normalization layers to each modality while maintaining a unified attention mechanism across the entire input data sequence, enhancing processing efficiency and output accuracy.

    The Mixture-of-Transformers framework leverages this sparse design by decoupling the model parameters according to modality, optimizing the training and inference phases. For instance, MoT separates text, image, and speech parameters during a multi-modal task, applying customized processing layers to each. This reduces the need for dense model layers that accommodate all modalities simultaneously. As a result, MoT achieves a balance of efficiency and effectiveness that traditional dense models lack. In tests involving text and image generation within the Chameleon 7B model, MoT delivered results comparable to dense baselines with only 55.8% of the FLOPs, and only 37.2% when integrating a third modality, such as speech. This efficiency gain translates to significant reductions in resource usage, which, in large-scale AI models, can lead to major cost savings.

    Mixture-of-Transformers showed notable improvements across multiple evaluation criteria. Compared to dense transformer models, the architecture reduced pretraining times for text and image tasks by over 40%. In the Chameleon setting, where the model processes text and images using autoregressive objectives, MoT reached the dense model's final validation loss using just 55.8% of the computational power. Furthermore, MoT accelerated training by reaching the dense models' image quality in 47.2% of the time and their text quality in 75.6% of the typical time. These efficiency gains were further confirmed in the Transfusion setting, where MoT matched dense baseline image performance while using only one-third of the FLOPs, proving its adaptability and resource efficiency in handling complex multi-modal data.

    The research offers several key takeaways, highlighting the potential of Mixture-of-Transformers to redefine multi-modal AI processing:

    • Efficient multi-modal processing: MoT matches dense model performance across text, image, and speech, achieving results with 37.2% to 55.8% of the computational resources.
    • Training acceleration: In the Chameleon model, MoT reduced training time for image tasks by 52.8% and text tasks by 24.4% while maintaining accuracy.
    • Adaptive scalability: MoT demonstrated high adaptability by effectively handling discrete and continuous tokens for multiple modalities without additional processing layers.
    • Resource reduction in real-time use: Performance evaluations on NVIDIA A100 GPUs showed MoT significantly reduced wall-clock training times, making it a viable option for real-time applications.

    In conclusion, Mixture-of-Transformers presents an innovative approach to multi-modal modeling by offering an efficient, scalable solution for integrating diverse data types within a single framework. Through a sparse architecture that leverages modality-specific processing, MoT significantly reduces computational load while delivering robust performance across various tasks. Check out the paper for full details; all credit for this research goes to the researchers of this project.
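    The core idea above, giving each modality its own attention projections and feed-forward weights while a single global attention mixes tokens across the interleaved sequence, can be sketched in a few lines of NumPy. This is an illustrative toy under stated assumptions, not the paper's implementation: the layer shapes, parameter names, and single-head attention here are all simplifications invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # model width (illustrative)
modalities = ["text", "image", "speech"]

# Modality-specific parameters: each modality owns its attention projections
# (Wq, Wk, Wv) and feed-forward weights (W1, W2), a toy stand-in for MoT's
# per-modality feed-forward networks, attention matrices, and norms.
params = {
    m: {k: rng.standard_normal((d, d)) * 0.1 for k in ("Wq", "Wk", "Wv", "W1", "W2")}
    for m in modalities
}

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mot_layer(tokens, labels):
    """tokens: (n, d) interleaved multi-modal sequence; labels[i] = modality of token i."""
    n = len(labels)
    q, k, v = (np.empty((n, d)) for _ in range(3))
    # Project each token with the parameters of its own modality...
    for i, m in enumerate(labels):
        p = params[m]
        q[i], k[i], v[i] = tokens[i] @ p["Wq"], tokens[i] @ p["Wk"], tokens[i] @ p["Wv"]
    # ...but attend globally over the full sequence, so modalities still interact.
    h = tokens + softmax(q @ k.T / np.sqrt(d)) @ v
    # Modality-specific feed-forward: route each token to its own FFN weights.
    out = np.empty_like(h)
    for i, m in enumerate(labels):
        p = params[m]
        out[i] = h[i] + np.maximum(h[i] @ p["W1"], 0.0) @ p["W2"]
    return out

seq = rng.standard_normal((6, d))
labels = ["text", "text", "image", "image", "speech", "text"]
y = mot_layer(seq, labels)
print(y.shape)  # (6, 8)
```

    The sparsity is in the parameters, not the attention pattern: every token still attends to every other token, but only the weights belonging to a token's modality are touched when processing it, which is what lets each modality be optimized separately without adding routing machinery like MoE's.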
  • Elon Musk's Own AI Flags Him as a Leading Misinformation Source on X
    towardsai.net
    November 13, 2024. Author(s): Get The Gist. Originally published on Towards AI.

    Plus: Nvidia is Building Japan's Most Advanced AI Supercomputer

    Welcome to Get The Gist, where every weekday we share an easy-to-read summary of the latest and greatest developments in AI news, innovations, and trends, all delivered in under 5 minutes! In today's edition:

    • Nvidia is Building Japan's Most Advanced AI Supercomputer
    • Google Nest Cameras Get Smarter with New Gemini AI Features
    • Grok Flags Musk as a Leading Misinformation Source on X
    • Amazon to Launch Its New AI Chip
    • And more AI news.

    Image by: Nvidia

    The Gist: SoftBank, in partnership with NVIDIA, is building Japan's most powerful AI supercomputer, aiming to lead in AI innovation, telecom, and industrial growth. This groundbreaking infrastructure promises new revenue streams and transformative applications across industries.

    Key Details:

    • SoftBank's AI supercomputer, based on NVIDIA's Blackwell platform, will be the most powerful in Japan, supporting AI development for research, universities, and businesses.
    • Using the NVIDIA AI Aerial platform, SoftBank has piloted the first AI-integrated 5G network, unlocking multi-billion-dollar revenue opportunities for telecom.
    • SoftBank's planned AI marketplace, powered by NVIDIA AI Enterprise, will provide secure, local AI services to industries, enabling growth in fields like healthcare, robotics, and transportation.

    Image by: Neowin

    The Gist: Starting next week, Google Nest cameras will roll out advanced AI. Read the full blog for free on Medium.