• Ah, the magical world of 3D printing! Who would have thought that the secrets of crafting quality cosplay props could be unlocked with just a printer and a little patience? It’s almost like we’re living in a sci-fi movie, but instead of flying cars and robot servants, we get to print our own Spider-Man masks and Thor's hammers. Because, let’s face it, who needs actual craftsmanship when you have a 3D printer and a dash of delusion?

    Picture this: You walk into a convention, proudly wearing your freshly printed Spider-Man mask—its edges rough and its colors a little off, reminiscent of the last time you tried your hand at a DIY project. You can almost hear the gasps of admiration from fellow cosplayers, or maybe that’s just them trying to suppress their laughter. But hey, you saved a ton of time with that “minimal post-processing”! Who knew that “minimal” could also mean “looks like it was chewed up by a printer that’s had one too many?”

    And let’s not forget about Thor’s hammer, Mjölnir. Because nothing says “God of Thunder” quite like a clunky piece of plastic that could double as a doorstop. The best part? You can claim it’s a unique interpretation of Asgardian craftsmanship. Who needs authenticity when you have the power of 3D printing? Just make sure to avoid any actual thunderstorms—after all, we wouldn’t want your new prop to melt in the rain, or worse, have it mistaken for a water gun!

    Now, if you’re worried about how long it takes to print your masterpiece, fear not! You can always get lost in the mesmerizing whirl of the print head, contemplating the deeper meaning of life while waiting for hours to see if your creation will actually resemble the image you downloaded from the internet. Spoiler alert: it probably won’t, but that’s part of the fun, right?

    Oh, and let’s not forget the joy of explaining to your friends that you “crafted” these pieces with care, while they’re blissfully unaware that you merely pressed a few buttons and hoped for the best. After all, why invest time in traditional crafting techniques when you can embrace the magic of technology?

    So, grab your 3D printer and let your imagination run wild! Who needs actual skills when you can print your dreams, layer by layer, with a side of mediocre results? Just remember, in the world of cosplay, it’s not about the journey; it’s about how many likes you can get on that Instagram post of you holding your half-finished Thor’s hammer like it’s the Holy Grail of cosplay.

    #3DPrinting #CosplayProps #SpiderMan #ThorsHammer #DIYDelusions
    How to 3D print cosplay props: From Spider-Man masks to Thor's hammer
    Start crafting quality cosplay props with minimal post-processing.
  • Harnessing Silhouette for Dramatic Storytelling

    Silhouette photography has the unique power to convey powerful emotions and dramatic narratives by emphasizing shape and form over details and textures. By creatively harnessing silhouettes, photographers can captivate viewers’ imaginations, prompting them to fill in unseen details and engage deeply with the story. Here’s how to master silhouettes to elevate your photographic storytelling.

    The Art of Silhouettes
    A silhouette is created when your subject is backlit, making the subject appear completely dark against a lighter background. Silhouettes rely heavily on strong outlines, instantly recognizable shapes, and clear gestures to tell compelling stories.
    Capturing the Perfect Silhouette
    Ideal Conditions

    Sunrise and sunset offer low-angle sunlight, providing ideal lighting conditions to create dramatic silhouettes.
    Artificial light sources like urban lights, windows, and doorways offer unique creative opportunities for silhouette photography.

    Camera Settings

    Adjust your exposure for the brightest part of the scene, usually the background, to render your subject as a dark silhouette (a short sketch of the stop arithmetic follows this list).
    Choose a narrower aperture (higher f-number) to maintain clear, sharp outlines, ensuring your silhouette remains distinct.
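
    To make the exposure logic concrete, here is a minimal Python sketch (not tied to any camera API; the luminance values are illustrative assumptions): it estimates how many stops separate a bright background from a darker subject, which is the gap that lets the subject fall to near-black when you expose for the background.

        from math import log2

        def stops_between(background_luminance: float, subject_luminance: float) -> float:
            """Difference in photographic stops (EV) between two average luminances."""
            return log2(background_luminance / subject_luminance)

        def renders_as_silhouette(background_luminance: float,
                                  subject_luminance: float,
                                  min_stops: float = 3.0) -> bool:
            """With exposure locked on the bright background, a subject roughly three
            or more stops darker will usually clip toward black and read as a silhouette."""
            return stops_between(background_luminance, subject_luminance) >= min_stops

        # Example: a sunset sky metering about 8x brighter than the subject (3 stops).
        print(renders_as_silhouette(background_luminance=200.0, subject_luminance=25.0))  # True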

    Composition Techniques for Dramatic Impact

    Select subjects with strong, recognizable shapes—human figures, animals, architecture, and trees often create compelling silhouettes.
    Encourage subjects to use clear gestures or dynamic poses to communicate emotion or action effectively.
    Utilize negative space to emphasize silhouettes, creating visual balance and directing viewers’ attention to the subject.

    Enhancing Storytelling through Silhouettes
    Silhouettes simplify your scene, focusing viewers’ attention entirely on the emotional or narrative essence of your image. Obscuring details introduces an element of mystery, inviting viewers to engage actively with your photograph. Silhouettes naturally evoke emotional responses, effectively conveying solitude, contemplation, love, or drama.
    Post-Processing Tips

    Enhance contrast and deepen blacks to emphasize your silhouette, strengthening its dramatic presence (a minimal post-processing sketch follows this list).
    Apply subtle color grading or tone adjustments to amplify mood—warmer tones evoke romance or nostalgia, while cooler tones suggest tranquility or melancholy.
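
    As a rough illustration of the contrast-and-blacks adjustment, here is a short Python sketch using Pillow (assumed to be installed; the file names and values are placeholders, not a prescribed workflow): it boosts global contrast and remaps the darkest tones to pure black so the silhouette separates cleanly from the background.

        from PIL import Image, ImageEnhance

        def emphasize_silhouette(path_in: str, path_out: str,
                                 contrast: float = 1.3, black_point: int = 30) -> None:
            img = Image.open(path_in).convert("RGB")
            img = ImageEnhance.Contrast(img).enhance(contrast)  # strengthen overall contrast
            # Deepen blacks: remap tones so anything at or below black_point becomes 0.
            lut = [max(0, round((v - black_point) * 255 / (255 - black_point))) for v in range(256)]
            img = img.point(lut * 3)  # apply the same curve to R, G, and B
            img.save(path_out)

        # emphasize_silhouette("silhouette_raw.jpg", "silhouette_final.jpg")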

    Creative Applications
    Silhouettes are versatile across many genres, including intimate portraits, dynamic street photography, and striking nature and wildlife imagery. Using silhouettes thoughtfully allows photographers to communicate powerful forms and actions, creating graphically strong and emotionally resonant photographs.
    Silhouettes offer photographers an exceptional tool for impactful storytelling, combining simplicity with emotional intensity. By mastering essential techniques, thoughtful composition, and creative execution, you can craft compelling visual narratives that resonate deeply with your audience. Explore silhouettes in your photography, and uncover the profound storytelling power hidden within shadows.
    Extended reading: Creating depth and drama with moody photography
    The post Harnessing Silhouette for Dramatic Storytelling appeared first on 500px.
  • I recommend the Pixel 9 to most people looking to upgrade - especially while it's $250 off

    ZDNET's key takeaways: The Pixel 9 is Google's latest baseline flagship phone, with prices starting at $800. It comes with the new Tensor G4 processor, an updated design, a bigger battery, and a slightly higher asking price. The hardware improvements over last year's model are relatively small.

    At Amazon, the 256GB Google Pixel 9 is on sale for $649, a $250 discount. This deal applies to all color options except Peony (pink).

    I had a chance to attend the Made by Google event back in August 2024, and after the keynote wrapped up, I was more excited to go hands-on with the baseline version of the Pixel 9 than the Pro or the Pro XL. Why? Because the Pixel 9's accessibility makes it a fascinating device, and one I recommend for a handful of reasons.

    Also: I changed 10 settings on my Pixel phone to significantly improve the user experience

    I'm spoiling this review right at the top, but it's true. Google's latest entry-level flagship, the Pixel 9, is here, with prices starting at $799. Even though its hardware is a minor improvement over the Pixel 8, it's an impressive phone overall. It offers a new design, slightly upgraded performance, slightly better cameras, a slightly bigger battery, and a host of new AI features.

    Google has positioned the Pixel 9 as the default Android alternative to the iPhone 16, partly because it looks like one. Google gave the entire Pixel 9 family flat sides with rounded corners, which makes it look like something from a design lab in Cupertino. The good news is that it makes these phones look and feel great.
    In fact, they're my favorite-looking Pixel phones yet. The Pixel 9 feels especially unique while still offering a premium feel that's blissfully cold to the touch when you pick it up. The sides are aluminum, while the front and back feature Corning Gorilla Glass Victus 2. The whole thing is IP68-rated for water and dust resistance, and it's just the right size for use with one hand.

    Another characteristic of the Pixel is its nice display, and the Pixel 9 definitely has one. It features a 6.3-inch Actua display that is a tenth of an inch bigger than the Pixel 8's. The sharp 2424x1080 resolution, OLED panel, and 120Hz dynamic refresh rate give the Pixel 9 exceptional visuals, whether you're just reading email or watching your favorite movie. This year, the screen can reach up to 2,700 nits of brightness, making it one of the brightest Android phones you can buy.

    Also: I replaced my Pixel 9 Pro XL with the 9a for a month - and it was pretty dang close

    Also, its performance feels better. Powered by the new Tensor G4 processor, 12GB of RAM, and 128GB or 256GB of storage, the Pixel 9 is a screamer. It's one of the most responsive Android phones I've used all year, and that's just with the standard version of this phone.

    The cameras are also impressive. Google kept the same 50MP main camera as last year but swapped the old 12MP ultra-wide for a new 48MP 123-degree camera. Photos are simply stunning on this phone, and Google's post-processing algorithms do a great job of retaining details and contrast. Video quality is also very good, especially with the company's Video Boost technology. This phone can easily rival any device that costs $200+ more.

    If there's a downside to the hardware, it's the inclusion of the lower-quality 10.5MP selfie camera, whereas the Pro phones get a new 42MP camera. There's also an extra telephoto camera on the Pro model, so you won't get the same zoom quality on the regular Pixel 9.

    Regarding this phone's AI features, Google has jammed quite a bit into the Pixel 9. Not only does it ship with the company's Gemini chatbot out of the box, but thanks to the Tensor G4 processor, it also comes with Gemini Live, so you can have real-life conversations with it.

    Also: I found a physical keyboard for my Pixel 9 Pro that isn't a joke

    It requires a Google One AI Premium plan, but you'll get one for free if you buy a Pixel 9. I've asked it numerous questions that were similar to web queries ("What's the best place to live near New York City that's relatively affordable," "How many stars are in the sky -- wait, in the galaxy?"), and it answered them all with ease -- even with speech interruptions. It's in the early stages, but it's exciting technology that could change how we use our phones.

    You also get features like Add Me, which allows you to take a picture of your friends, then have them take a picture of you in the same place, and merge the two so no one's left out. I played around with it during my testing, and it worked surprisingly well. There are also some nice updates to Magic Editor for framing your photos.

    Google also included two new AI-powered apps on the Pixel 9 series: Pixel Screenshots and Pixel Studio. With the former, you can organize your screenshots and search through them with AI prompts, allowing you to easily reference information like Wi-Fi passwords or recipes. Meanwhile, the latter lets you generate images on the fly and customize them with text, stickers, and other effects. I've enjoyed using both apps in my limited testing time, but I'll need to play with them over the long run to see whether they're worth it.

    Also: The best Google Pixel phones to buy in 2025

    I found battery life to be quite good. There's a 4,700mAh cell inside that can last all day on a charge and then some, which means you won't need to worry about this phone's battery after a long day. Google includes 45W charging support on the Pixel 9 series, which is awesome, but you'll need to buy a separate wall adapter to take advantage of it. In addition, there's 15W wireless charging (not Qi2, notably) and 5W reverse wireless charging called "Battery Share."

    ZDNET's buying advice: If your budget is $800, it's hard not to recommend Google's Pixel 9, especially while it's on sale at $250 off. Sure, the Samsung Galaxy S24 is a tough competitor, but I actually think this is the better buy. It gives you access to some useful new AI features, and you get all the perks of the Pixel experience, like excellent software, display quality, and cameras. The Pixel 9 Pro and Pixel 9 Pro XL models may be flashier, but the baseline version of Google's flagship phone should not be overlooked.

    This article was originally published on August 22, 2024, and was updated on June 6, 2025.

    What are the tariffs in the US? The recent US tariffs on imports from countries like China, Vietnam, and India aim to boost domestic manufacturing but are likely to drive up prices on consumer electronics. Products like smartphones, laptops, and TVs may become more expensive as companies rethink global supply chains and weigh the cost of shifting production.

    Smartphones are among the most affected by the new US tariffs, with devices imported from China and Vietnam facing steep duties that could raise retail prices by 20% or more. Brands like Apple and Google, which rely heavily on Asian manufacturing, may either pass these costs on to consumers or absorb them at the expense of profit margins. The tariffs could also lead to delays in product launches or shifts in where and how phones are made, forcing companies to diversify production to countries with more favorable trade conditions.
  • Crafting Atmospheric Images with Fog and Mist

    Fog and mist offer photographers an exceptional opportunity to create deeply atmospheric, moody, and mysterious imagery. By embracing these unique weather conditions, you can transform ordinary scenes into captivating visual stories filled with depth, emotion, and intrigue. Here’s how to effectively harness fog and mist to elevate your photography.

    Understanding the Appeal of Fog and Mist
    Fog and mist naturally diffuse light, softening contrasts and textures within a scene. This soft diffusion creates a dreamlike ambiance and adds emotional depth to photographs, often evoking feelings of tranquility, solitude, or mystery. By obscuring and revealing elements selectively, fog and mist invite viewers to engage deeply with the visual narrative, filling in unseen details with imagination.
    Ideal Conditions and Timing
    The most atmospheric fog and mist usually occur during early mornings or evenings, especially near bodies of water or in valleys and lowlands. Paying close attention to weather forecasts can help you predict ideal conditions. Early preparation and scouting locations ahead of time ensure you’re ready when the perfect atmospheric conditions arise.

    Composition Techniques for Foggy Scenes
    Creating impactful foggy compositions involves thoughtful techniques:

    Layers and Depth: Use the fog’s varying densities to emphasize depth. Layering foreground, midground, and background elements adds visual complexity and interest.
    Silhouettes and Shapes: Fog reduces detail, emphasizing strong shapes and silhouettes. Compose your images around distinctive shapes, trees, or structures to anchor your photograph.
    Simplify the Frame: Minimalism is especially effective in foggy conditions. Embrace simplicity by isolating single elements or subjects against misty backdrops for dramatic effect.

    Lighting and Exposure Considerations
    Fog significantly impacts exposure and lighting conditions:

    Soft Light: The diffused, gentle lighting conditions in fog and mist reduce harsh shadows, creating flattering, ethereal images.
    Exposure Compensation: Fog often tricks camera meters into underexposing scenes. Consider slightly increasing your exposure compensation to accurately capture the brightness and subtle details of foggy conditions (see the short sketch after this list).
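
    Since exposure compensation is counted in stops, the arithmetic is easy to sketch in Python (the shutter values below are illustrative, not tied to any particular camera): each +1 EV doubles the light reaching the sensor.

        def compensated_shutter(metered_shutter_s: float, ev_compensation: float) -> float:
            """Shutter time after applying ev_compensation stops to the metered
            exposure (positive values brighten, as foggy scenes usually need)."""
            return metered_shutter_s * (2 ** ev_compensation)

        # The meter suggests 1/500 s for a bright foggy scene; +1 EV lengthens it
        # to 1/250 s so the fog renders light and airy instead of muddy grey.
        print(f"{compensated_shutter(1/500, +1.0):.4f} s")  # 0.0040 s, i.e. 1/250 s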

    Creative Opportunities with Fog and Mist
    Fog and mist open diverse creative possibilities:

    Black-and-White Photography: Foggy conditions lend themselves exceptionally well to monochrome photography, highlighting contrasts, textures, and shapes dramatically.
    Color Tones and Mood: Color images in foggy conditions can carry gentle pastel tones or cool hues, enhancing the atmospheric and emotional impact of your imagery.

    Enhancing Foggy Images Through Post-Processing
    Post-processing can refine foggy scenes:

    Contrast and Clarity Adjustments: Fine-tune contrast and clarity subtly to maintain the softness and mood without losing important detail.
    Selective Sharpening: Apply selective sharpening to key elements or subjects, ensuring they stand out within the foggy environment without diminishing the atmospheric quality (a minimal sketch follows this list).
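
    To show what selective sharpening can look like in practice, here is a minimal Python sketch using Pillow (assumed to be installed; the file names and subject box are placeholders): it sharpens only a chosen region and feathers the mask so the surrounding fog stays soft.

        from PIL import Image, ImageFilter

        def sharpen_subject(path_in: str, path_out: str,
                            subject_box: tuple[int, int, int, int]) -> None:
            img = Image.open(path_in).convert("RGB")
            sharpened = img.filter(ImageFilter.UnsharpMask(radius=2, percent=120, threshold=3))
            # White over the subject, black elsewhere; a generous blur feathers the edge
            # so the transition back into the soft fog is invisible.
            mask = Image.new("L", img.size, 0)
            mask.paste(255, subject_box)
            mask = mask.filter(ImageFilter.GaussianBlur(radius=25))
            Image.composite(sharpened, img, mask).save(path_out)

        # sharpen_subject("foggy_scene.jpg", "foggy_scene_sharp.jpg", (400, 300, 900, 1100))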

    Fog and mist provide photographers with unique conditions to craft images rich in mood, narrative, and visual intrigue. By thoughtfully considering composition, timing, lighting, and post-processing, you can harness the power of fog and mist to produce atmospheric photography that deeply resonates with viewers. Embrace these ethereal elements, and transform everyday scenes into extraordinary visual stories.
    Extended reading: Alpine views: 12 breathtaking mountainscapes to celebrate the chilly season
    The post Crafting Atmospheric Images with Fog and Mist appeared first on 500px.
  • Why I recommend this OnePlus phone over the S25 Ultra - especially at this new low price

    ZDNET's key takeaways: The OnePlus 13 is a snappy, nearly no-compromise phone. A Snapdragon 8 Elite, paired with a 6,000mAh battery and 80W fast charging, is a recipe for endurance success. IP69 is almost excessive, but you'll appreciate it when least expected.

    Over at OnePlus' website, both OnePlus 13 models are on sale at a discount, and each purchase comes with a free gift. Options include a OnePlus Nord Buds 3 Pro and a Sandstone Magnetic Case.

    It's not often that I review a smartphone in the first few calendar weeks and feel confident in calling it a "Phone of the Year" contender. But when I tested the OnePlus 13 back in January, that's precisely what happened. Whether Google finally launches a Pixel Pro Fold with a flagship camera system this summer, or Apple releases a thinner iPhone in the fall, the OnePlus 13 will likely still be on my mind when the year-end nominations are due.

    Also: I changed 10 OnePlus phone settings to significantly improve the user experience

    There's a lot going for the latest flagship phone, from the more secure ultrasonic fingerprint sensor to the IP69 rating to the 6,000mAh Silicon NanoStack battery. It's also one of the first phones in North America to feature Qualcomm's new Snapdragon 8 Elite chip, which promises improvements to performance, efficiency, and AI workloads.

    I tested the OnePlus 13 alongside my iPhone 16 Pro Max and Google Pixel 9 Pro XL to see exactly how the Android phone stacked up against two of the best phones from 2024. In a few ways, the OnePlus 13 falls short, but in many ways, it puts the iPhone and Pixel to shame.

    When I first unboxed the OnePlus 13 and held it in my hand, my reaction was audible. Allow me to geek out here: The slightly curved glass, the slimness of the phone, and the overall appearance made my then-four-month-old iPhone look and feel outdated. It's as if OnePlus made the iPhone 17 Air before Apple did. However, what sells the OnePlus 13 design for me is the new Midnight Ocean color, which flaunts a vegan-leather backing that makes the phone visually distinctive and more comfortable to hold than its glass-only predecessors. The texture isn't as rough and grippy as actual leather, though, so I'd be interested in seeing how it ages over the year.

    If you were hoping the first major Android phone of 2025 would feature Qi2 wireless charging, I have good news and bad news. While the OnePlus 13 doesn't have an in-body Qi2 charging coil, meaning MagSafe accessories won't attach directly to the back of the device, OnePlus has embedded magnetic guides within its protective covers, enabling users to take advantage of those accessories so long as the OnePlus 13 is encased. It's a burdenless workaround, but one that hopefully won't be necessary with the next model. For what it's worth, since publishing this review, several other Android phones have been released, including the Samsung Galaxy S25 Ultra, Nothing Phone 3a Pro, and Motorola Razr Ultra -- none of which feature Qi2 wireless charging.

    For years, one aspect that's held OnePlus phones back is the water and dust resistance rating, or lack thereof. With the OnePlus 13, the company is finally taking a stronger stance on the endurance standard, certifying the phone with an IP69 rating. It's a step above the IP68 ratings we commonly see on competing devices, and allows the OnePlus 13 to withstand high-pressure, high-temperature water jets and humidity changes.

    Also: 5 habit trackers on Android that can reveal your patterns - and motivate you to change

    In practice, this means the OnePlus 13 can function properly even if you leave it in your washer and dryer, dishwasher, or a pot of boiling soup. The IP69 rating feels very much like a flex, but it's a benefit that users will appreciate when they least expect it.

    Powering the device is a Qualcomm Snapdragon 8 Elite chip that, from my months of usage, has some noticeable strengths and weaknesses. For day-to-day usage, such as bouncing between productivity apps, definitely not scrolling through TikTok, and taking photos and videos, the processor handles tasks gracefully. It helps that OxygenOS 15, based on the latest version of Android, has some of the smoothest animations I've seen on a phone.

    Also: I found a Bluetooth tracker for Android users that functions better than AirTags

    But once you fire up graphics-intensive applications like Adobe Premiere Rush and Honkai Star Rail, you'll notice some stuttering as the higher heat development leads to throttling performance. This isn't a dealbreaker, per se, as the nerfs are only apparent when you're using the device for a prolonged time.

    I've actually been using the OnePlus 13 quite liberally, as the 6,000mAh Silicon NanoStack battery has kept my review unit running for at least a day and a half per charge. That's unseen with any other mainstream phone in the US market, and I fully expect more manufacturers to adopt silicon batteries for their greater energy density. If not that, copy the 80W fast charging or 50W wireless charging; they're quite the revelation.

    On the camera front, the OnePlus 13, with its triple camera setup, has been a reliable shooter throughout most of my days. While the Sony LYT-808 sensor isn't on par with the one-inch sensors I've tested on international phones like the Xiaomi 15 Ultra, it does an excellent job of capturing details and finishing the output vividly. If you're a fan of sharp, bright, and slightly oversaturated imagery, then the OnePlus 13 will serve you well.

    Also: The best Android phones to buy in 2025

    Where the camera sensors fall short is in post-processing and AI-tuning features. For example, the phone leans heavily on computational photography to contextualize details when taking far-distance shots. This sometimes leads to images with an artificial, over-smoothing filter. But when the backend software works, it can reproduce details that you probably didn't think you'd capture in the first place.

    ZDNET's buying advice: The OnePlus 13 delivers some seriously good value for its starting price -- possibly the best of all the major flagship phones I've tested so far this year. The company has improved the device in almost every way, from the design to the performance to its accessory ecosystem. I just wish OnePlus offered more extensive software support, as the OnePlus 13 will only receive four years of Android OS updates and six years of security updates. Samsung, Google, and Apple offer at least seven years of OS support. If you can shoulder the shorter promise of longevity, this is one of the easiest phones for me to recommend right now.

    Why the OnePlus 13 gets an Editors' Choice award: We awarded the OnePlus 13 an Editors' Choice because it nails all the fundamentals of a great smartphone experience while leading the market in some regards, such as battery and charging, durability, and design. The specs this year are noticeably improved compared to its predecessor, the OnePlus 12, with a faster processor, lighter build, larger battery capacity, and a more capable camera system. Most importantly, the OnePlus 13 undercuts its closest competitors, like the Google Pixel 9 Pro XL and Samsung Galaxy S25 Ultra, on starting price.
    When will this deal expire? As per OnePlus, this offer will end on June 8, 2025. However, deals are subject to sell out or expire at any time, though ZDNET remains committed to finding, sharing, and updating the best product deals for you to score the best savings. Our team of experts regularly checks in on the deals we share to ensure they are still live and obtainable. We're sorry if you've missed out on a deal, but don't fret -- we constantly find new chances to save and share them with you on ZDNET.com.
    What are the tariffs in the US? The recent US tariffs on imports from countries like China, Vietnam, and India aim to boost domestic manufacturing but are likely to drive up prices on consumer electronics. Products like smartphones, laptops, and TVs may become more expensive as companies rethink global supply chains and weigh the cost of shifting production.Smartphones are among the most affected by the new US tariffs, with devices imported from China and Vietnam facing steep duties that could raise retail prices by 20% or more. Brands like Apple and Google, which rely heavily on Asian manufacturing, may either pass these costs on to consumers or absorb them at the expense of profit margins. The tariffs could also lead to delays in product launches or shifts in where and how phones are made, forcing companies to diversify production to countries with more favorable trade conditions.
    This story was originally published on January 7, 2025, and was updated on June 1, 2025, adding information for a new June discount.
  • The Legal Accountability of AI-Generated Deepfakes in Election Misinformation

    How Deepfakes Are Created

    Generative AI models enable the creation of highly realistic fake media. Most deepfakes today are produced by training deep neural networks on real images, video, or audio of a target person. The two predominant AI architectures are generative adversarial networks (GANs) and autoencoders. A GAN consists of a generator network that produces synthetic images and a discriminator network that tries to distinguish fakes from real data. Through iterative training, the generator learns to produce outputs that increasingly fool the discriminator¹. Autoencoder-based tools similarly learn to encode a target face and then decode it onto a source video. In practice, deepfake creators use accessible software: open-source tools like DeepFaceLab and FaceSwap dominate video face-swapping (one estimate suggests DeepFaceLab was used for over 95% of known deepfake videos)². Voice-cloning tools (often built on similar AI principles) can mimic a person's speech from minutes of audio. Commercial platforms like Synthesia allow text-to-video avatars (turning typed scripts into lifelike "spokespeople"), which have already been misused in disinformation campaigns³. Even mobile apps (e.g., FaceApp and Zao) let users do basic face swaps in minutes⁴. In short, advances in GANs and related models make deepfakes cheaper and easier to generate than ever.

    Diagram of a generative adversarial network (GAN): a generator network creates fake images from random input, and a discriminator network distinguishes fakes from real examples. Over time, the generator improves until its outputs "fool" the discriminator⁵.
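
    The adversarial loop the diagram describes can be sketched in a few lines of code. The example below is a minimal, illustrative sketch in Python using PyTorch (the article does not prescribe a framework); random tensors stand in for a dataset of real face images, and the tiny fully connected networks are placeholders for the much larger convolutional models used in practice.

```python
import torch
import torch.nn as nn

LATENT_DIM, IMG_PIXELS = 64, 32 * 32  # assumed sizes for this toy example

# Generator: noise vector in, flattened fake "image" out.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_PIXELS), nn.Tanh(),
)
# Discriminator: flattened image in, real-vs-fake logit out.
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1_000):
    real = torch.rand(16, IMG_PIXELS) * 2 - 1      # placeholder for real photos
    fake = generator(torch.randn(16, LATENT_DIM))  # synthetic batch

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(16, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(16, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(16, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```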

    During creation, a deepfake algorithm is typically trained on a large dataset of real images or audio from the target. The more varied and high-quality the training data, the more realistic the deepfake. The output then often undergoes post-processing (color adjustments, lip-syncing refinements) to enhance believability¹. Technical defenses focus on two fronts: detection and authentication. Detection uses AI models to spot inconsistencies (blinking irregularities, audio artifacts, or metadata mismatches) that betray a synthetic origin⁵. Authentication embeds markers before dissemination – for example, invisible watermarks or cryptographically signed metadata indicating authenticity⁶. The EU AI Act will soon mandate that major AI content providers embed machine-readable "watermark" signals in synthetic media⁷. However, as the GAO notes, detection is an arms race – even a marked deepfake can sometimes evade notice – and labels alone don't stop false narratives from spreading⁸⁹.
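
    To make the authentication idea concrete, here is a minimal sketch of signing media together with its provenance metadata so that later tampering can be detected. It uses only Python's standard library and a shared secret key, which is an assumption made for the example; production provenance systems (C2PA-style signed manifests, for instance) use public-key signatures and embed the credential inside the media file itself.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"publisher-signing-key"  # assumed shared secret, for illustration only

def sign_media(media_bytes: bytes, metadata: dict) -> dict:
    """Return a record binding the media bytes to its provenance metadata."""
    payload = json.dumps(metadata, sort_keys=True).encode() + media_bytes
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"metadata": metadata, "signature": tag}

def verify_media(media_bytes: bytes, record: dict) -> bool:
    """Recompute the signature; any edit to the media or metadata breaks it."""
    payload = json.dumps(record["metadata"], sort_keys=True).encode() + media_bytes
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

original = b"...raw image or video bytes..."
record = sign_media(original, {"source": "campaign press office", "date": "2024-10-01"})
print(verify_media(original, record))            # True: untouched since signing
print(verify_media(original + b"\x00", record))  # False: altered after signing
```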

    Deepfakes in Recent Elections: Examples

    Deepfakes and AI-generated imagery have already made headlines in election cycles around the world. In the 2024 U.S. primary season, a digitally altered audio robocall mimicked President Biden's voice urging Democrats not to vote in the New Hampshire primary. The caller ("Susan Anderson") was later fined $6 million by the FCC and indicted under existing telemarketing laws¹⁰¹¹. (Importantly, FCC rules on robocalls applied regardless of AI: the perpetrator could have used a voice actor or a recording instead.) Also in 2024, former President Trump posted on social media a collage implying that pop singer Taylor Swift endorsed his campaign, using AI-generated images of Swift in "Swifties for Trump" shirts¹². The posts sparked a media uproar, though analysts noted the same effect could have been achieved without AI (e.g., by photoshopping text onto real images)¹². Similarly, Elon Musk's X platform carried AI-generated clips, including a parody "ad" depicting Vice President Harris's voice via an AI clone¹³.

    Beyond the U.S., deepfake-like content has appeared globally. In Indonesia's 2024 presidential election, a video surfaced on social media in which a convincingly generated image of the late President Suharto appeared to endorse the candidate of the Golkar Party. Days later, the endorsed candidate (Suharto's son-in-law) won the presidency¹⁴. In Bangladesh, a viral deepfake video superimposed the face of opposition leader Rumeen Farhana onto a bikini-clad body – an incendiary fabrication designed to discredit her in the conservative Muslim-majority society¹⁵. Moldova's pro-Western President Maia Sandu has been repeatedly targeted by AI-driven disinformation; one deepfake video falsely showed her resigning and endorsing a Russian-friendly party, apparently to sow distrust in the electoral process¹⁶. Even in Taiwan (amid tensions with China), a TikTok clip circulated that synthetically portrayed a U.S. politician making foreign-policy statements – stoking confusion ahead of the Taiwanese elections¹⁷. In Slovakia's recent campaign, AI-generated audio mimicking the liberal party leader suggested he plotted vote-rigging and beer-price hikes – and it spread on social media just days before the election¹⁸. These examples show that deepfakes have touched diverse polities (from Bangladesh and Indonesia to Moldova, Slovakia, India, and beyond), often aiming to undermine candidates or confuse voters¹⁵¹⁸.

    Notably, many of the most viral "deepfakes" in 2024 circulated as obvious memes or claims rather than as subtle deceptions. Experts observed that outright undetectable AI deepfakes were relatively rare; more common were AI-generated memes plainly shared by partisans, or cheaply doctored "cheapfakes" made with basic editing tools¹³¹⁹. For instance, social media was awash with memes of Kamala Harris in Soviet garb or of Black Americans holding Trump signs¹³, but these were typically used satirically, not meant to be secretly believed. Nonetheless, even unsophisticated fakes can sway opinion: a U.S. study found that false presidential ads (not necessarily AI-made) did change voter attitudes in swing states. In sum, deepfakes are a real and growing phenomenon in election campaigns worldwide²⁰²¹ – a trend taken seriously by voters and regulators alike.

    U.S. Legal Framework and Accountability

    In the U.S., deepfake creators and distributors of election misinformation face a patchwork of tools, but no single comprehensive federal "deepfake law." Existing laws relevant to disinformation include statutes against impersonating government officials, electioneering rules (such as the Bipartisan Campaign Reform Act, which requires disclaimers on political ads), and targeted statutes like criminal electioneering communications. In some cases, ordinary laws have been stretched: the New Hampshire robocall case relied on the Telephone Consumer Protection Act and mail/telemarketing fraud provisions, resulting in the $6 million fine and a criminal charge. Similarly, voice impostors can potentially violate laws against "false advertising" or "unlawful corporate communications." However, these laws were enacted before AI, and litigators have warned that they often do not fit neatly. For example, deceptive deepfake claims not tied to a specific victim do not easily fit into defamation or privacy torts. Voter intimidation laws (which prohibit threats or coercion) also leave a gap for non-threatening falsehoods about voting logistics or endorsements.

    Recognizing these gaps, some courts and agencies are invoking other theories. The U.S. Department of Justice has recently charged individuals under broad fraud statutes (e.g., for a plot to impersonate an aide to swing votes in 2020), and state attorneys general have treated deepfake misinformation as interference with voting rights. Notably, the Federal Election Commission (FEC) is preparing to enforce new rules: in April 2024 it issued an advisory opinion limiting "non-candidate electioneering communications" that use falsified media, effectively requiring that political ads use only real images of the candidate. If finalized, that would make it unlawful for campaigns to pay for ads depicting a candidate saying things they never did. Similarly, the Federal Trade Commission (FTC) and the Department of Justice (DOJ) have signaled that purely commercial deepfakes could violate consumer protection or election laws (for example, through liability for mass false impersonation or for foreign-funded electioneering).

    U.S. Legislation and Proposals

    Federal lawmakers have proposed new statutes. The DEEPFAKES Accountability Act (H.R. 5586 in the 118th Congress) would, among other things, impose a disclosure requirement: political ads featuring a manipulated media likeness would need clear disclaimers identifying the content as synthetic. It would also increase penalties for producing false election videos or audio intended to influence the vote. While not yet enacted, supporters argue it would provide a uniform rule for all federal and state campaigns. The Brennan Center supports transparency requirements over outright bans, suggesting laws should narrowly target deceptive deepfakes in paid ads or certain categories (e.g., false claims about the time, place, or manner of voting) while carving out parody and news coverage.

    At the state level, over 20 states have passed deepfake laws specifically for elections. For example, Florida and California forbid distributing falsified audio/visual media of candidates with intent to deceive voters (though Florida's law exempts parody). Some states (like Texas) define "deepfake" in statute and allow candidates to sue violators or have violators' candidacies revoked. These measures have had mixed success: courts have struck down overly broad provisions that acted as prior restraints (e.g., Minnesota's 2023 law was challenged for threatening injunctions against anyone "reasonably believed" to violate it). Critically, these state laws raise First Amendment issues: political speech is highly protected, so any restriction must be tightly tailored. Texas and Virginia statutes are already under legal review, and Elon Musk's company has sued to have California's law (which requires platforms to label or block deepfakes) declared unconstitutional. In practice, most lawsuits so far have centered on defamation or intellectual property (for instance, a celebrity suing over a botched celebrity-deepfake video) rather than on election-focused statutes.

    Policy Recommendations: Balancing Integrity and Speech

    Given the rapidly evolving technology, experts recommend a multi-pronged approach. Most stress transparency and disclosure as core principles. For example, the Brennan Center urges requiring any political communication that uses AI-synthesized images or voice to include a clear label. This could be a digital watermark or a visible disclaimer. Transparency has two advantages: it forces campaigns and platforms to “own” the use of AI, and it alerts audiences to treat the content with skepticism.

    Outright bans on all deepfakes would likely violate free speech, but targeted bans on specific harms (e.g., automated phone calls impersonating voters, or videos claiming false polling information) may be defensible. Indeed, Florida already penalizes misuse of recordings in voter suppression. Another recommendation is limited liability: tying penalties to demonstrable intent to mislead, not to the mere act of content creation. Both U.S. federal proposals and EU law generally condition fines on the "appearance of fraud" or deception.

    Technical solutions can complement laws. Watermarking original media (as encouraged by the EU AI Act) could deter the reuse of authentic images in doctored fakes. Open tools for deepfake detection – some supported by government research grants – should be deployed by fact-checkers and social platforms. Making detection datasets publicly available (e.g., the MIT OpenDATATEST) helps improve the AI models that spot fakes. International cooperation is also urged: cross-border agreements on information-sharing could help trace and halt disinformation campaigns. The G7 and APEC have both recently committed to fighting election interference via AI, which may lead to joint norms or rapid-response teams.

    Ultimately, many analysts believe the strongest “cure” is a well-informed public: education campaigns to teach voters to question sensational media, and a robust independent press to debunk falsehoods swiftly. While the law can penalize the worst offenders, awareness and resilience in the electorate are crucial buffers against influence operations. As Georgia Tech’s Sean Parker quipped in 2019, “the real question is not if deepfakes will influence elections, but who will be empowered by the first effective one.” Thus policies should aim to deter malicious use without unduly chilling innovation or satire.

    References:

    https://www.security.org/resources/deepfake-statistics/
    https://www.wired.com/story/synthesia-ai-deepfakes-it-control-riparbelli/
    https://www.gao.gov/products/gao-24-107292
    https://technologyquotient.freshfields.com/post/102jb19/eu-ai-act-unpacked-8-new-rules-on-deepfakes
    https://knightcolumbia.org/blog/we-looked-at-78-election-deepfakes-political-misinformation-is-not-an-ai-problem
    https://www.npr.org/2024/12/21/nx-s1-5220301/deepfakes-memes-artificial-intelligence-elections
    https://apnews.com/article/artificial-intelligence-elections-disinformation-chatgpt-bc283e7426402f0b4baa7df280a4c3fd
    https://www.lawfaremedia.org/article/new-and-old-tools-to-tackle-deepfakes-and-election-lies-in-2024
    https://www.brennancenter.org/our-work/research-reports/regulating-ai-deepfakes-and-synthetic-media-political-arena
    https://firstamendment.mtsu.edu/article/political-deepfakes-and-elections/
    https://www.ncsl.org/technology-and-communication/deceptive-audio-or-visual-media-deepfakes-2024-legislation
    https://law.unh.edu/sites/default/files/media/2022/06/nagumotu_pp113-157.pdf
    https://dfrlab.org/2024/10/02/brazil-election-ai-research/
    https://dfrlab.org/2024/11/26/brazil-election-ai-deepfakes/
    https://freedomhouse.org/article/eu-digital-services-act-win-transparency

    The post The Legal Accountability of AI-Generated Deepfakes in Election Misinformation appeared first on MarkTechPost.
  • Pick up these helpful tips on advanced profiling

    In June, we hosted a webinar featuring experts from Arm, the Unity Accelerate Solutions team, and SYBO Games, the creator of Subway Surfers. The resulting roundtable focused on profiling tips and strategies for mobile games, the business implications of poor performance, and how SYBO shipped a hit mobile game with 3 billion downloads to date. Let's dive into some of the follow-up questions we didn't have time to cover during the webinar. You can also watch the full recording.

    We hear a lot about the Unity Profiler in relation to CPU profiling, but not as much about the Profile Analyzer. Are there any plans to improve it or integrate it into the core Profiler toolset?

    There are no immediate plans to integrate the Profile Analyzer into the core Editor, but this might change as our profiling tools evolve.

    Does Unity have any plans to add an option for the GPU Usage Profiler module to appear in percentages like it does in milliseconds?

    That's a great idea, and while we can't say yes or no at the time of this blog post, it's a request that's been shared with our R&D teams for possible future consideration.

    Do you have plans for tackling "Application Not Responding" (ANR) errors that are reported by the Google Play store and don't contain any stack trace?

    Although we don't have specific plans for tracking ANRs without a stack trace at the moment, we will consider it for the future roadmap.

    How can I share my feedback to help influence the future development of Unity's profiling tools?

    You can keep track of upcoming features and share feedback via our product board and forums. We are also conducting a survey to learn more about our customers' experience with the profiling tools. If you've used profiling tools before, or are working on a project that requires optimization, we would love to get your input. The survey is designed to take no more than 5–10 minutes to complete. By participating, you'll also have the chance to opt into a follow-up interview to share more feedback directly with the development team, including the opportunity to discuss potential prototypes of new features.

    Is there a good rule for determining what counts as a viable low-end device to target?

    A rule of thumb we hear from many Unity game developers is to target devices that are five years old at the time of your game's release, as this helps to ensure the largest user base. But we also see teams reducing that scope to devices that are only three years old at release if they're aiming for higher graphical quality. A visually complex 3D application, for example, will have higher device requirements than a simple 2D application. This approach allows for a higher "min spec," but reduces the size of the initial install base. It's essentially a business decision: Will it cost more to develop for and support old devices than what your game will earn running on them?

    Sometimes the technical requirements of your game will dictate your minimum target specifications. If your game uses large amounts of texture memory even after optimization, and you absolutely cannot reduce quality or resolution, that probably rules out phones with insufficient memory. If your rendering solution requires compute shaders, that likely rules out devices with drivers that can't support OpenGL ES 3.1, Metal, or Vulkan.

    It's a good idea to look at market data for your priority target audience; mobile device specs can vary a lot between countries and regions. Remember to define some target "budgets" so that benchmarking goals for what's acceptable are set before you choose low-end devices for testing. For live-service games that will run for years, you'll need to monitor compatibility continuously and adapt over time based on both your actual user base and the devices currently on the market.

    Is it enough to test performance exclusively on low-end devices to ensure that the game will also run smoothly on high-end ones?

    It might be, if you have a uniform workload on all devices. However, you still need to consider variations across hardware from different vendors and/or driver versions. It's common for graphically rich games to have tiers of graphical fidelity – the higher the visual tier, the more resources required on capable devices. This tier selection might be automatic, but increasingly, users themselves can control the choice via a graphical settings menu. For this style of development, you'll need to test at least one "min spec" target device per feature/workload tier that your game supports. If your game detects the capabilities of the device it's running on and adapts the graphics output as needed, it could perform differently on higher-end devices, so be sure to test on a range of devices with the different quality levels you've programmed the title for.

    Note: In this section, we've specified whether the expert answering is from Arm or Unity.

    Do you have advice for detecting the power range of a device to support automatic quality settings, particularly for mobile?

    Arm: We typically see developers doing coarse capability binning based on CPU and GPU models, as well as the GPU shader core count. This is never perfect, but it's "about right." A lot of studios collect live analytics from deployed devices, so they can supplement the automated binning with device-specific opt-in/opt-out rules to work around point issues where the capability binning isn't accurate enough. As related to the previous question, for graphically rich content we see a trend in mobile toward settings menus where users can choose to turn effects on or off, thereby allowing them to make performance choices that suit their preferences.

    Unity: Device memory and screen resolution are also important factors for choosing quality settings. Regarding textures, developers should be aware that Render Textures used by effects or post-processing can become a problem on devices with high-resolution screens but without a lot of memory to match.
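
    The coarse capability binning described above can be made concrete with a short sketch. The example below is written in Python purely for illustration (it is engine-agnostic rather than actual Unity code), and the fields, thresholds, and override table are assumptions for the example, not recommended values; a real implementation would read the GPU name, core count, memory, and resolution from the engine and refine the bins with live analytics from shipped devices.

```python
from dataclasses import dataclass

# Per-device overrides collected from analytics, for cases where the automatic
# binning gets a specific model wrong (hypothetical device name).
OVERRIDES = {"ExampleBudgetPhone X": "low"}

@dataclass
class DeviceInfo:
    model: str
    gpu_core_count: int     # shader core count, when the platform reports it
    system_memory_mb: int
    screen_pixels: int      # width * height

def choose_quality_tier(dev: DeviceInfo) -> str:
    """Return "low", "medium", or "high" for the graphics settings menu."""
    if dev.model in OVERRIDES:
        return OVERRIDES[dev.model]
    # Memory and resolution first: render textures on a high-resolution screen
    # with little RAM are a common failure mode, so cap the tier early.
    if dev.system_memory_mb < 3072:
        return "low"
    if dev.screen_pixels > 2_500_000 and dev.system_memory_mb < 4096:
        return "medium"
    # Then a rough GPU bin from the shader core count.
    if dev.gpu_core_count >= 10:
        return "high"
    return "medium" if dev.gpu_core_count >= 6 else "low"

# Example: a mid-range phone with a 7-core GPU, 6 GB of RAM, and a tall 1080p screen.
print(choose_quality_tier(DeviceInfo("MidRangePhone", 7, 6144, 2400 * 1080)))  # "medium"
```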
Remember to define some target “budgets” so that benchmarking goals for what’s acceptable are set prior to choosing low-end devices for testing.For live service games that will run for years, you’ll need to monitor their compatibility continuously and adapt over time based on both your actual user base and current devices on the market.Is it enough to test performance exclusively on low-end devices to ensure that the game will also run smoothly on high-end ones?It might be, if you have a uniform workload on all devices. However, you still need to consider variations across hardware from different vendors and/or driver versions.It’s common for graphically rich games to have tiers of graphical fidelity – the higher the visual tier, the more resources required on capable devices. This tier selection might be automatic, but increasingly, users themselves can control the choice via a graphical settings menu. For this style of development, you’ll need to test at least one “min spec” target device per feature/workload tier that your game supports.If your game detects the capabilities of the device it’s running on and adapts the graphics output as needed, it could perform differently on higher end devices. So be sure to test on a range of devices with the different quality levels you’ve programmed the title for.Note: In this section, we’ve specified whether the expert answering is from Arm or Unity.Do you have advice for detecting the power range of a device to support automatic quality settings, particularly for mobile?Arm: We typically see developers doing coarse capability binning based on CPU and GPU models, as well as the GPU shader core count. This is never perfect, but it’s “about right.” A lot of studios collect live analytics from deployed devices, so they can supplement the automated binning with device-specific opt-in/opt-out to work around point issues where the capability binning isn’t accurate enough.As related to the previous question, for graphically rich content, we see a trend in mobile toward settings menus where users can choose to turn effects on or off, thereby allowing them to make performance choices that suit their preferences.Unity: Device memory and screen resolution are also important factors for choosing quality settings. Regarding textures, developers should be aware that Render Textures used by effects or post-processing can become a problem on devices with high resolution screens, but without a lot of memory to match.Given the breadth of configurations available, can you suggest a way to categorize devices to reduce the number of tiers you need to optimize for?Arm: The number of tiers your team optimizes for is really a game design and business decision, and should be based on how important pushing visual quality is to the value proposition of the game. For some genres it might not matter at all, but for others, users will have high expectations for the visual fidelity.Does the texture memory limit differ among models and brands of Android devices that have the same amount of total system memory?Arm: To a first-order approximation, we would expect the total amount of texture memory to be similar across vendors and hardware generations. There will be minor differences caused by memory layout and alignment restrictions, so it won’t be exactly the same.Is it CPU or GPU usage that contributes the most to overheating on mobile devices?Arm: It’s entirely content dependent. 
The CPU, GPU, or the DRAM can individually overheat a high-end device if pushed hard enough, even if you ignore the other two completely. The exact balance will vary based on the workload you are running.What tips can you give for profiling on devices that have thermal throttling? What margin would you target to avoid thermal throttling?Arm: Optimizing for frame time can be misleading on Android because devices will constantly adjust frequency to optimize energy usage, making frame time an incomplete measure by itself. Preferably, monitor CPU and GPU cycles per frame, as well as GPU memory bandwidth per frame, to get some value that is independent of frequency. The cycle target you need will depend on each device’s chip design, so you’ll need to experiment.Any optimization helps when it comes to managing power consumption, even if it doesn’t directly improve frame rate. For example, reducing CPU cycles will reduce thermal load even if the CPU isn’t the critical path for your game.Beyond that, optimizing memory bandwidth is one of the biggest savings you can make. Accessing DRAM is orders of magnitude more expensive than accessing local data on-chip, so watch your triangle budget and keep data types in memory as small as possible.Unity: To limit the impact of CPU clock frequency on the performance metrics, we recommend trying to run at a consistent temperature. There are a couple of approaches for doing this:Run warm: Run the device for a while so that it reaches a stable warm state before profiling.Run cool: Leave the device to cool for a while before profiling. This strategy can eliminate confusion and inconsistency in profiling sessions by taking captures that are unlikely to be thermally throttled. However, such captures will always represent the best case performance a user will see rather than what they might actually see after long play sessions. This strategy can also delay the time between profiling runs due to the need to wait for the cooling period first.With some hardware, you can fix the clock frequency for more stable performance metrics. However, this is not representative of most devices your users will be using, and will not report accurate real-world performance. Basically, it’s a handy technique if you are using a continuous integration setup to check for performance changes in your codebase over time.Any thoughts on Vulkan vs OpenGL ES 3 on Android? Vulkan is generally slower performance-wise. At the same time, many devices lack support for various features on ES3.Arm: Recent drivers and engine builds have vastly improved the quality of the Vulkan implementations available; so for an equivalent workload, there shouldn’t be a performance gap between OpenGL ES and Vulkan. The switch to Vulkan is picking up speed and we expect to see more people choosing Vulkan by default over the next year or two. If you have counterexamples of areas where Vulkan isn’t performing well, please get in touch with us. We’d love to hear from you.What tools can we use to monitor memory bandwidth?Arm: The Streamline Profiler in Arm Mobile Studio can measure bandwidth between Mali GPUs and the external DRAM.Should you split graphical assets by device tiers or device resolution?Arm: You can get the best result by retuning assets, but it’s expensive to do. 
Start by reducing resolution and frame rate, or disabling some optional post-processing effects.What is the best way to record performance metric statistics from our development build?Arm: You can use the Performance Advisor tool in Arm Mobile Studio to automatically capture and export performance metrics from the Mali GPUs, although this comes with a caveat: The generation of JSON reports requires a Professional Edition license.Unity: The Unity Profiler can be used to view common rendering metrics, such as vertex and triangle counts in the Rendering module. Plus you can include custom packages, such as System Metrics Mali, in your project to add low-level Mali GPU metrics to the Unity Profiler.What are your recommendations for profiling shader code?You need a GPU Profiler to do this. The one you choose depends on your target platform. For example, on iOS devices, Xcode’s GPU Profiler includes the Shader Profiler, which breaks down shader performance on a line-by-line basis.Arm Mobile Studio supports Mali Offline Compiler, a static analysis tool for shader code and compute kernels. This tool provides some overall performance estimates and recommendations for the Arm Mali GPU family.When profiling, the general rule is to test your game or app on the target device. With the industry moving toward more types of chipsets, how can developers profile and pinpoint issues on the many different hardware configurations in a reasonable amount of time?The proliferation of chipsets is primarily a concern on desktop platforms. There are a limited number of hardware architectures to test for console games. On mobile, there’s Apple’s A Series for iOS devices and a range of Arm and Qualcomm architectures for Android – but selecting a manageable list of representative mobile devices is pretty straightforward.On desktop it’s trickier because there’s a wide range of available chipsets and architectures, and buying Macs and PCs for testing can be expensive. Our best advice is to do what you can. No studio has infinite time and money for testing. We generally wouldn’t expect any huge surprises when comparing performance between an Intel x86 CPU and a similarly specced AMD processor, for instance. As long as the game performs comfortably on your minimum spec machine, you should be reasonably confident about other machines. It’s also worth considering using analytics, such as Unity Analytics, to record frame rates, system specs, and player options’ settings to identify hotspots or problematic configurations.We’re seeing more studios move to using at least some level of automated testing for regular on-device profiling, with summary stats published where the whole team can keep an eye on performance across the range of target devices. With well-designed test scenes, this can usually be made into a mechanical process that’s suited for automation, so you don’t need an experienced technical artist or QA tester running builds through the process manually.Do you ever see performance issues on high-end devices that don’t occur on the low-end ones?It’s uncommon, but we have seen it. Often the issue lies in how the project is configured, such as with the use of fancy shaders and high-res textures on high-end devices, which can put extra pressure on the GPU or memory. 
Sometimes a high-end mobile device or console will use a high-res phone screen or 4K TV output as a selling point but not necessarily have enough GPU power or memory to live up to that promise without further optimization.If you make use of the current versions of the C# Job System, verify whether there’s a job scheduling overhead that scales with the number of worker threads, which in turn, scales with the number of CPU cores. This can result in code that runs more slowly on a 64+ core Threadripper™ than on a modest 4-core or 8-core CPU. This issue will be addressed in future versions of Unity, but in the meantime, try limiting the number of job worker threads by setting JobsUtility.JobWorkerCount.What are some pointers for setting a good frame budget?Most of the time when we talk about frame budgets, we’re talking about the overall time budget for the frame. You calculate 1000/target frames per secondto get your frame budget: 33.33 ms for 30 fps, 16.66 ms for 60 fps, 8.33 ms for 120 Hz, etc. Reduce that number by around 35% if you’re on mobile to give the chips a chance to cool down between each frame. Dividing the budget up to get specific sub-budgets for different features and/or systems is probably overkill except for projects with very specific, predictable systems, or those that make heavy use of Time Slicing.Generally, profiling is the process of finding the biggest bottlenecks – and therefore, the biggest potential performance gains. So rather than saying, “Physics is taking 1.2 ms when the budget only allows for 1 ms,” you might look at a frame and say, “Rendering is taking 6 ms, making it the biggest main thread CPU cost in the frame. How can we reduce that?”It seems like profiling early and often is still not common knowledge. What are your thoughts on why this might be the case?Building, releasing, promoting, and managing a game is difficult work on multiple fronts. So there will always be numerous priorities vying for a developer’s attention, and profiling can fall by the wayside. They know it’s something they should do, but perhaps they’re unfamiliar with the tools and don’t feel like they have time to learn. Or, they don’t know how to fit profiling into their workflows because they’re pushed toward completing features rather than performance optimization.Just as with bugs and technical debt, performance issues are cheaper and less risky to address early on, rather than later in a project’s development cycle. Our focus is on helping to demystify profiling tools and techniques for those developers who are unfamiliar with them. That’s what the profiling e-book and its related blog post and webinar aim to support.Is there a way to exclude certain methods from instrumentation or include only specific methods when using Deep Profiling in the Unity Profiler? When using a lot of async/await tasks, we create large stack traces, but how can we avoid slowing down both the client and the Profiler when Deep Profiling?You can enable Allocation call stacks to see the full call stacks that lead to managed allocations. Additionally, you can – and should! – manually instrument long-running methods and processes by sprinkling ProfilerMarkers throughout your code. There’s currently no way to automatically enable Deep Profiling or disable profiling entirely in specific parts of your application. 
But manually adding ProfilerMarkers and enabling Allocation call stacks when required can help you dig down into problem areas without having to resort to Deep Profiling.As of Unity 2022.2, you can also use our IgnoredByDeepProfilerAttribute to prevent the Unity Profiler from capturing method calls. Just add the IgnoredByDeepProfiler attribute to classes, structures, and methods.Where can I find more information on Deep Profiling in Unity?Deep Profiling is covered in our Profiler documentation. Then there’s the most in-depth, single resource for profiling information, the Ultimate Guide to profiling Unity games e-book, which links to relevant documentation and other resources throughout.Is it correct that Deep Profiling is only useful for the Allocations Profiler and that it skews results so much that it’s not useful for finding hitches in the game?Deep Profiling can be used to find the specific causes of managed allocations, although Allocation call stacks can do the same thing with less overhead, overall. At the same time, Deep Profiling can be helpful for quickly investigating why one specific ProfilerMarker seems to be taking so long, as it’s more convenient to enable than to add numerous ProfilerMarkers to your scripts and rebuild your game. But yes, it does skew performance quite heavily and so shouldn’t be enabled for general profiling.Is VSync worth setting to every VBlank? My mobile game runs at a very low fps when it’s disabled.Mobile devices force VSync to be enabled at a driver/hardware level, so disabling it in Unity’s Quality settings shouldn’t make any difference on those platforms. We haven’t heard of a case where disabling VSync negatively affects performance. Try taking a profile capture with VSync enabled, along with another capture of the same scene but with VSync disabled. Then compare the captures using Profile Analyzer to try to understand why the performance is so different.How can you determine if the main thread is waiting for the GPU and not the other way around?This is covered in the Ultimate Guide to profiling Unity games. You can also get more information in the blog post, Detecting performance bottlenecks with Unity Frame Timing Manager.Generally speaking, the telltale sign is that the main thread waits for the Render thread while the Render thread waits for the GPU. The specific marker names will differ depending on your target platform and graphics API, but you should look out for markers with names such as “PresentFrame” or “WaitForPresent.”Is there a solid process for finding memory leaks in profiling?Use the Memory Profiler to compare memory snapshots and check for leaks. For example, you can take a snapshot in your main menu, enter your game and then quit, go back to the main menu, and take a second snapshot. Comparing these two will tell you whether any objects/allocations from the game are still hanging around in memory.Does it make sense to optimize and rewrite part of the code for the DOTS system, for mobile devices including VR/AR? Do you use this system in your projects?A number of game projects now make use of parts of the Data-Oriented Technology Stack. Native Containers, the C# Job System, Mathematics, and the Burst compilerare all fully supported packages that you can use right away to write optimal, parallelized, high-performance C#code to improve your project’s CPU performance.A smaller number of projects are also using Entities and associated packages, such as the Hybrid Renderer, Unity Physics, and NetCode. 
However, at this time, the packages listed are experimental, and using them involves accepting a degree of technical risk. This risk derives from an API that is still evolving, missing or incomplete features, as well as the engineering learning curve required to understand Data-Oriented Designto get the most out of Unity’s Entity Component System. Unity engineer Steve McGreal wrote a guide on DOTS best practices, which includes some DOD fundamentals and tips for improving ECS performance.How do you go about setting limits on SetPass calls or shader complexity? Can you even set limits beforehand?Rendering is a complex process and there is no practical way to set a hard limit on the maximum number of SetPass calls or a metric for shader complexity. Even on a fixed hardware platform, such as a single console, the limits will depend on what kind of scene you want to render, and what other work is happening on the CPU and GPU during a frame.That’s why the rule on when to profile is “early and often.” Teams tend to create a “vertical slice” demo early on during production – usually a short burst of gameplay developed to the level of visual fidelity intended for the final game. This is your first opportunity to profile rendering and figure out what optimizations and limits might be needed. The profiling process should be repeated every time a new area or other major piece of visual content is added.Here are additional resources for learning about performance optimization:BlogsOptimize your mobile game performance: Expert tips on graphics and assetsOptimize your mobile game performance: Expert tips on physics, UI, and audio settingsOptimize your mobile game performance: Expert tips on profiling, memory, and code architecture from Unity’s top engineersExpert tips on optimizing your game graphics for consolesProfiling in Unity 2021 LTS: What, when, and howHow-to pagesProfiling and debugging toolsHow to profile memory in UnityBest practices for profiling game performanceE-booksOptimize your console and PC game performanceOptimize your mobile game performanceUltimate guide to profiling Unity gamesLearn tutorialsProfiling CPU performance in Android builds with Android StudioProfiling applications – Made with UnityEven more advanced technical content is coming soon – but in the meantime, please feel free to suggest topics for us to cover on the forum and check out the full roundtable webinar recording.
    #pick #these #helpful #tips #advanced
    Pick up these helpful tips on advanced profiling
    In June, we hosted a webinar featuring experts from Arm, the Unity Accelerate Solutions team, and SYBO Games, the creator of Subway Surfers. The resulting roundtable focused on profiling tips and strategies for mobile games, the business implications of poor performance, and how SYBO shipped a hit mobile game with 3 billion downloads to date.Let’s dive into some of the follow-up questions we didn’t have time to cover during the webinar. You can also watch the full recording.We hear a lot about the Unity Profiler in relation to CPU profiling, but not as much about the Profile Analyzer. Are there any plans to improve it or integrate it into the core Profiler toolset?There are no immediate plans to integrate the Profile Analyzer into the core Editor, but this might change as our profiling tools evolve.Does Unity have any plans to add an option for the GPU Usage Profiler module to appear in percentages like it does in milliseconds?That’s a great idea, and while we can’t say yes or no at the time of this blog post, it’s a request that’s been shared with our R&D teams for possible future consideration.Do you have plans for tackling “Application Not Responding”errors that are reported by the Google Play store and don’t contain any stack trace?Although we don’t have specific plans for tracking ANR without stack trace at the moment, we will consider it for the future roadmap.How can I share my feedback to help influence the future development of Unity’s profiling tools?You can keep track of upcoming features and share feedback via our product board and forums. We are also conducting a survey to learn more about our customers’ experience with the profiling tools. If you’ve used profiling tools beforeor are working on a project that requires optimization, we would love to get your input. The survey is designed to take no more than 5–10 minutes to complete.By participating, you’ll also have the chance to opt into a follow-up interview to share more feedback directly with the development team, including the opportunity to discuss potential prototypes of new features.Is there a good rule for determining what counts as a viable low-end device to target?A rule of thumb we hear from many Unity game developers is to target devices that are five years old at the time of your game’s release, as this helps to ensure the largest user base. But we also see teams reducing their release-date scope to devices that are only three years old if they’re aiming for higher graphical quality. A visually complex 3D application, for example, will have higher device requirements than a simple 2D application. This approach allows for a higher “min spec,” but reduces the size of the initial install base. It’s essentially a business decision: Will it cost more to develop for and support old devices than what your game will earn running on them?Sometimes the technical requirements of your game will dictate your minimum target specifications. So if your game uses up large amounts of texture memory even after optimization, but you absolutely cannot reduce quality or resolution, that probably rules out running on phones with insufficient memory. If your rendering solution requires compute shaders, that likely rules out devices with drivers that can’t support OpenGL ES 3.1, Metal, or Vulkan.It’s a good idea to look at market data for your priority target audience. For instance, mobile device specs can vary a lot between countries and regions. 
Remember to define some target “budgets” so that benchmarking goals for what’s acceptable are set prior to choosing low-end devices for testing.For live service games that will run for years, you’ll need to monitor their compatibility continuously and adapt over time based on both your actual user base and current devices on the market.Is it enough to test performance exclusively on low-end devices to ensure that the game will also run smoothly on high-end ones?It might be, if you have a uniform workload on all devices. However, you still need to consider variations across hardware from different vendors and/or driver versions.It’s common for graphically rich games to have tiers of graphical fidelity – the higher the visual tier, the more resources required on capable devices. This tier selection might be automatic, but increasingly, users themselves can control the choice via a graphical settings menu. For this style of development, you’ll need to test at least one “min spec” target device per feature/workload tier that your game supports.If your game detects the capabilities of the device it’s running on and adapts the graphics output as needed, it could perform differently on higher end devices. So be sure to test on a range of devices with the different quality levels you’ve programmed the title for.Note: In this section, we’ve specified whether the expert answering is from Arm or Unity.Do you have advice for detecting the power range of a device to support automatic quality settings, particularly for mobile?Arm: We typically see developers doing coarse capability binning based on CPU and GPU models, as well as the GPU shader core count. This is never perfect, but it’s “about right.” A lot of studios collect live analytics from deployed devices, so they can supplement the automated binning with device-specific opt-in/opt-out to work around point issues where the capability binning isn’t accurate enough.As related to the previous question, for graphically rich content, we see a trend in mobile toward settings menus where users can choose to turn effects on or off, thereby allowing them to make performance choices that suit their preferences.Unity: Device memory and screen resolution are also important factors for choosing quality settings. Regarding textures, developers should be aware that Render Textures used by effects or post-processing can become a problem on devices with high resolution screens, but without a lot of memory to match.Given the breadth of configurations available, can you suggest a way to categorize devices to reduce the number of tiers you need to optimize for?Arm: The number of tiers your team optimizes for is really a game design and business decision, and should be based on how important pushing visual quality is to the value proposition of the game. For some genres it might not matter at all, but for others, users will have high expectations for the visual fidelity.Does the texture memory limit differ among models and brands of Android devices that have the same amount of total system memory?Arm: To a first-order approximation, we would expect the total amount of texture memory to be similar across vendors and hardware generations. There will be minor differences caused by memory layout and alignment restrictions, so it won’t be exactly the same.Is it CPU or GPU usage that contributes the most to overheating on mobile devices?Arm: It’s entirely content dependent. 
The CPU, GPU, or the DRAM can individually overheat a high-end device if pushed hard enough, even if you ignore the other two completely. The exact balance will vary based on the workload you are running.What tips can you give for profiling on devices that have thermal throttling? What margin would you target to avoid thermal throttling?Arm: Optimizing for frame time can be misleading on Android because devices will constantly adjust frequency to optimize energy usage, making frame time an incomplete measure by itself. Preferably, monitor CPU and GPU cycles per frame, as well as GPU memory bandwidth per frame, to get some value that is independent of frequency. The cycle target you need will depend on each device’s chip design, so you’ll need to experiment.Any optimization helps when it comes to managing power consumption, even if it doesn’t directly improve frame rate. For example, reducing CPU cycles will reduce thermal load even if the CPU isn’t the critical path for your game.Beyond that, optimizing memory bandwidth is one of the biggest savings you can make. Accessing DRAM is orders of magnitude more expensive than accessing local data on-chip, so watch your triangle budget and keep data types in memory as small as possible.Unity: To limit the impact of CPU clock frequency on the performance metrics, we recommend trying to run at a consistent temperature. There are a couple of approaches for doing this:Run warm: Run the device for a while so that it reaches a stable warm state before profiling.Run cool: Leave the device to cool for a while before profiling. This strategy can eliminate confusion and inconsistency in profiling sessions by taking captures that are unlikely to be thermally throttled. However, such captures will always represent the best case performance a user will see rather than what they might actually see after long play sessions. This strategy can also delay the time between profiling runs due to the need to wait for the cooling period first.With some hardware, you can fix the clock frequency for more stable performance metrics. However, this is not representative of most devices your users will be using, and will not report accurate real-world performance. Basically, it’s a handy technique if you are using a continuous integration setup to check for performance changes in your codebase over time.Any thoughts on Vulkan vs OpenGL ES 3 on Android? Vulkan is generally slower performance-wise. At the same time, many devices lack support for various features on ES3.Arm: Recent drivers and engine builds have vastly improved the quality of the Vulkan implementations available; so for an equivalent workload, there shouldn’t be a performance gap between OpenGL ES and Vulkan. The switch to Vulkan is picking up speed and we expect to see more people choosing Vulkan by default over the next year or two. If you have counterexamples of areas where Vulkan isn’t performing well, please get in touch with us. We’d love to hear from you.What tools can we use to monitor memory bandwidth?Arm: The Streamline Profiler in Arm Mobile Studio can measure bandwidth between Mali GPUs and the external DRAM.Should you split graphical assets by device tiers or device resolution?Arm: You can get the best result by retuning assets, but it’s expensive to do. 
Start by reducing resolution and frame rate, or disabling some optional post-processing effects.What is the best way to record performance metric statistics from our development build?Arm: You can use the Performance Advisor tool in Arm Mobile Studio to automatically capture and export performance metrics from the Mali GPUs, although this comes with a caveat: The generation of JSON reports requires a Professional Edition license.Unity: The Unity Profiler can be used to view common rendering metrics, such as vertex and triangle counts in the Rendering module. Plus you can include custom packages, such as System Metrics Mali, in your project to add low-level Mali GPU metrics to the Unity Profiler.What are your recommendations for profiling shader code?You need a GPU Profiler to do this. The one you choose depends on your target platform. For example, on iOS devices, Xcode’s GPU Profiler includes the Shader Profiler, which breaks down shader performance on a line-by-line basis.Arm Mobile Studio supports Mali Offline Compiler, a static analysis tool for shader code and compute kernels. This tool provides some overall performance estimates and recommendations for the Arm Mali GPU family.When profiling, the general rule is to test your game or app on the target device. With the industry moving toward more types of chipsets, how can developers profile and pinpoint issues on the many different hardware configurations in a reasonable amount of time?The proliferation of chipsets is primarily a concern on desktop platforms. There are a limited number of hardware architectures to test for console games. On mobile, there’s Apple’s A Series for iOS devices and a range of Arm and Qualcomm architectures for Android – but selecting a manageable list of representative mobile devices is pretty straightforward.On desktop it’s trickier because there’s a wide range of available chipsets and architectures, and buying Macs and PCs for testing can be expensive. Our best advice is to do what you can. No studio has infinite time and money for testing. We generally wouldn’t expect any huge surprises when comparing performance between an Intel x86 CPU and a similarly specced AMD processor, for instance. As long as the game performs comfortably on your minimum spec machine, you should be reasonably confident about other machines. It’s also worth considering using analytics, such as Unity Analytics, to record frame rates, system specs, and player options’ settings to identify hotspots or problematic configurations.We’re seeing more studios move to using at least some level of automated testing for regular on-device profiling, with summary stats published where the whole team can keep an eye on performance across the range of target devices. With well-designed test scenes, this can usually be made into a mechanical process that’s suited for automation, so you don’t need an experienced technical artist or QA tester running builds through the process manually.Do you ever see performance issues on high-end devices that don’t occur on the low-end ones?It’s uncommon, but we have seen it. Often the issue lies in how the project is configured, such as with the use of fancy shaders and high-res textures on high-end devices, which can put extra pressure on the GPU or memory. 
Sometimes a high-end mobile device or console will use a high-res phone screen or 4K TV output as a selling point but not necessarily have enough GPU power or memory to live up to that promise without further optimization.If you make use of the current versions of the C# Job System, verify whether there’s a job scheduling overhead that scales with the number of worker threads, which in turn, scales with the number of CPU cores. This can result in code that runs more slowly on a 64+ core Threadripper™ than on a modest 4-core or 8-core CPU. This issue will be addressed in future versions of Unity, but in the meantime, try limiting the number of job worker threads by setting JobsUtility.JobWorkerCount.What are some pointers for setting a good frame budget?Most of the time when we talk about frame budgets, we’re talking about the overall time budget for the frame. You calculate 1000/target frames per secondto get your frame budget: 33.33 ms for 30 fps, 16.66 ms for 60 fps, 8.33 ms for 120 Hz, etc. Reduce that number by around 35% if you’re on mobile to give the chips a chance to cool down between each frame. Dividing the budget up to get specific sub-budgets for different features and/or systems is probably overkill except for projects with very specific, predictable systems, or those that make heavy use of Time Slicing.Generally, profiling is the process of finding the biggest bottlenecks – and therefore, the biggest potential performance gains. So rather than saying, “Physics is taking 1.2 ms when the budget only allows for 1 ms,” you might look at a frame and say, “Rendering is taking 6 ms, making it the biggest main thread CPU cost in the frame. How can we reduce that?”It seems like profiling early and often is still not common knowledge. What are your thoughts on why this might be the case?Building, releasing, promoting, and managing a game is difficult work on multiple fronts. So there will always be numerous priorities vying for a developer’s attention, and profiling can fall by the wayside. They know it’s something they should do, but perhaps they’re unfamiliar with the tools and don’t feel like they have time to learn. Or, they don’t know how to fit profiling into their workflows because they’re pushed toward completing features rather than performance optimization.Just as with bugs and technical debt, performance issues are cheaper and less risky to address early on, rather than later in a project’s development cycle. Our focus is on helping to demystify profiling tools and techniques for those developers who are unfamiliar with them. That’s what the profiling e-book and its related blog post and webinar aim to support.Is there a way to exclude certain methods from instrumentation or include only specific methods when using Deep Profiling in the Unity Profiler? When using a lot of async/await tasks, we create large stack traces, but how can we avoid slowing down both the client and the Profiler when Deep Profiling?You can enable Allocation call stacks to see the full call stacks that lead to managed allocations. Additionally, you can – and should! – manually instrument long-running methods and processes by sprinkling ProfilerMarkers throughout your code. There’s currently no way to automatically enable Deep Profiling or disable profiling entirely in specific parts of your application. 
But manually adding ProfilerMarkers and enabling Allocation call stacks when required can help you dig down into problem areas without having to resort to Deep Profiling.As of Unity 2022.2, you can also use our IgnoredByDeepProfilerAttribute to prevent the Unity Profiler from capturing method calls. Just add the IgnoredByDeepProfiler attribute to classes, structures, and methods.Where can I find more information on Deep Profiling in Unity?Deep Profiling is covered in our Profiler documentation. Then there’s the most in-depth, single resource for profiling information, the Ultimate Guide to profiling Unity games e-book, which links to relevant documentation and other resources throughout.Is it correct that Deep Profiling is only useful for the Allocations Profiler and that it skews results so much that it’s not useful for finding hitches in the game?Deep Profiling can be used to find the specific causes of managed allocations, although Allocation call stacks can do the same thing with less overhead, overall. At the same time, Deep Profiling can be helpful for quickly investigating why one specific ProfilerMarker seems to be taking so long, as it’s more convenient to enable than to add numerous ProfilerMarkers to your scripts and rebuild your game. But yes, it does skew performance quite heavily and so shouldn’t be enabled for general profiling.Is VSync worth setting to every VBlank? My mobile game runs at a very low fps when it’s disabled.Mobile devices force VSync to be enabled at a driver/hardware level, so disabling it in Unity’s Quality settings shouldn’t make any difference on those platforms. We haven’t heard of a case where disabling VSync negatively affects performance. Try taking a profile capture with VSync enabled, along with another capture of the same scene but with VSync disabled. Then compare the captures using Profile Analyzer to try to understand why the performance is so different.How can you determine if the main thread is waiting for the GPU and not the other way around?This is covered in the Ultimate Guide to profiling Unity games. You can also get more information in the blog post, Detecting performance bottlenecks with Unity Frame Timing Manager.Generally speaking, the telltale sign is that the main thread waits for the Render thread while the Render thread waits for the GPU. The specific marker names will differ depending on your target platform and graphics API, but you should look out for markers with names such as “PresentFrame” or “WaitForPresent.”Is there a solid process for finding memory leaks in profiling?Use the Memory Profiler to compare memory snapshots and check for leaks. For example, you can take a snapshot in your main menu, enter your game and then quit, go back to the main menu, and take a second snapshot. Comparing these two will tell you whether any objects/allocations from the game are still hanging around in memory.Does it make sense to optimize and rewrite part of the code for the DOTS system, for mobile devices including VR/AR? Do you use this system in your projects?A number of game projects now make use of parts of the Data-Oriented Technology Stack. Native Containers, the C# Job System, Mathematics, and the Burst compilerare all fully supported packages that you can use right away to write optimal, parallelized, high-performance C#code to improve your project’s CPU performance.A smaller number of projects are also using Entities and associated packages, such as the Hybrid Renderer, Unity Physics, and NetCode. 
However, at this time, the packages listed are experimental, and using them involves accepting a degree of technical risk. This risk derives from an API that is still evolving, missing or incomplete features, as well as the engineering learning curve required to understand Data-Oriented Designto get the most out of Unity’s Entity Component System. Unity engineer Steve McGreal wrote a guide on DOTS best practices, which includes some DOD fundamentals and tips for improving ECS performance.How do you go about setting limits on SetPass calls or shader complexity? Can you even set limits beforehand?Rendering is a complex process and there is no practical way to set a hard limit on the maximum number of SetPass calls or a metric for shader complexity. Even on a fixed hardware platform, such as a single console, the limits will depend on what kind of scene you want to render, and what other work is happening on the CPU and GPU during a frame.That’s why the rule on when to profile is “early and often.” Teams tend to create a “vertical slice” demo early on during production – usually a short burst of gameplay developed to the level of visual fidelity intended for the final game. This is your first opportunity to profile rendering and figure out what optimizations and limits might be needed. The profiling process should be repeated every time a new area or other major piece of visual content is added.Here are additional resources for learning about performance optimization:BlogsOptimize your mobile game performance: Expert tips on graphics and assetsOptimize your mobile game performance: Expert tips on physics, UI, and audio settingsOptimize your mobile game performance: Expert tips on profiling, memory, and code architecture from Unity’s top engineersExpert tips on optimizing your game graphics for consolesProfiling in Unity 2021 LTS: What, when, and howHow-to pagesProfiling and debugging toolsHow to profile memory in UnityBest practices for profiling game performanceE-booksOptimize your console and PC game performanceOptimize your mobile game performanceUltimate guide to profiling Unity gamesLearn tutorialsProfiling CPU performance in Android builds with Android StudioProfiling applications – Made with UnityEven more advanced technical content is coming soon – but in the meantime, please feel free to suggest topics for us to cover on the forum and check out the full roundtable webinar recording. #pick #these #helpful #tips #advanced
    UNITY.COM
    Pick up these helpful tips on advanced profiling
    In June, we hosted a webinar featuring experts from Arm, the Unity Accelerate Solutions team, and SYBO Games, the creator of Subway Surfers. The resulting roundtable focused on profiling tips and strategies for mobile games, the business implications of poor performance, and how SYBO shipped a hit mobile game with 3 billion downloads to date. Let's dive into some of the follow-up questions we didn't have time to cover during the webinar. You can also watch the full recording.

We hear a lot about the Unity Profiler in relation to CPU profiling, but not as much about the Profile Analyzer (available as a Unity package). Are there any plans to improve it or integrate it into the core Profiler toolset?

There are no immediate plans to integrate the Profile Analyzer into the core Editor, but this might change as our profiling tools evolve.

Does Unity have any plans to add an option for the GPU Usage Profiler module to appear in percentages like it does in milliseconds?

That's a great idea, and while we can't say yes or no at the time of this blog post, it's a request that's been shared with our R&D teams for possible future consideration.

Do you have plans for tackling "Application Not Responding" (ANR) errors that are reported by the Google Play store and don't contain any stack trace?

Although we don't have specific plans for tracking ANRs without a stack trace at the moment, we will consider it for the future roadmap.

How can I share my feedback to help influence the future development of Unity's profiling tools?

You can keep track of upcoming features and share feedback via our product board and forums. We are also conducting a survey to learn more about our customers' experience with the profiling tools. If you've used profiling tools before (either daily or just once) or are working on a project that requires optimization, we would love to get your input. The survey is designed to take no more than 5–10 minutes to complete. By participating, you'll also have the chance to opt into a follow-up interview to share more feedback directly with the development team, including the opportunity to discuss potential prototypes of new features.

Is there a good rule for determining what counts as a viable low-end device to target?

A rule of thumb we hear from many Unity game developers is to target devices that are five years old at the time of your game's release, as this helps to ensure the largest user base. But we also see teams reducing their release-date scope to devices that are only three years old if they're aiming for higher graphical quality. A visually complex 3D application, for example, will have higher device requirements than a simple 2D application. This approach allows for a higher "min spec," but reduces the size of the initial install base. It's essentially a business decision: Will it cost more to develop for and support old devices than what your game will earn running on them?

Sometimes the technical requirements of your game will dictate your minimum target specifications. If your game uses up large amounts of texture memory even after optimization, but you absolutely cannot reduce quality or resolution, that probably rules out running on phones with insufficient memory. If your rendering solution requires compute shaders, that likely rules out devices with drivers that can't support OpenGL ES 3.1, Metal, or Vulkan.

It's a good idea to look at market data for your priority target audience. For instance, mobile device specs can vary a lot between countries and regions. Remember to define some target "budgets" so that benchmarking goals for what's acceptable are set prior to choosing low-end devices for testing. For live service games that will run for years, you'll need to monitor their compatibility continuously and adapt over time based on both your actual user base and current devices on the market.
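To make the point about technical requirements dictating min spec concrete, here is a minimal sketch (not from the webinar) of a startup capability check. The memory threshold and the MinSpecCheck class are illustrative assumptions; the SystemInfo properties used are standard Unity APIs.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Hypothetical startup check: decide whether the device meets the game's
// technical minimums before enabling optional high-end features.
public static class MinSpecCheck
{
    // Illustrative threshold only; tune against your own profiling data.
    const int MinSystemMemoryMb = 3000;

    public static bool SupportsComputeFeatures()
    {
        // Compute-shader-based effects rule out older drivers and APIs.
        if (!SystemInfo.supportsComputeShaders)
            return false;

        // Reported in MB; devices below the floor are likely to struggle
        // with texture memory even after optimization.
        if (SystemInfo.systemMemorySize < MinSystemMemoryMb)
            return false;

        // Vulkan, Metal, and modern GLES are all acceptable here.
        var api = SystemInfo.graphicsDeviceType;
        return api == GraphicsDeviceType.Vulkan
            || api == GraphicsDeviceType.Metal
            || api == GraphicsDeviceType.OpenGLES3;
    }
}
```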
Is it enough to test performance exclusively on low-end devices to ensure that the game will also run smoothly on high-end ones?

It might be, if you have a uniform workload on all devices. However, you still need to consider variations across hardware from different vendors and/or driver versions. It's common for graphically rich games to have tiers of graphical fidelity – the higher the visual tier, the more resources required on capable devices. This tier selection might be automatic, but increasingly, users themselves can control the choice via a graphical settings menu. For this style of development, you'll need to test at least one "min spec" target device per feature/workload tier that your game supports. If your game detects the capabilities of the device it's running on and adapts the graphics output as needed, it could perform differently on higher end devices. So be sure to test on a range of devices with the different quality levels you've programmed the title for.

Note: In this section, we've specified whether the expert answering is from Arm or Unity.

Do you have advice for detecting the power range of a device to support automatic quality settings, particularly for mobile?

Arm: We typically see developers doing coarse capability binning based on CPU and GPU models, as well as the GPU shader core count. This is never perfect, but it's "about right." A lot of studios collect live analytics from deployed devices, so they can supplement the automated binning with device-specific opt-in/opt-out to work around point issues where the capability binning isn't accurate enough. As related to the previous question, for graphically rich content, we see a trend in mobile toward settings menus where users can choose to turn effects on or off, thereby allowing them to make performance choices that suit their preferences.

Unity: Device memory and screen resolution are also important factors for choosing quality settings. Regarding textures, developers should be aware that Render Textures used by effects or post-processing can become a problem on devices with high resolution screens, but without a lot of memory to match.
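Below is a rough sketch of the kind of coarse binning described above, using device memory and screen resolution to pick a quality level. The tier thresholds are invented for illustration, and the script assumes your project defines matching levels (0 = Low, 1 = Medium, 2 = High) in the Quality settings; players should still be able to override the choice in a settings menu.

```csharp
using UnityEngine;

// Hypothetical coarse quality binning: pick a quality level from device
// memory and screen resolution at startup.
public class AutoQuality : MonoBehaviour
{
    void Start()
    {
        int memoryMb = SystemInfo.systemMemorySize;          // total RAM in MB
        int pixels = Screen.currentResolution.width *
                     Screen.currentResolution.height;        // native resolution

        // Invented thresholds; note that a very high resolution paired with
        // modest memory is exactly the Render Texture problem mentioned above.
        int tier;
        if (memoryMb >= 6000 && pixels <= 2_600_000) tier = 2;
        else if (memoryMb >= 4000) tier = 1;
        else tier = 0;

        // Second argument applies expensive changes (e.g., anti-aliasing) immediately.
        QualitySettings.SetQualityLevel(tier, true);
    }
}
```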
Given the breadth of configurations available (CPU, GPU, SoC, memory, mobile, desktop, console, etc.), can you suggest a way to categorize devices to reduce the number of tiers you need to optimize for?

Arm: The number of tiers your team optimizes for is really a game design and business decision, and should be based on how important pushing visual quality is to the value proposition of the game. For some genres it might not matter at all, but for others, users will have high expectations for the visual fidelity.

Does the texture memory limit differ among models and brands of Android devices that have the same amount of total system memory?

Arm: To a first-order approximation, we would expect the total amount of texture memory to be similar across vendors and hardware generations. There will be minor differences caused by memory layout and alignment restrictions, so it won't be exactly the same.

Is it CPU or GPU usage that contributes the most to overheating on mobile devices?

Arm: It's entirely content dependent. The CPU, GPU, or the DRAM can individually overheat a high-end device if pushed hard enough, even if you ignore the other two completely. The exact balance will vary based on the workload you are running.

What tips can you give for profiling on devices that have thermal throttling? What margin would you target to avoid thermal throttling (i.e., targeting 20 ms instead of 33 ms)?

Arm: Optimizing for frame time can be misleading on Android because devices will constantly adjust frequency to optimize energy usage, making frame time an incomplete measure by itself. Preferably, monitor CPU and GPU cycles per frame, as well as GPU memory bandwidth per frame, to get some value that is independent of frequency. The cycle target you need will depend on each device's chip design, so you'll need to experiment. Any optimization helps when it comes to managing power consumption, even if it doesn't directly improve frame rate. For example, reducing CPU cycles will reduce thermal load even if the CPU isn't the critical path for your game. Beyond that, optimizing memory bandwidth is one of the biggest savings you can make. Accessing DRAM is orders of magnitude more expensive than accessing local data on-chip, so watch your triangle budget and keep data types in memory as small as possible.

Unity: To limit the impact of CPU clock frequency on the performance metrics, we recommend trying to run at a consistent temperature. There are a couple of approaches for doing this:

Run warm: Run the device for a while so that it reaches a stable warm state before profiling.

Run cool: Leave the device to cool for a while before profiling. This strategy can eliminate confusion and inconsistency in profiling sessions by taking captures that are unlikely to be thermally throttled. However, such captures will always represent the best case performance a user will see rather than what they might actually see after long play sessions. This strategy can also delay the time between profiling runs due to the need to wait for the cooling period first.

With some hardware, you can fix the clock frequency for more stable performance metrics. However, this is not representative of most devices your users will be using, and will not report accurate real-world performance. Basically, it's a handy technique if you are using a continuous integration setup to check for performance changes in your codebase over time.

Any thoughts on Vulkan vs OpenGL ES 3 on Android? Vulkan is generally slower performance-wise. At the same time, many devices lack support for various features on ES3.

Arm: Recent drivers and engine builds have vastly improved the quality of the Vulkan implementations available; so for an equivalent workload, there shouldn't be a performance gap between OpenGL ES and Vulkan (if there is, please let us know). The switch to Vulkan is picking up speed and we expect to see more people choosing Vulkan by default over the next year or two. If you have counterexamples of areas where Vulkan isn't performing well, please get in touch with us. We'd love to hear from you.

What tools can we use to monitor memory bandwidth (RAM <-> VRAM)?

Arm: The Streamline Profiler in Arm Mobile Studio can measure bandwidth between Mali GPUs and the external DRAM (or system cache).

Should you split graphical assets by device tiers or device resolution?

Arm: You can get the best result by retuning assets, but it's expensive to do. Start by reducing resolution and frame rate, or disabling some optional post-processing effects.

What is the best way to record performance metric statistics from our development build?

Arm: You can use the Performance Advisor tool in Arm Mobile Studio to automatically capture and export performance metrics from the Mali GPUs, although this comes with a caveat: The generation of JSON reports requires a Professional Edition license.

Unity: The Unity Profiler can be used to view common rendering metrics, such as vertex and triangle counts in the Rendering module. Plus you can include custom packages, such as System Metrics Mali, in your project to add low-level Mali GPU metrics to the Unity Profiler.
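One way to record rendering metrics like these from a development build is Unity's ProfilerRecorder API. The sketch below is an assumption-laden example: the counter names follow the Profiler's Rendering module and can vary by Unity version, and Debug.Log stands in for whatever reporting pipeline you actually use.

```csharp
using Unity.Profiling;
using UnityEngine;

// Minimal sketch: sample a few Rendering-module counters in a development
// build and log them periodically.
public class RenderStatsLogger : MonoBehaviour
{
    ProfilerRecorder _setPassCalls;
    ProfilerRecorder _triangles;
    ProfilerRecorder _vertices;

    void OnEnable()
    {
        _setPassCalls = ProfilerRecorder.StartNew(ProfilerCategory.Render, "SetPass Calls Count");
        _triangles    = ProfilerRecorder.StartNew(ProfilerCategory.Render, "Triangles Count");
        _vertices     = ProfilerRecorder.StartNew(ProfilerCategory.Render, "Vertices Count");
    }

    void OnDisable()
    {
        _setPassCalls.Dispose();
        _triangles.Dispose();
        _vertices.Dispose();
    }

    void Update()
    {
        // Log once every 300 frames to keep overhead and noise low.
        if (Time.frameCount % 300 != 0) return;
        Debug.Log($"SetPass: {_setPassCalls.LastValue}, " +
                  $"Tris: {_triangles.LastValue}, Verts: {_vertices.LastValue}");
    }
}
```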
What are your recommendations for profiling shader code?

You need a GPU profiler to do this, and the one you choose depends on your target platform. For example, on iOS devices, Xcode's GPU profiler includes the Shader Profiler, which breaks down shader performance on a line-by-line basis. Arm Mobile Studio supports the Mali Offline Compiler, a static analysis tool for shader code and compute kernels that provides overall performance estimates and recommendations for the Arm Mali GPU family.

When profiling, the general rule is to test your game or app on the target device(s). With the industry moving toward more types of chipsets (Apple M1, Arm, x86 by Intel, AMD, etc.), how can developers profile and pinpoint issues on the many different hardware configurations in a reasonable amount of time?

The proliferation of chipsets is primarily a concern on desktop platforms. There are a limited number of hardware architectures to test for console games, and on mobile there's Apple's A Series for iOS devices and a range of Arm and Qualcomm architectures for Android – so selecting a manageable list of representative mobile devices is pretty straightforward. On desktop it's trickier because there's a wide range of available chipsets and architectures, and buying Macs and PCs for testing can be expensive. Our best advice is to do what you can. No studio has infinite time and money for testing. We generally wouldn't expect any huge surprises when comparing performance between an Intel x86 CPU and a similarly specced AMD processor, for instance. As long as the game performs comfortably on your minimum-spec machine, you should be reasonably confident about other machines. It's also worth considering analytics, such as Unity Analytics, to record frame rates, system specs, and player settings choices in order to identify hotspots or problematic configurations. We're seeing more studios adopt at least some level of automated testing for regular on-device profiling, with summary stats published where the whole team can keep an eye on performance across the range of target devices. With well-designed test scenes, this can usually be made into a mechanical process that's suited for automation, so you don't need an experienced technical artist or QA tester running builds through the process manually.

Do you ever see performance issues on high-end devices that don't occur on the low-end ones?

It's uncommon, but we have seen it. Often the issue lies in how the project is configured – for example, fancy shaders and high-res textures enabled only on high-end devices can put extra pressure on the GPU or memory. Sometimes a high-end mobile device or console will use a high-res phone screen or 4K TV output as a selling point but not necessarily have enough GPU power or memory to live up to that promise without further optimization.

If you make use of the current versions of the C# Job System, verify whether there's a job scheduling overhead that scales with the number of worker threads, which in turn scales with the number of CPU cores. This can result in code that runs more slowly on a 64+ core Threadripper™ than on a modest 4-core or 8-core CPU. This issue will be addressed in future versions of Unity, but in the meantime, try limiting the number of job worker threads by setting JobsUtility.JobWorkerCount.
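As a concrete illustration of that workaround, here is a small sketch that caps the job worker count at startup. The cap of 8 is an arbitrary example value, not a recommendation; profile on your own target hardware to pick one.

```csharp
using Unity.Jobs.LowLevel.Unsafe;
using UnityEngine;

// Sketch of the workaround described above: cap the number of job worker
// threads on very high core count CPUs to limit scheduling overhead.
public static class JobWorkerCap
{
    [RuntimeInitializeOnLoadMethod(RuntimeInitializeLoadType.AfterSceneLoad)]
    static void LimitWorkers()
    {
        // Example cap only; only lower the count, never raise it.
        const int cap = 8;
        if (JobsUtility.JobWorkerCount > cap)
            JobsUtility.JobWorkerCount = cap;
    }
}
```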
What are some pointers for setting a good frame budget?

Most of the time when we talk about frame budgets, we're talking about the overall time budget for the frame. You calculate 1000 divided by your target frames per second (fps) to get your frame budget: 33.33 ms for 30 fps, 16.66 ms for 60 fps, 8.33 ms for 120 fps, and so on. Reduce that number by around 35% if you're on mobile to give the chips a chance to cool down between each frame. Dividing the budget up into specific sub-budgets for different features and/or systems is probably overkill, except for projects with very specific, predictable systems or those that make heavy use of Time Slicing.

Generally, profiling is the process of finding the biggest bottlenecks – and therefore the biggest potential performance gains. So rather than saying, "Physics is taking 1.2 ms when the budget only allows for 1 ms," you might look at a frame and say, "Rendering is taking 6 ms, making it the biggest main thread CPU cost in the frame. How can we reduce that?"

It seems like profiling early and often is still not common knowledge. What are your thoughts on why this might be the case?

Building, releasing, promoting, and managing a game is difficult work on multiple fronts, so there will always be numerous priorities vying for a developer's attention, and profiling can fall by the wayside. Developers know it's something they should do, but perhaps they're unfamiliar with the tools and don't feel like they have time to learn, or they don't know how to fit profiling into their workflows because they're pushed toward completing features rather than optimizing performance. Just as with bugs and technical debt, performance issues are cheaper and less risky to address early on, rather than later in a project's development cycle. Our focus is on helping to demystify profiling tools and techniques for those developers who are unfamiliar with them. That's what the profiling e-book and its related blog post and webinar aim to support.

Is there a way to exclude certain methods from instrumentation, or to include only specific methods, when using Deep Profiling in the Unity Profiler? When using a lot of async/await tasks, we create large stack traces, but how can we avoid slowing down both the client and the Profiler when Deep Profiling?

You can enable Allocation call stacks to see the full call stacks that lead to managed allocations (shown as magenta in the Unity CPU Profiler Timeline view). Additionally, you can – and should! – manually instrument long-running methods and processes by sprinkling ProfilerMarkers throughout your code. There's currently no way to automatically enable Deep Profiling or disable profiling entirely in specific parts of your application, but manually adding ProfilerMarkers and enabling Allocation call stacks when required can help you dig down into problem areas without having to resort to Deep Profiling. As of Unity 2022.2, you can also use the IgnoredByDeepProfilerAttribute to prevent the Unity Profiler from capturing method calls: just add the IgnoredByDeepProfiler attribute to classes, structures, and methods.
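For readers unfamiliar with these APIs, here is a brief sketch of manual instrumentation with ProfilerMarker plus the 2022.2 attribute mentioned above. The class and method names are purely illustrative.

```csharp
using Unity.Profiling;
using UnityEngine;

public class EnemySpawner : MonoBehaviour  // illustrative class name
{
    // Shows up as a named sample in the Profiler's CPU Usage module.
    static readonly ProfilerMarker s_SpawnMarker = new ProfilerMarker("EnemySpawner.SpawnWave");

    void SpawnWave()
    {
        using (s_SpawnMarker.Auto())
        {
            // ... long-running spawn logic being measured ...
        }
    }

    // Unity 2022.2+: keep Deep Profiling from instrumenting this helper,
    // which would otherwise add noise and overhead.
    [IgnoredByDeepProfiler]
    void NoisyPerFrameHelper()
    {
        // ... frequently called code you don't need to see in Deep Profiling ...
    }
}
```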
But manually adding ProfilerMarkers and enabling Allocation call stacks when required can help you dig down into problem areas without having to resort to Deep Profiling. As of Unity 2022.2, you can also use the IgnoredByDeepProfiler attribute to prevent the Unity Profiler from capturing method calls; just add it to the relevant classes, structs, and methods.

Where can I find more information on Deep Profiling in Unity?

Deep Profiling is covered in our Profiler documentation. The most in-depth single resource for profiling information is the Ultimate Guide to profiling Unity games e-book, which links to relevant documentation and other resources throughout.

Is it correct that Deep Profiling is only useful for the Allocations Profiler, and that it skews results so much that it’s not useful for finding hitches in the game?

Deep Profiling can be used to find the specific causes of managed allocations, although Allocation call stacks can do the same thing with less overhead overall. At the same time, Deep Profiling can be helpful for quickly investigating why one specific ProfilerMarker seems to be taking so long, as it’s more convenient to enable than adding numerous ProfilerMarkers to your scripts and rebuilding your game. But yes, it skews performance quite heavily, so it shouldn’t be enabled for general profiling.

Is VSync worth setting to Every VBlank? My mobile game runs at a very low fps when it’s disabled.

Mobile devices force VSync to be enabled at a driver/hardware level, so disabling it in Unity’s Quality settings shouldn’t make any difference on those platforms. We haven’t heard of a case where disabling VSync negatively affects performance. Try taking a profile capture with VSync enabled, along with another capture of the same scene with VSync disabled, then compare the captures using Profile Analyzer to understand why the performance is so different.

How can you determine if the main thread is waiting for the GPU, and not the other way around?

This is covered in the Ultimate Guide to profiling Unity games, and you can find more information in the blog post Detecting performance bottlenecks with Unity Frame Timing Manager. Generally speaking, the telltale sign is that the main thread waits for the render thread while the render thread waits for the GPU. The specific marker names differ depending on your target platform and graphics API, but look out for markers with names such as “PresentFrame” or “WaitForPresent.”

Is there a solid process for finding memory leaks in profiling?

Use the Memory Profiler to compare memory snapshots and check for leaks. For example, you can take a snapshot in your main menu, enter your game and then quit, go back to the main menu, and take a second snapshot. Comparing the two will tell you whether any objects or allocations from the game are still hanging around in memory.

Does it make sense to optimize and rewrite part of the code for the DOTS system, for mobile devices including VR/AR? Do you use this system in your projects?

A number of game projects now make use of parts of the Data-Oriented Technology Stack (DOTS). Native Containers, the C# Job System, Mathematics, and the Burst compiler are all fully supported packages that you can use right away to write optimal, parallelized, high-performance C# (HPC#) code and improve your project’s CPU performance (see the sketch after this answer). A smaller number of projects are also using Entities and associated packages, such as the Hybrid Renderer, Unity Physics, and NetCode. However, at this time, those packages are experimental, and using them involves accepting a degree of technical risk. That risk derives from an API that is still evolving, missing or incomplete features, and the engineering learning curve required to understand Data-Oriented Design (DOD) and get the most out of Unity’s Entity Component System (ECS). Unity engineer Steve McGreal wrote a guide on DOTS best practices, which includes some DOD fundamentals and tips for improving ECS performance.
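Here is a minimal sketch of a Burst-compiled job using only those fully supported packages (it assumes the Burst and Collections packages are installed; the type names, array size, and workload are illustrative, not from the roundtable or any particular project):

using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;
using UnityEngine;

public class SquareValuesExample : MonoBehaviour
{
    [BurstCompile]
    struct SquareJob : IJob
    {
        public NativeArray<float> Values;

        public void Execute()
        {
            for (int i = 0; i < Values.Length; i++)
                Values[i] *= Values[i]; // trivial work standing in for real number crunching
        }
    }

    void Start()
    {
        var values = new NativeArray<float>(1024, Allocator.TempJob);
        JobHandle handle = new SquareJob { Values = values }.Schedule();
        handle.Complete(); // in real code, complete as late as possible to keep workers busy
        values.Dispose();
    }
}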
How do you go about setting limits on SetPass calls or shader complexity? Can you even set limits beforehand?

Rendering is a complex process, and there is no practical way to set a hard limit on the maximum number of SetPass calls or a metric for shader complexity. Even on a fixed hardware platform, such as a single console, the limits will depend on the kind of scene you want to render and on what other work is happening on the CPU and GPU during the frame. That’s why the rule on when to profile is “early and often.” Teams tend to create a “vertical slice” demo early in production – usually a short burst of gameplay developed to the level of visual fidelity intended for the final game. This is your first opportunity to profile rendering and figure out what optimizations and limits might be needed. The profiling process should then be repeated every time a new area or other major piece of visual content is added.

Here are additional resources for learning about performance optimization:

Blogs
- Optimize your mobile game performance: Expert tips on graphics and assets
- Optimize your mobile game performance: Expert tips on physics, UI, and audio settings
- Optimize your mobile game performance: Expert tips on profiling, memory, and code architecture from Unity’s top engineers
- Expert tips on optimizing your game graphics for consoles
- Profiling in Unity 2021 LTS: What, when, and how

How-to pages
- Profiling and debugging tools
- How to profile memory in Unity
- Best practices for profiling game performance

E-books
- Optimize your console and PC game performance
- Optimize your mobile game performance
- Ultimate guide to profiling Unity games

Learn tutorials
- Profiling CPU performance in Android builds with Android Studio
- Profiling applications – Made with Unity

Even more advanced technical content is coming soon – but in the meantime, please feel free to suggest topics for us to cover on the forum and check out the full roundtable webinar recording.
  • Try the new UI Toolkit sample – now available on the Asset Store

    In Unity 2021 LTS, UI Toolkit offers a collection of features, resources, and tools to help you build and debug adaptive runtime UIs for a wide range of game applications and Editor extensions. Its intuitive workflow enables Unity creators in different roles – artists, programmers, and designers alike – to get started with UI development as quickly as possible. See our earlier blog post for an explanation of UI Toolkit’s main benefits, such as enhanced scalability and performance, already being leveraged by studios like Mechanistry for their game, Timberborn.

    While Unity UI remains the go-to solution for positioning and lighting UI in a 3D world or integrating with other Unity systems, UI Toolkit for runtime UI can already benefit game productions seeking performance and scalability as of Unity 2021 LTS. It’s particularly effective for Screen Space – Overlay UI, and it scales well across a variety of screen resolutions.

    That’s why we’re excited to announce two new learning resources to better support UI development with UI Toolkit:

    - UI Toolkit sample – Dragon Crashers: The demo is now available to download for free from the Asset Store.
    - User interface design and implementation in Unity: This free e-book can be downloaded here.

    Read on to learn about some of the key features of the UI Toolkit sample project.

    The UI Toolkit sample demonstrates how you can leverage UI Toolkit for your own applications. The demo provides a full-featured interface over a slice of the 2D project Dragon Crashers, a mini RPG, using the Unity 2021 LTS UI Toolkit workflow at runtime. The sample project shows you how to:

    - Style with selectors in Unity style sheet (USS) files and use UXML templates
    - Create custom controls, such as a circular meter or tabbed views
    - Customize the appearance of elements like sliders and toggle buttons
    - Use Render Textures for UI overlay effects, USS animations, seasonal themes, and more

    To try out the project after adding it to your assets, enter Play mode. Please note that UI Toolkit interfaces do not appear in the Scene view; instead, you can view them in the Game view or UI Builder. The menu on the left helps you navigate the modal main menu screens. This vertical column of buttons provides access to the five modal screens that comprise the main menu (they stay active while switching between screens). While some interactivity is possible, such as healing the characters by dragging available potions in the scene, gameplay has been kept to a minimum to ensure continued focus on the UI examples.

    Let’s take a closer look at the UIs in the menu bar:

    The home screen serves as a landing pad when launching the application. You can use this screen to play the game or receive simulated chat messages.

    The character screen involves a mix of GameObjects and UI elements. This is where you can explore each of the four Dragon Crashers characters. Use the stats, skills, and bio tabs to read the specific character details, and click on the inventory slots to add or remove items. The preview area shows a 2D lit and rigged character over a tiled background.

    The resources screen links to documentation, the forum, and other resources for making the most of UI Toolkit.

    The shop screen simulates an in-game store where you can purchase hard and soft currency, such as gold or gems, as well as virtual goods like healing potions. Each item in the shop screen is a separate VisualTreeAsset, and UI Toolkit instantiates one of these assets at runtime for each ScriptableObject in Resources/GameData; a hedged sketch of this pattern follows.
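    The sketch below illustrates the described pattern only; the UIDocument setup, element names such as "shop-item-list" and "item-name", and loading the ScriptableObjects straight from Resources/GameData are assumptions for illustration, not the demo’s actual code.

    using UnityEngine;
    using UnityEngine.UIElements;

    public class ShopScreenController : MonoBehaviour
    {
        [SerializeField] UIDocument document;              // hosts the shop screen UXML
        [SerializeField] VisualTreeAsset shopItemTemplate; // UXML template for a single item

        void OnEnable()
        {
            // Hypothetical container name; the real sample's element names may differ.
            VisualElement listRoot = document.rootVisualElement.Q<VisualElement>("shop-item-list");

            foreach (var data in Resources.LoadAll<ScriptableObject>("GameData"))
            {
                TemplateContainer item = shopItemTemplate.Instantiate(); // one clone per data asset
                item.Q<Label>("item-name").text = data.name;             // bind whatever fields you need
                listRoot.Add(item);
            }
        }
    }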
    The mail screen is a front-end reader of fictitious messages that uses a tabbed menu to separate the inbox and deleted messages.

    The game screen is a mini version of the Dragon Crashers project that starts playing automatically. Here you’ll notice a few elements revised with UI Toolkit, such as a pause button, health bars, and the capacity to drag a healing potion element to your characters when they take damage.

    UI Toolkit enables you to build stable and consistent UIs for your entire project. At the same time, it provides flexible tools for adding your own design flourishes and details to further flesh out the game’s theme and style. Let’s go over some of the features used to refine the UI designs in the sample:

    Render Textures: UI Toolkit interfaces are rendered last in the render queue, meaning you can’t overlay other game graphics on top of a UI Toolkit UI. Render Textures provide a workaround to this limitation, making it possible to integrate in-game effects into UI Toolkit UIs. While these Render Texture-based effects should be used sparingly, you can still afford sharp effects within the context of a fullscreen UI that isn’t running gameplay. The following images show a number of Render Textures from the demo.

    Themes with Theme style sheets (TSS): TSS files are asset files similar to regular USS files. They serve as a starting point for defining your own custom theme via USS selectors, as well as property and variable settings. In the demo, we duplicated the default theme files and modified the copies to offer seasonal variations.

    Custom UI elements: Since designers are trained to think outside the box, UI Toolkit gives you plenty of room to customize or extend the standard library. The demo project highlights a few custom-built elements in the tabbed menus, slide toggles, and drop-down lists, plus a radial counter to demonstrate what UI artists are capable of alongside developers.

    USS transitions for animated UI state changes: Adding transitions to the menu screens can polish and smooth out your visuals. UI Toolkit makes this more straightforward with the Transition Animations property in the UI Builder’s Inspector. Adjust the Property, Duration, Easing, and Delay properties to set up the animation, then simply change styles for UI Toolkit to apply the animated transition at runtime.

    Post-processing volume for a background blur: A popular effect in games is to blur a crowded gameplay scene to draw the player’s attention to a particular pop-up message or dialog window. You can achieve this effect by enabling Depth of Field in the Volume framework (available in the Universal Render Pipeline).

    We also made sure that efficient workflows were used to build the UI. Here are a few recommendations for keeping the project well organized:

    Consistent naming conventions: It’s important to adopt naming conventions that align with your visual elements and style sheets. Clear naming conventions not only maintain the hierarchy’s organization in UI Builder, but also make it more accessible to your teammates and keep the code clean and readable. More specifically, we suggest the Block Element Modifier (BEM) naming convention for visual elements and style sheets. At a glance, an element’s BEM name can tell you what it does, how it appears, and how it relates to the other elements around it.
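    The BEM naming examples referenced in the original post appear to be images and don’t survive this text capture. As an illustrative stand-in (the class names below are hypothetical, not taken from the Dragon Crashers sample), BEM-style names map directly onto UI Toolkit’s USS class list:

    using UnityEngine.UIElements;

    public static class MenuButtonFactory
    {
        public static Button Create(string label, bool selected)
        {
            var button = new Button { text = label };

            // block__element: a button that belongs to the menu block,
            // matched in USS by the selector .menu__button
            button.AddToClassList("menu__button");

            // block__element--modifier: a state variant,
            // matched in USS by the selector .menu__button--selected
            if (selected)
                button.AddToClassList("menu__button--selected");

            return button;
        }
    }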
    Responsive UI layout: Similar to web technologies, UI Toolkit lets you create layouts in which “child” visual elements adapt to the available size inside their “parent” visual elements, as well as layouts where each element has an absolute position anchored to a reference point, akin to the Unity UI system. The sample uses both options as needed throughout the visual elements of the UI.

    PSD Importer: One of the most effective tools for creating the demo, PSD Importer allows artists to work in a master document without having to manually export every sprite separately. When changes are needed, they can be made in the original PSD file and updated automatically in Unity.

    ScriptableObjects: In order to focus on UI design and implementation, the sample project simulates backend data, such as in-app purchases and mail messages, using ScriptableObjects. You can conveniently customize this stand-in data in the Resources/GameData folder and use the example to create similar data assets, like inventory items and character or dialog data, in UI Toolkit.

    Remember that with UI Toolkit, UI layouts and styles are decoupled from code. This means the backend data can be rewritten independently of the UI design: if your development team replaces those systems, the interface should continue to work.

    Additional tools used in the demo include particle systems created with the Built-in Particle System for special effects, and the 2D toolset, among others. Feel free to review the project via the Inspector to see how these different elements come into play.

    You can find reference art made by the UI artists under UI/Reference, as replicated in UI Builder. The whole process, from mockups to wireframes, is also documented in the e-book. Finally, all of the content in the sample can be added to your own Unity project.

    You can download the UI Toolkit sample – Dragon Crashers from the Asset Store. Once you’ve explored its different UI designs, please provide your feedback on the forum. Then be sure to check out our e-book, User interface design and implementation in Unity.
  • Multicolor DLP 3D printing breakthrough enables dissolvable supports for complex freestanding structures

    Researchers at the University of Texas at Austin have developed a novel resin system for multicolor digital light processing (DLP) 3D printing that enables rapid fabrication of freestanding and non-assembly structures using dissolvable supports. The work, led by Zachariah A. Page and published in ACS Central Science, combines UV- and visible-light-responsive chemistries to produce materials with distinct solubility profiles, significantly streamlining post-processing.
    Current DLP workflows are often limited by the need for manually removed support structures, especially when fabricating components with overhangs or internal joints. These limitations constrain automation and increase production time and cost. To overcome this, the team designed wavelength-selective photopolymer resins that form either an insoluble thermoset or a readily dissolvable thermoplastic, depending on the light color used during printing.
    In practical terms, this allows supports to be printed in one material and rapidly dissolved using ethyl acetate, an environmentally friendly solvent, without affecting the primary structure. The supports dissolve in under 10 minutes at room temperature, eliminating the need for time-consuming sanding or cutting.
    Illustration comparing traditional DLP 3D printing with manual support removal (A) and the new multicolor DLP process with dissolvable supports (B). Image via University of Texas at Austin.
    The research was supported by the U.S. Army Research Office, the National Science Foundation, and the Robert A. Welch Foundation. The authors also acknowledge collaboration with MonoPrinter and Lawrence Livermore National Laboratory.
    High-resolution multimaterial printing
    The research showcases how multicolor DLP can serve as a precise multimaterial platform, achieving sub-100 μm feature resolution with layer heights as low as 50 μm. By tuning the photoinitiator and photoacid systems to respond selectively to ultraviolet (365 nm), violet (405 nm), or blue (460 nm) light, the team spatially controlled polymer network formation in a single vat. This enabled the production of complex, freestanding structures such as chainmail, hooks with unsupported overhangs, and fully enclosed joints, which traditionally require extensive post-processing or multi-step assembly.
    The supports, printed in a visible-light-cured thermoplastic, demonstrated sufficient mechanical integrity during the build, with tensile moduli around 160–200 MPa. Yet, upon immersion in ethyl acetate, they dissolved within 10 minutes, leaving the UV-cured thermoset structure intact. Surface profilometry confirmed that including a single interface layer of the dissolvable material between the support and the final object significantly improved surface finish, lowering roughness to under 5 μm without polishing. Computed tomography scans validated geometric fidelity, with dimensional deviations from CAD files as low as 126 μm, reinforcing the method’s capability for high-precision, solvent-cleared multimaterial printing.
    Comparison of dissolvable and traditional supports in DLP 3D printing. (A) Disk printed with soluble supports using violet light, with rapid dissolution in ethyl acetate. (B) Gravimetric analysis showing selective mass loss. (C) Mechanical properties of support and structural materials. (D) Manual support removal steps. (E) Surface roughness comparison across methods. (F) High-resolution test print demonstrating feature fidelity. Image via University of Texas at Austin.
    Towards scalable automation
    This work marks a significant step toward automated vat photopolymerization workflows. By eliminating manual support removal and achieving clean surface finishes with minimal roughness, the method could benefit applications in medical devices, robotics, and consumer products.
    The authors suggest that future work may involve refining resin formulations to enhance performance and print speed, possibly incorporating new reactive diluents and opaquing agents for improved resolution.
    Examples of printed freestanding and non-assembly structures, including a retainer, hook with overhangs, interlocked chains, and revolute joints, before and after dissolvable support removal. Image via University of Texas at Austin.
    Dissolvable materials as post-processing solutions
    Dissolvable supports have been a focal point in additive manufacturing, particularly for enhancing the efficiency of post-processing. In Fused Deposition Modeling (FDM), materials like Stratasys’ SR-30 have been effectively removed using specialized cleaning agents such as Oryx Additive’s SRC1, which dissolves supports at twice the speed of traditional solutions. For resin-based printing, systems like Xioneer’s Vortex EZ employ heat and fluid agitation to streamline the removal of soluble supports. In metal additive manufacturing, innovations have led to the development of chemical processes that selectively dissolve support structures without compromising the integrity of the main part. These advancements underscore the industry’s commitment to reducing manual intervention and improving the overall efficiency of 3D printing workflows.
    Read the full article in ACS Publications.
    Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news.
    You can also follow us on LinkedIn and subscribe to the 3D Printing Industry YouTube channel to access more exclusive content. At 3DPI, our mission is to deliver high-quality journalism, technical insight, and industry intelligence to professionals across the AM ecosystem. Help us shape the future of 3D printing industry news with our 2025 reader survey.
    Featured image shows: Hook geometry printed using multicolor DLP with dissolvable supports. Image via University of Texas at Austin.
  • Desktop edition of sculpting app Nomad enters free beta


    A creature created with Nomad by Glen Southern. The new desktop edition of the formerly mobile-only digital sculpting app is now available in free public beta.

    Hexanomad – aka developer Stéphane Ginier – has released the new desktop edition of Nomad, its popular digital sculpting app for iPads and Android tablets, in free public beta. Beta builds are currently available for Windows and macOS, although they include only a limited range of tools from the mobile edition so far.
    A rounded set of digital sculpting, 3D painting and remeshing features

    First released in 2020, Nomad – also often known as Nomad Sculpt – is a popular digital sculpting app for iPads and Android tablets. It has a familiar set of sculpting brushes, including Clay, Crease, Move, Flatten and Smooth, with support for falloff, alphas and masking.
    A dynamic tessellation system, similar to those of desktop tools like ZBrush, automatically changes the resolution of the part of the mesh being sculpted to accommodate new details.
    Users can also perform a voxel remesh of the sculpt to generate a uniform level of detail, or switch manually between different levels of resolution.
    Nomad features a PBR vertex paint system, making it possible to rough out surface colours; and built-in lighting and post-processing options for viewing models in context.
    Both sculpting and painting are layer-based, making it possible to work non-destructively.
    Completed sculpts can be exported in FBX, OBJ, glTF/GLB, PLY and STL format.
    New desktop edition still early in development, but evolving fast

    Nomad already has a web demo version, which makes it possible to test the app inside a web browser, but the new beta answers long-standing user requests for a native desktop version. It’s still very early in development, so it only features a limited range of tools from the mobile edition – the initial release was limited to the Clay and Move tools – and has known issues with graphics tablets, but new builds are being released regularly.
    Ginier has stated that his aim is to make the desktop edition “identical to the mobile versions”.
    The desktop version should also support Quad Remesher, Exoside’s auto retopology system, which is available as an in-app purchase inside the iPad edition.
    You can follow development in the -beta-desktop channel of the Nomad Sculpt Discord server.
    Price, release date and system requirements

    The desktop edition of Nomad is currently in free public beta for Windows 10+ and macOS 12.0+. Beta builds do not expire. Stéphane Ginier hasn’t announced a final release date or price yet. The mobile edition of Nomad is available for iOS/iPadOS 15.0+ and Android 6.0+, and costs $19.99.
    Read more about Nomad on the product website
    Follow the progress of the desktop edition on the Discord server
    Download the latest beta builds of the desktop edition of Nomad

    Have your say on this story by following CG Channel on Facebook, Instagram and X. As well as being able to comment on stories, followers of our social media accounts can see videos we don’t post on the site itself, including making-ofs for the latest VFX movies, animations, games cinematics and motion graphics projects.