
  • Helsing’s AI submarine joins Europe’s growing ocean drone fleet



    Helsing, Europe’s best-funded defence tech startup, has unveiled its latest product — an autonomous mini-submarine for underwater reconnaissance. 
    Dubbed SG-1 Fathom, the sub is the latest addition to Europe’s growing fleet of ocean drones, which aim to better protect the continent’s ships and subsea infrastructure from surveillance, sabotage, and attacks. 
    The 1.95-metre Fathom is designed to slowly patrol the ocean for up to three months at a time.
    The vessel is powered by an AI platform called Lura.
    The system is a large acoustic model (LAM) — like a large language model (LLM) but for sound. 
    Lura is able to classify sounds made by ships and submarines and then pinpoint their locations.
Helsing said the algorithm can identify sounds at volumes 10 times quieter than those detectable by competing AI models.
    It also works at 40 times the speed of an equivalent human operator.

    Helsing said the “mass-producible” submarines can be deployed in hundreds-strong “constellations” to carry out large-scale surveillance. 
    Helsing plans to build the autonomous ocean drones in large numbers.
    Credit: Helsing
    Ocean reconnaissance of this kind has become increasingly urgent since the 2022 Nord Stream pipeline sabotage, which exposed the vulnerability of underwater assets to covert attacks.
European nations and NATO are also stepping up their maritime defences amid growing concerns over Russian aggression. 
Ocean drones have already become an important tool in Ukraine’s war against Moscow.

    High-tech arsenal 
    The war in Ukraine is increasingly characterised by battles between autonomous systems, mainly unmanned aerial vehicles (UAVs).
    However, the battle between machines is also playing out in the seas.  
    Earlier this month, Ukraine used its Magura naval drone to shoot down two Russian aircraft.
    The Magura, armed with missiles, has been used extensively since 2023 to attack and destroy Russian ships and aircraft. 
    The country is also expanding its fleet of waterborne drones.
    Last week, Ukrainian company Nordex unveiled the Seawolf, an uncrewed surface vessel (USV) for combat, surveillance, and border security applications.    
    British company Kraken is developing a similar uncrewed boat that can engage enemies in combat or deliver cargo and personnel.
    Meanwhile, Denmark is set to trial autonomous sailboats to patrol the Baltic Sea looking for signs of potential threats.   
    The adoption of drones at sea comes amid rising geopolitical tensions, which have prompted European officials to go all-in on defence tech. 
    In March 2025, EU leaders endorsed the “ReArm Europe” plan, aiming to mobilise up to £683bn (€800bn) over the next four years to enhance military capabilities.
    Similarly, the UK government has committed to raising defence spending to 2.5% of GDP and wants to spend at least 10% of its defence budget on “innovative technologies”. 
    Helsing looks to capitalise on this political momentum.
    The company told Bloomberg last month that it has “won over a dozen contracts” with “total order volumes of hundreds of millions of dollars” since its founding in 2021.  
    Helsing, which is valued at €5bn ($5.4bn), is perhaps best known for its combat drones and AI software that acts like the brain for military vehicles such as fighter jets.
    Fathom marks its first entry into ocean-bound technology. 
    Several naval forces have already shown interest in Helsing’s autonomous submarine, the company said.
    It aims to deploy the first fleets of underwater drones within a year. 
    Defence tech is a key theme of the Assembly, the invite-only policy track of TNW Conference.
    The event takes place in Amsterdam on June 19 — a week before the NATO Summit arrives in the city.

    Tickets for TNW Conference are now on sale — use the code TNWXMEDIA2025 at the checkout to get 30% off.








    Story by



    Siôn Geschwindt








    Siôn is a freelance science and technology reporter, specialising in climate and energy.
    From nuclear fusion breakthroughs to electric vehicles, he's happiest sourcing a scoop, investigating the impact of emerging technologies, and even putting them to the test.
    He has five years of journalism experience and holds a dual degree in media and environmental science from the University of Cape Town, South Africa.
    When he's not writing, you can probably find Siôn out hiking, surfing, playing the drums or catering to his moderate caffeine addiction.
    You can contact him at: sion.geschwindt [at] protonmail [dot] com






    Source: https://thenextweb.com/news/helsings-ai-submarine-lura-europe-ocean-drone-defence
  • Accessing texture data efficiently

Learn about the benefits and trade-offs of different ways to access the underlying texture pixel data in your Unity project.

Pixel data describes the color of individual pixels in a texture. Unity provides methods that enable you to read from or write to pixel data with C# scripts. You might use these methods to duplicate or update a texture (for example, adding a detail to a player’s profile picture), or use the texture’s data in a particular way, like reading a texture that represents a world map to determine where to place an object.

There are several ways of writing code that reads from or writes to pixel data.
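As a quick sketch of the world-map idea above (the texture, channel meaning, and threshold are all hypothetical; the texture is assumed to be readable):

```csharp
using UnityEngine;

public class MapPlacement : MonoBehaviour
{
    // Hypothetical readable texture where the green channel encodes land height.
    public Texture2D worldMap;

    // Returns true if the map pixel under the given UV is land
    // (the 0.5 threshold is an assumption for illustration).
    public bool IsLand(Vector2 uv)
    {
        Color pixel = worldMap.GetPixelBilinear(uv.x, uv.y);
        return pixel.g > 0.5f;
    }
}
```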
The one you choose depends on what you plan to do with the data and the performance needs of your project. This blog and the accompanying sample project are intended to help you navigate the available API and common performance pitfalls. An understanding of both will help you write a performant solution or address performance bottlenecks as they appear.

For most types of textures, Unity stores two copies of the pixel data: one in GPU memory, which is required for rendering, and the other in CPU memory.
    This copy is optional and allows you to read from, write to, and manipulate pixel data on the CPU.
    A texture with a copy of its pixel data stored in CPU memory is called a readable texture.
One detail to note is that RenderTexture exists only in GPU memory.

The memory available to the CPU differs from that of the GPU on most hardware.
    Some devices have a form of partially shared memory, but for this blog we will assume the classic PC configuration where the CPU only has direct access to the RAM plugged into the motherboard and the GPU relies on its own video RAM (VRAM).
    Any data transferred between these different environments has to pass through the PCI bus, which is slower than transferring data within the same type of memory.
Due to these costs, you should try to limit the amount of data transferred each frame.

Sampling textures in shaders is the most common GPU pixel data operation.
    To alter this data, you can copy between textures or render into a texture using a shader.
All these operations can be performed quickly by the GPU.

In some cases, it may be preferable to manipulate your texture data on the CPU, which offers more flexibility in how data is accessed.
    CPU pixel data operations act only on the CPU copy of the data, so require readable textures.
    If you want to sample the updated pixel data in a shader, you must first copy it from the CPU to the GPU by calling Apply.
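A minimal sketch of that workflow, assuming a texture imported with Read/Write Enabled:

```csharp
using UnityEngine;

public class TintTexture : MonoBehaviour
{
    public Texture2D texture; // must be readable (Read/Write Enabled)

    void Start()
    {
        // Modify the CPU copy of the pixel data.
        Color32[] pixels = texture.GetPixels32();
        for (int i = 0; i < pixels.Length; i++)
            pixels[i].r = 255; // push the red channel to full

        texture.SetPixels32(pixels);

        // Upload the CPU copy to the GPU so shaders see the change.
        texture.Apply();
    }
}
```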
Depending on the texture involved and the complexity of the operations, it may be faster and easier to stick to CPU operations (for example, when copying several 2D textures into a Texture2DArray asset).

The Unity API provides several methods to access or process texture data.
    Some operations act on both the GPU and CPU copy if both are present.
    As a result, the performance of these methods varies depending on whether the textures are readable.
Different methods can be used to achieve the same results, but each method has its own performance and ease-of-use characteristics. Answer the following questions to determine the optimal solution:

- Can the GPU perform your calculations faster than the CPU?
- What level of pressure is the process putting on the texture caches? (For example, sampling many high-resolution textures without using mipmaps is likely to slow down the GPU.)
- Does the process require a random write texture, or can it output to a color or depth attachment? (Writing to random pixels on a texture requires frequent cache flushes that slow down the process.)
- Is my project already GPU bottlenecked? Even if the GPU is able to execute a process faster than the CPU, can the GPU afford to take on more work without exceeding its frame time budget? If both the GPU and the CPU main thread are near their frame time limit, then perhaps the slow part of a process could be performed by CPU worker threads.
- How much data needs to be uploaded to or downloaded from the GPU to calculate or process the results? Could a shader or C# job pack the data into a smaller format to reduce the bandwidth required? Could a RenderTexture be downsampled into a smaller resolution version that is downloaded instead?
- Can the process be performed in chunks? (If a lot of data needs to be processed at once, there’s a risk of the GPU not having enough memory for it.)
- How quickly are the results required? Can calculations or data transfers be performed asynchronously and handled later? (If too much work is done in a single frame, there is a risk that the GPU won’t have enough time to render the actual graphics for each frame.)

By default, texture assets that you import into your project are nonreadable, while textures created from a script are readable. Readable textures use twice as much memory as nonreadable textures because they need to have a copy of their pixel data in CPU RAM.
You should only make a texture readable when you need to, and make it nonreadable when you are done working with the data on the CPU.

To see if a texture asset in your project is readable and make edits, use the Read/Write Enabled option in Texture Import Settings, or the TextureImporter.isReadable API. To make a texture nonreadable, call its Apply method with the makeNoLongerReadable parameter set to “true” (for example, Texture2D.Apply or Cubemap.Apply).
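As a sketch, releasing the CPU copy once you are done with it might look like this (the helper name is hypothetical):

```csharp
using UnityEngine;

public static class TextureReadability
{
    // Uploads pending CPU changes to the GPU and releases the CPU copy.
    // After this call the texture can no longer be read from C# scripts.
    public static void UploadAndFree(Texture2D texture)
    {
        if (!texture.isReadable)
            return; // no CPU copy exists; nothing to upload or free

        texture.Apply(updateMipmaps: false, makeNoLongerReadable: true);
    }
}
```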
A nonreadable texture can’t be made readable again.

All textures are readable to the Editor in Edit and Play modes.
    Calling Apply to make the texture nonreadable will update the value of isReadable, preventing you from accessing the CPU data.
However, some Unity processes will function as if the texture is readable because they see that the internal CPU data is valid.

Performance differs greatly across the various ways of accessing texture data, especially on the CPU (although less so at lower resolutions).
    The Unity Texture Access API examples repository on GitHub contains a number of examples showing performance differences between various APIs that allow access to, or manipulation of, texture data.
    The UI only shows the main thread CPU timings.
In some cases, DOTS features like Burst and the job system are used to maximize performance.

Here are the examples included in the GitHub repository:
- SimpleCopy: Copying all pixels from one texture to another
- PlasmaTexture: A plasma texture updated on the CPU per frame
- TransferGPUTexture: Transferring (copying to a different size or format) all pixels on the GPU from a texture to a RenderTexture

Listed below are performance measurements taken from the examples on GitHub.
    These numbers are used to support the recommendations that follow.
The measurements are from a player build on a system with a 3.7 GHz 8-core Xeon® W-2145 CPU and an RTX 2080.

These are the median CPU times for SimpleCopy.UpdateTestCase with a texture size of 2,048. Note that the Graphics methods complete nearly instantly on the main thread because they simply push work onto the RenderThread, which is later executed by the GPU.
Their results will be ready when the next frame is being rendered.

Results:
- 1,326 ms – foreach(mip) for(x in width) for(y in height) SetPixel(x, y, GetPixel(x, y, mip), mip)
- 32.14 ms – foreach(mip) SetPixels(source.GetPixels(mip), mip)
- 6.96 ms – foreach(mip) SetPixels32(source.GetPixels32(mip), mip)
- 6.74 ms – LoadRawTextureData(source.GetRawTextureData())
- 3.54 ms – Graphics.CopyTexture(readableSource, readableTarget)
- 2.87 ms – foreach(mip) SetPixelData(mip, GetPixelData(mip))
- 2.87 ms – LoadRawTextureData(source.GetRawTextureData<byte>())
- 0.00 ms – Graphics.ConvertTexture(source, target)
- 0.00 ms – Graphics.CopyTexture(nonReadableSource, target)

These are the median CPU times for PlasmaTexture.UpdateTestCase with a texture size of 512.

You’ll see that SetPixels32 is unexpectedly slower than SetPixels.
    This is due to having to take the float-based Color result from the plasma pixel calculation and convert it to the byte-based Color32 struct.
    SetPixels32NoConversion skips this conversion and just assigns a default value to the Color32 output array, resulting in better performance than SetPixels.
    In order to beat the performance of SetPixels and the underlying color conversion performed by Unity, it is necessary to rework the pixel calculation method itself to directly output a Color32 value.
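A hedged sketch of that reworking (the plasma formula is a toy stand-in; assumes an RGBA32, readable texture): the calculation writes Color32 values directly into a buffer handed to SetPixelData, so no per-pixel Color-to-Color32 conversion happens.

```csharp
using Unity.Collections;
using UnityEngine;

public class PlasmaSketch : MonoBehaviour
{
    public Texture2D texture; // assumed RGBA32 format and readable

    void Update()
    {
        int size = texture.width;
        var pixels = new NativeArray<Color32>(size * size, Allocator.Temp);

        float t = Time.time;
        for (int y = 0; y < size; y++)
        {
            for (int x = 0; x < size; x++)
            {
                // Toy plasma: compute the byte value directly, avoiding a
                // float Color -> Color32 conversion per pixel.
                byte v = (byte)(127.5f * (1f + Mathf.Sin(0.1f * x + 0.1f * y + t)));
                pixels[y * size + x] = new Color32(v, v, (byte)(255 - v), 255);
            }
        }

        texture.SetPixelData(pixels, mipLevel: 0);
        texture.Apply(updateMipmaps: false);
        pixels.Dispose();
    }
}
```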
A simple implementation using SetPixelData is almost guaranteed to give better results than careful SetPixels and SetPixels32 approaches.

Results:
- 126.95 ms – SetPixel
- 113.16 ms – SetPixels32
- 88.96 ms – SetPixels
- 86.30 ms – SetPixels32NoConversion
- 16.91 ms – SetPixelDataBurst
- 4.27 ms – SetPixelDataBurstParallel

These are the Editor GPU times for TransferGPUTexture.UpdateTestCase with a texture size of 8,196:
- Blit – 1.584 ms
- CopyTexture – 0.882 ms

You can access pixel data in various ways.
    However, not all methods support every format, texture type, or use case, and some take longer to execute than others.
This section goes over recommended methods, and the following section covers those to use with caution.

CopyTexture is the fastest way to transfer GPU data from one texture into another.
    It does not perform any format conversion.
    You can partially copy data by specifying a source and target position, in addition to the width and height of the region.
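For instance, a partial region copy might look like this sketch (positions and sizes are arbitrary; both textures are assumed to share the same format, with the regions in bounds):

```csharp
using UnityEngine;

public static class CopyExample
{
    // Copies a 64x64 region from the bottom-left of src into dst at (32, 32).
    public static void CopyRegion(Texture2D src, Texture2D dst)
    {
        Graphics.CopyTexture(
            src, 0, 0,      // source texture, element, mip
            0, 0, 64, 64,   // source region: x, y, width, height
            dst, 0, 0,      // destination texture, element, mip
            32, 32);        // destination position: x, y
    }
}
```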
If both textures are readable, the copy operation will also be performed on the CPU data, bringing the total cost of this method closer to that of a CPU-only copy using SetPixelData with the result of GetPixelData from a source texture.

Blit is a fast and powerful method of transferring GPU data into a RenderTexture using a shader.
    In practice, this has to set up the graphics pipeline API state to render to the target RenderTexture.
    It comes with a small resolution-independent setup cost compared to CopyTexture.
    The default Blit shader used by the method takes an input texture and renders it into the target RenderTexture.
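Both forms can be sketched as follows (the effect material is hypothetical, standing in for any texture-to-texture shader):

```csharp
using UnityEngine;

public class BlitExample : MonoBehaviour
{
    public Texture source;
    public RenderTexture target;
    public Material effectMaterial; // hypothetical, e.g. a blur or color-grade shader

    void RunBlit()
    {
        // Default copy: renders source into target using Unity's blit shader.
        Graphics.Blit(source, target);

        // With a custom material: the material's shader defines the transform.
        Graphics.Blit(source, target, effectMaterial);
    }
}
```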
By providing a custom material or shader, you can define complex texture-to-texture rendering processes.

GetPixelData and SetPixelData (along with GetRawTextureData) are the fastest methods to use when only touching CPU data.
    Both methods require you to provide a struct type as a template parameter used to reinterpret the data.
The methods themselves only need this struct to derive the correct size, so you can just use byte if you don’t want to define a custom struct to represent the texture’s format.

When accessing individual pixels, it’s a good idea to define a custom struct with some utility methods for ease of use.
For example, an R5G5B5A1 format struct could be made up out of a ushort data member and a few get/set methods to access the individual channels as bytes; the corresponding property setters follow the same pattern as the getters.

SetPixelData can be used to copy a full mip level of data into the target texture.
    GetPixelData will return a NativeArray that actually points to one mip level of Unity’s internal CPU texture data.
    This allows you to directly read/write that data without the need for any copy operations.
    The catch is that the NativeArray returned by GetPixelData is only guaranteed to be valid until the user code calling GetPixelData returns control to Unity, such as when MonoBehaviour.Update returns.
Instead of storing the result of GetPixelData between frames, you have to get the correct NativeArray from GetPixelData for every frame you want to access this data from.

The Apply method returns after the CPU data has been uploaded to the GPU.
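Putting the custom-struct idea and GetPixelData together, a sketch might look like this (the bit layout of the struct is an assumption from the format name; only the red channel accessor is shown):

```csharp
using UnityEngine;

// Sketch of a struct reinterpreting R5G5B5A1 pixel data.
// Check the format's actual channel order on your target platform.
public struct PixelR5G5B5A1
{
    public ushort data;

    public byte Red
    {
        get { return (byte)((data >> 11) & 0x1F); }
        set { data = (ushort)((data & ~(0x1F << 11)) | ((value & 0x1F) << 11)); }
    }
    // Green, blue, and alpha accessors would follow the same pattern.
}

public class PixelDataExample : MonoBehaviour
{
    public Texture2D texture; // assumed readable, with a matching 16-bit format

    void Update()
    {
        // Fetch the NativeArray view each frame; it is only valid until
        // control returns to Unity.
        var pixels = texture.GetPixelData<PixelR5G5B5A1>(mipLevel: 0);

        for (int i = 0; i < pixels.Length; i++)
        {
            var p = pixels[i];
            p.Red = 0x1F; // maximum 5-bit red
            pixels[i] = p;
        }

        texture.Apply(); // upload the modified CPU data to the GPU
    }
}
```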
The makeNoLongerReadable parameter should be set to “true” where possible to free up the memory of the CPU data after the upload.

The RequestIntoNativeArray and RequestIntoNativeSlice methods asynchronously download GPU data from the specified Texture into (a slice of) a NativeArray provided by the user. Calling the methods will return a request handle that can indicate if the requested data is done downloading.
    Support is limited to only a handful of formats, so use SystemInfo.IsFormatSupported with FormatUsage.ReadPixels to check format support.
    The AsyncGPUReadback class also has a Request method, which allocates a NativeArray for you.
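A minimal sketch of an asynchronous readback with the format check described above:

```csharp
using Unity.Collections;
using UnityEngine;
using UnityEngine.Experimental.Rendering;
using UnityEngine.Rendering;

public class ReadbackExample : MonoBehaviour
{
    public RenderTexture source;

    void StartReadback()
    {
        // Check format support before requesting a readback.
        if (!SystemInfo.IsFormatSupported(source.graphicsFormat, FormatUsage.ReadPixels))
            return;

        // Request allocates a NativeArray for you; for repeated readbacks,
        // reuse your own array via RequestIntoNativeArray instead.
        AsyncGPUReadback.Request(source, 0, OnComplete);
    }

    void OnComplete(AsyncGPUReadbackRequest request)
    {
        if (request.hasError)
            return;

        NativeArray<byte> data = request.GetData<byte>();
        Debug.Log($"Read back {data.Length} bytes");
    }
}
```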
If you need to repeat this operation, you will get better performance if you allocate a NativeArray that you reuse instead.

There are a number of methods that should be used with caution due to potentially significant performance impacts.
Let’s take a look at them in more detail.

These methods perform pixel format conversions of varying complexity.
    The Pixels32 variants are the most performant of the bunch, but even they can still perform format conversions if the underlying format of the texture doesn’t perfectly match the Color32 struct.
When using the following methods, it’s best to keep in mind that their performance impact significantly increases by varying degrees as the number of pixels grows:
- GetPixel
- GetPixelBilinear
- SetPixel
- GetPixels
- SetPixels
- GetPixels32
- SetPixels32

GetRawTextureData and LoadRawTextureData are Texture2D-only methods that work with arrays containing the raw pixel data of all mip levels, one after another.
    The layout goes from largest to smallest mip, with each mip being “height” amount of “width” pixel values.
    These functions are quick to give CPU data access.
    GetRawTextureData does have a “gotcha” where the non-templated variant returns a copy of the data.
    This is a bit slower, and does not allow direct manipulation of the underlying buffer managed by Unity.
GetPixelData does not have this quirk and can only return a NativeArray pointing to the underlying buffer that remains valid until user code returns control to Unity.

ConvertTexture is a way to transfer the GPU data from one texture to another, where the source and destination textures don’t have the same size or format.
    This conversion process is as efficient as it gets under the circumstances, but it’s not cheap.
This is the internal process:
1. Allocate a temporary RenderTexture matching the destination texture.
2. Perform a Blit from the source texture to the temporary RenderTexture.
3. Copy the Blit result from the temporary RenderTexture to the destination texture.

Answer the following questions to help determine if this method is suited to your use case:
- Do I need to perform this conversion?
- Can I make sure the source texture is created in the desired size/format for the target platform at import time?
- Can I change my processes to use the same formats, allowing the result of one process to be directly used as an input for another process?
- Can I create and use a RenderTexture as the destination instead? Doing so would reduce the conversion process to a single Blit to the destination RenderTexture.

The ReadPixels method synchronously downloads GPU data from the active RenderTexture (RenderTexture.active) into a Texture2D’s CPU data.
    This enables you to store or process the output from a rendering operation.
Support is limited to only a handful of formats, so use SystemInfo.IsFormatSupported with FormatUsage.ReadPixels to check format support.

Downloading data back from the GPU is a slow process.
    Before it can begin, ReadPixels has to wait for the GPU to complete all preceding work.
    It’s best to avoid this method as it will not return until the requested data is available, which will slow down performance.
    Usability is also a concern because you need GPU data to be in a RenderTexture, which has to be configured as the currently active one.
Both usability and performance are better when using the AsyncGPUReadback methods discussed earlier.

The ImageConversion class has methods to convert between Texture2D and several image file formats.
    LoadImage is able to load JPG, PNG, or EXR (since 2023.1) data into a Texture2D and upload this to the GPU for you.
    The loaded pixel data can be compressed on the fly depending on Texture2D’s original format.
Other methods can convert a Texture2D or pixel data array to an array of JPG, PNG, TGA, or EXR data.

These methods are not particularly fast, but can be useful if your project needs to pass pixel data around through common image file formats.
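The avatar use case might be sketched as follows (the file path handling is hypothetical):

```csharp
using System.IO;
using UnityEngine;

public static class AvatarLoader
{
    // Loads a PNG or JPG from disk into a texture.
    public static Texture2D LoadFromDisk(string path)
    {
        byte[] bytes = File.ReadAllBytes(path);

        // The initial size/format are placeholders; LoadImage overwrites them
        // and also uploads the result to the GPU.
        var texture = new Texture2D(2, 2);
        if (!ImageConversion.LoadImage(texture, bytes))
            return null; // not a valid image

        return texture;
    }

    // Encodes a texture back to PNG bytes for storage or network transfer.
    public static byte[] ToPng(Texture2D texture)
    {
        return ImageConversion.EncodeToPNG(texture);
    }
}
```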
Typical use cases include loading a user’s avatar from disk and sharing it with other players over a network.

There are many resources available to learn more about graphics optimization, related topics, and best practices in Unity.
The graphics performance and profiling section of the documentation is a good starting point. You can also check out several technical e-books for advanced users, including Ultimate guide to profiling Unity games, Optimize your mobile game performance, and Optimize your console and PC game performance. You’ll find many more advanced best practices on the Unity how-to hub.

Here’s a summary of the key points to remember:
- When manipulating textures, the first step is to assess which operations can be performed on the GPU for optimal performance. The existing CPU/GPU workload and size of the input/output data are key factors to consider.
- Using low-level functions like GetRawTextureData to implement a specific conversion path where necessary can offer improved performance over the more convenient methods that perform (often redundant) copies and conversions.
- More complex operations, such as large readbacks and pixel calculations, are only viable on the CPU when performed asynchronously or in parallel. The combination of Burst and the job system allows C# to perform certain operations that would otherwise only be performant on a GPU.
- Profile frequently: There are many pitfalls you can encounter during development, from unexpected and unnecessary conversions to stalls from waiting on another process. Some performance issues will only start surfacing as the game scales up and certain parts of your code see heavier usage. The example project demonstrates how seemingly small increases in texture resolution can cause certain APIs to become a performance issue.

Share your feedback on texture data with us in the Scripting or General Graphics forums.
    Be sure to watch for new technical blogs from other Unity developers as part of the ongoing Tech from the Trenches series.
    Accessing texture data efficiently
    Learn about the benefits and trade-offs of different ways to access the underlying texture pixel data in your Unity project.

    Pixel data describes the color of individual pixels in a texture. Unity provides methods that enable you to read from or write to pixel data with C# scripts. You might use these methods to duplicate or update a texture (for example, adding a detail to a player’s profile picture), or use the texture’s data in a particular way, like reading a texture that represents a world map to determine where to place an object.

    There are several ways of writing code that reads from or writes to pixel data. The one you choose depends on what you plan to do with the data and the performance needs of your project. This blog and the accompanying sample project are intended to help you navigate the available API and common performance pitfalls. An understanding of both will help you write a performant solution or address performance bottlenecks as they appear.

    For most types of textures, Unity stores two copies of the pixel data: one in GPU memory, which is required for rendering, and the other in CPU memory. This copy is optional and allows you to read from, write to, and manipulate pixel data on the CPU. A texture with a copy of its pixel data stored in CPU memory is called a readable texture. One detail to note is that RenderTexture exists only in GPU memory.

    The memory available to the CPU differs from that of the GPU on most hardware. Some devices have a form of partially shared memory, but for this blog we will assume the classic PC configuration, where the CPU only has direct access to the RAM plugged into the motherboard and the GPU relies on its own video RAM (VRAM). Any data transferred between these different environments has to pass through the PCI bus, which is slower than transferring data within the same type of memory.
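    As a minimal sketch of these two copies in practice, the following creates a readable Texture2D from script, writes to the CPU copy, and then uploads it across the bus. (This is an illustrative example, not code from the sample project.)

    ```csharp
    using UnityEngine;

    public class PixelUploadExample : MonoBehaviour
    {
        void Start()
        {
            // Textures created from script are readable: they keep a CPU copy
            // of their pixel data alongside the GPU copy used for rendering.
            var tex = new Texture2D(64, 64, TextureFormat.RGBA32, false);

            // These writes only touch the CPU copy in system RAM.
            for (int y = 0; y < tex.height; y++)
                for (int x = 0; x < tex.width; x++)
                    tex.SetPixel(x, y, Color.red);

            // Apply uploads the CPU copy across the bus into GPU memory;
            // every call pays that transfer cost.
            tex.Apply();
        }
    }
    ```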
    Due to these costs, you should try to limit the amount of data transferred each frame.

    Sampling textures in shaders is the most common GPU pixel data operation. To alter this data, you can copy between textures or render into a texture using a shader. All these operations can be performed quickly by the GPU.

    In some cases, it may be preferable to manipulate your texture data on the CPU, which offers more flexibility in how data is accessed. CPU pixel data operations act only on the CPU copy of the data, so they require readable textures. If you want to sample the updated pixel data in a shader, you must first copy it from the CPU to the GPU by calling Apply. Depending on the texture involved and the complexity of the operations, it may be faster and easier to stick to CPU operations (for example, when copying several 2D textures into a Texture2DArray asset).

    The Unity API provides several methods to access or process texture data. Some operations act on both the GPU and CPU copies if both are present. As a result, the performance of these methods varies depending on whether the textures are readable. Different methods can be used to achieve the same results, but each method has its own performance and ease-of-use characteristics.

    Answer the following questions to determine the optimal solution:

    - Can the GPU perform your calculations faster than the CPU?
    - What level of pressure is the process putting on the texture caches? (For example, sampling many high-resolution textures without using mipmaps is likely to slow down the GPU.)
    - Does the process require a random write texture, or can it output to a color or depth attachment? (Writing to random pixels on a texture requires frequent cache flushes that slow down the process.)
    - Is my project already GPU bottlenecked?
    Even if the GPU is able to execute a process faster than the CPU, can the GPU afford to take on more work without exceeding its frame time budget? If both the GPU and the CPU main thread are near their frame time limit, the slow part of a process could perhaps be performed by CPU worker threads.

    - How much data needs to be uploaded to or downloaded from the GPU to calculate or process the results? Could a shader or C# job pack the data into a smaller format to reduce the bandwidth required? Could a RenderTexture be downsampled into a smaller resolution version that is downloaded instead?
    - Can the process be performed in chunks? (If a lot of data needs to be processed at once, there’s a risk of the GPU not having enough memory for it.)
    - How quickly are the results required? Can calculations or data transfers be performed asynchronously and handled later? (If too much work is done in a single frame, there is a risk that the GPU won’t have enough time to render the actual graphics for each frame.)

    By default, texture assets that you import into your project are nonreadable, while textures created from a script are readable. Readable textures use twice as much memory as nonreadable textures because they need a copy of their pixel data in CPU RAM. You should only make a texture readable when you need to, and make it nonreadable when you are done working with the data on the CPU.

    To see whether a texture asset in your project is readable and make edits, use the Read/Write Enabled option in Texture Import Settings, or the TextureImporter.isReadable API. To make a texture nonreadable, call its Apply method with the makeNoLongerReadable parameter set to true (for example, Texture2D.Apply or Cubemap.Apply). A nonreadable texture can’t be made readable again.

    All textures are readable to the Editor in Edit and Play modes. Calling Apply to make the texture nonreadable will update the value of isReadable, preventing you from accessing the CPU data.
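    The readability workflow above can be sketched as follows; this is a hedged illustration assuming a Texture2D asset assigned in the Inspector.

    ```csharp
    using UnityEngine;

    public class ReadabilityExample : MonoBehaviour
    {
        public Texture2D source; // assign a texture asset in the Inspector

        void Start()
        {
            // In a player build, reading CPU data requires Read/Write Enabled
            // in the import settings (isReadable == true).
            if (!source.isReadable)
            {
                Debug.LogWarning(source.name + " has no CPU copy to read.");
                return;
            }

            // ... work with the CPU pixel data here ...

            // Once done on the CPU, free the CPU copy. The second argument is
            // makeNoLongerReadable; this cannot be undone at runtime.
            source.Apply(false, true);
        }
    }
    ```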
    However, some Unity processes will function as if the texture is readable, because they see that the internal CPU data is valid.

    Performance differs greatly across the various ways of accessing texture data, especially on the CPU (although less so at lower resolutions). The Unity Texture Access API examples repository on GitHub contains a number of examples showing performance differences between various APIs that allow access to, or manipulation of, texture data. The UI only shows the main thread CPU timings. In some cases, DOTS features like Burst and the job system are used to maximize performance.

    Here are the examples included in the GitHub repository:

    - SimpleCopy: Copying all pixels from one texture to another
    - PlasmaTexture: A plasma texture updated on the CPU per frame
    - TransferGPUTexture: Transferring (copying to a different size or format) all pixels on the GPU from a texture to a RenderTexture

    Listed below are performance measurements taken from the examples on GitHub. These numbers are used to support the recommendations that follow. The measurements are from a player build on a system with a 3.7 GHz 8-core Xeon® W-2145 CPU and an RTX 2080.

    These are the median CPU times for SimpleCopy.UpdateTestCase with a texture size of 2,048. Note that the Graphics methods complete nearly instantly on the main thread because they simply push work onto the RenderThread, which is later executed by the GPU.
    Their results will be ready when the next frame is being rendered.

    Results:

    - 1,326 ms – foreach(mip) for(x in width) for(y in height) SetPixel(x, y, GetPixel(x, y, mip), mip)
    - 32.14 ms – foreach(mip) SetPixels(source.GetPixels(mip), mip)
    - 6.96 ms – foreach(mip) SetPixels32(source.GetPixels32(mip), mip)
    - 6.74 ms – LoadRawTextureData(source.GetRawTextureData())
    - 3.54 ms – Graphics.CopyTexture(readableSource, readableTarget)
    - 2.87 ms – foreach(mip) SetPixelData(mip, GetPixelData(mip))
    - 2.87 ms – LoadRawTextureData(source.GetRawTextureData<byte>())
    - 0.00 ms – Graphics.ConvertTexture(source, target)
    - 0.00 ms – Graphics.CopyTexture(nonReadableSource, target)

    These are the median CPU times for PlasmaTexture.UpdateTestCase with a texture size of 512. You’ll see that SetPixels32 is unexpectedly slower than SetPixels. This is due to having to take the float-based Color result from the plasma pixel calculation and convert it to the byte-based Color32 struct. SetPixels32NoConversion skips this conversion and just assigns a default value to the Color32 output array, resulting in better performance than SetPixels. To beat the performance of SetPixels and the underlying color conversion performed by Unity, it is necessary to rework the pixel calculation method itself to directly output a Color32 value. A simple implementation using SetPixelData is almost guaranteed to give better results than careful SetPixels and SetPixels32 approaches.

    Results:

    - 126.95 ms – SetPixel
    - 113.16 ms – SetPixels32
    - 88.96 ms – SetPixels
    - 86.30 ms – SetPixels32NoConversion
    - 16.91 ms – SetPixelDataBurst
    - 4.27 ms – SetPixelDataBurstParallel

    These are the Editor GPU times for TransferGPUTexture.UpdateTestCase with a texture size of 8,192:

    - Blit – 1.584 ms
    - CopyTexture – 0.882 ms

    You can access pixel data in various ways. However, not all methods support every format, texture type, or use case, and some take longer to execute than others.
    This section goes over recommended methods, and the following section covers those to use with caution.

    CopyTexture is the fastest way to transfer GPU data from one texture into another. It does not perform any format conversion. You can partially copy data by specifying a source and target position, in addition to the width and height of the region. If both textures are readable, the copy operation will also be performed on the CPU data, bringing the total cost of this method closer to that of a CPU-only copy using SetPixelData with the result of GetPixelData from a source texture.

    Blit is a fast and powerful method of transferring GPU data into a RenderTexture using a shader. In practice, this has to set up the graphics pipeline API state to render to the target RenderTexture. It comes with a small resolution-independent setup cost compared to CopyTexture. The default Blit shader used by the method takes an input texture and renders it into the target RenderTexture. By providing a custom material or shader, you can define complex texture-to-texture rendering processes.

    GetPixelData and SetPixelData (along with GetRawTextureData) are the fastest methods to use when only touching CPU data. Both methods require you to provide a struct type as a template parameter used to reinterpret the data. The methods themselves only need this struct to derive the correct size, so you can just use byte if you don’t want to define a custom struct to represent the texture’s format.

    When accessing individual pixels, it’s a good idea to define a custom struct with some utility methods for ease of use.
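    A sketch of such a struct might look like this; the name, bit layout, and helper methods are my own illustration (assuming red occupies the top five bits), not a listing from the sample project:

    ```csharp
    // Hypothetical pixel struct for an R5G5B5A1 texture format: five bits
    // each for red, green, and blue, and the lowest bit for alpha.
    public struct PixelR5G5B5A1
    {
        public ushort data;

        // Expose each channel of the packed ushort as a byte.
        public byte R => (byte)((data >> 11) & 0x1F);
        public byte G => (byte)((data >> 6) & 0x1F);
        public byte B => (byte)((data >> 1) & 0x1F);
        public byte A => (byte)(data & 0x01);

        // Mask out the old channel bits, then shift the new value in.
        public void SetR(byte r)
            => data = (ushort)((data & ~(0x1F << 11)) | ((r & 0x1F) << 11));
        // SetG, SetB, and SetA follow the same mask-and-shift pattern.
    }
    ```

    With a struct like this, GetPixelData<PixelR5G5B5A1>(0) would reinterpret the mip 0 data as an array of such pixels.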
    For example, an R5G5B5A1 format struct could be made up of a ushort data member and a few get/set methods to access the individual channels as bytes; in such an implementation, the property setters mirror the getters and can be omitted for brevity.

    SetPixelData can be used to copy a full mip level of data into the target texture. GetPixelData will return a NativeArray that actually points to one mip level of Unity’s internal CPU texture data. This allows you to directly read and write that data without the need for any copy operations. The catch is that the NativeArray returned by GetPixelData is only guaranteed to be valid until the user code calling GetPixelData returns control to Unity, such as when MonoBehaviour.Update returns. Instead of storing the result of GetPixelData between frames, you have to get a fresh NativeArray from GetPixelData for every frame you want to access this data.

    The Apply method returns after the CPU data has been uploaded to the GPU. The makeNoLongerReadable parameter should be set to true where possible to free up the memory of the CPU data after the upload.

    The RequestIntoNativeArray and RequestIntoNativeSlice methods asynchronously download GPU data from the specified Texture into (a slice of) a NativeArray provided by the user. Calling these methods returns a request handle that can indicate whether the requested data is done downloading. Support is limited to a handful of formats, so use SystemInfo.IsFormatSupported with FormatUsage.ReadPixels to check format support. The AsyncGPUReadback class also has a Request method, which allocates a NativeArray for you. If you need to repeat this operation, you will get better performance by allocating a NativeArray that you reuse instead.

    There are a number of methods that should be used with caution due to potentially significant performance impacts.
    Let’s take a look at them in more detail.

    These methods perform pixel format conversions of varying complexity. The Pixels32 variants are the most performant of the bunch, but even they can still perform format conversions if the underlying format of the texture doesn’t perfectly match the Color32 struct. When using the following methods, keep in mind that their performance impact increases by varying degrees as the number of pixels grows:

    - GetPixel
    - GetPixelBilinear
    - SetPixel
    - GetPixels
    - SetPixels
    - GetPixels32
    - SetPixels32

    GetRawTextureData and LoadRawTextureData are Texture2D-only methods that work with arrays containing the raw pixel data of all mip levels, one after another. The layout goes from the largest to the smallest mip, with each mip consisting of “height” rows of “width” pixel values. These functions are quick to give CPU data access. GetRawTextureData does have a “gotcha”: the non-templated variant returns a copy of the data. This is a bit slower and does not allow direct manipulation of the underlying buffer managed by Unity. GetPixelData does not have this quirk and can only return a NativeArray pointing to the underlying buffer, which remains valid until user code returns control to Unity.

    ConvertTexture is a way to transfer the GPU data from one texture to another where the source and destination textures don’t have the same size or format. This conversion process is as efficient as it gets under the circumstances, but it’s not cheap.
    This is the internal process:

    1. Allocate a temporary RenderTexture matching the destination texture.
    2. Perform a Blit from the source texture to the temporary RenderTexture.
    3. Copy the Blit result from the temporary RenderTexture to the destination texture.

    Answer the following questions to help determine if this method is suited to your use case:

    - Do I need to perform this conversion? Can I make sure the source texture is created in the desired size/format for the target platform at import time?
    - Can I change my processes to use the same formats, allowing the result of one process to be directly used as an input for another process?
    - Can I create and use a RenderTexture as the destination instead? Doing so would reduce the conversion process to a single Blit to the destination RenderTexture.

    The ReadPixels method synchronously downloads GPU data from the active RenderTexture (RenderTexture.active) into a Texture2D’s CPU data. This enables you to store or process the output from a rendering operation. Support is limited to a handful of formats, so use SystemInfo.IsFormatSupported with FormatUsage.ReadPixels to check format support.

    Downloading data back from the GPU is a slow process. Before it can begin, ReadPixels has to wait for the GPU to complete all preceding work. It’s best to avoid this method, as it will not return until the requested data is available, which slows down performance. Usability is also a concern, because the GPU data needs to be in a RenderTexture that is configured as the currently active one. Both usability and performance are better with the AsyncGPUReadback methods discussed earlier.

    The ImageConversion class has methods to convert between Texture2D and several image file formats. LoadImage is able to load JPG, PNG, or EXR (since 2023.1) data into a Texture2D and upload it to the GPU for you. The loaded pixel data can be compressed on the fly depending on the Texture2D’s original format.
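    For instance, decoding an image file into a texture with LoadImage might look like the following sketch; the file path is a hypothetical example.

    ```csharp
    using System.IO;
    using UnityEngine;

    public class AvatarLoader : MonoBehaviour
    {
        void Start()
        {
            // Hypothetical path; LoadImage detects JPG or PNG from the bytes.
            string path = Path.Combine(Application.persistentDataPath, "avatar.png");
            byte[] fileData = File.ReadAllBytes(path);

            // The initial size and format are replaced by the decoded image.
            var tex = new Texture2D(2, 2);
            if (!ImageConversion.LoadImage(tex, fileData))
                Debug.LogError("Failed to decode image data.");
        }
    }
    ```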
    Other methods can convert a Texture2D or pixel data array to an array of JPG, PNG, TGA, or EXR data. These methods are not particularly fast, but they can be useful if your project needs to pass pixel data around through common image file formats. Typical use cases include loading a user’s avatar from disk and sharing it with other players over a network.

    There are many resources available to learn more about graphics optimization, related topics, and best practices in Unity. The graphics performance and profiling section of the documentation is a good starting point. You can also check out several technical e-books for advanced users, including Ultimate guide to profiling Unity games, Optimize your mobile game performance, and Optimize your console and PC game performance. You’ll find many more advanced best practices on the Unity how-to hub.

    Here’s a summary of the key points to remember:

    - When manipulating textures, the first step is to assess which operations can be performed on the GPU for optimal performance. The existing CPU/GPU workload and the size of the input/output data are key factors to consider.
    - Using low-level functions like GetRawTextureData to implement a specific conversion path where necessary can offer improved performance over the more convenient methods that perform (often redundant) copies and conversions.
    - More complex operations, such as large readbacks and pixel calculations, are only viable on the CPU when performed asynchronously or in parallel. The combination of Burst and the job system allows C# to perform certain operations that would otherwise only be performant on a GPU.
    - Profile frequently: There are many pitfalls you can encounter during development, from unexpected and unnecessary conversions to stalls from waiting on another process. Some performance issues will only start surfacing as the game scales up and certain parts of your code see heavier usage.
The example project demonstrates how seemingly small increases in texture resolution can cause certain APIs to become a performance issue.

Share your feedback on texture data with us in the Scripting or General Graphics forums. Be sure to watch for new technical blogs from other Unity developers as part of the ongoing Tech from the Trenches series.

Source: https://unity.com/blog/engine-platform/accessing-texture-data-efficiently
    UNITY.COM
    Accessing texture data efficiently
    Learn about the benefits and trade-offs of different ways to access the underlying texture pixel data in your Unity project.Pixel data describes the color of individual pixels in a texture. Unity provides methods that enable you to read from or write to pixel data with C# scripts.You might use these methods to duplicate or update a texture (for example, adding a detail to a player’s profile picture), or use the texture’s data in a particular way, like reading a texture that represents a world map to determine where to place an object.There are several ways of writing code that reads from or writes to pixel data. The one you choose depends on what you plan to do with the data and the performance needs of your project.This blog and the accompanying sample project are intended to help you navigate the available API and common performance pitfalls. An understanding of both will help you write a performant solution or address performance bottlenecks as they appear.For most types of textures, Unity stores two copies of the pixel data: one in GPU memory, which is required for rendering, and the other in CPU memory. This copy is optional and allows you to read from, write to, and manipulate pixel data on the CPU. A texture with a copy of its pixel data stored in CPU memory is called a readable texture. One detail to note is that RenderTexture exists only in GPU memory.The memory available to the CPU differs from that of the GPU on most hardware. Some devices have a form of partially shared memory, but for this blog we will assume the classic PC configuration where the CPU only has direct access to the RAM plugged into the motherboard and the GPU relies on its own video RAM (VRAM). Any data transferred between these different environments has to pass through the PCI bus, which is slower than transferring data within the same type of memory. 
Due to these costs, you should try to limit the amount of data transferred each frame.Sampling textures in shaders is the most common GPU pixel data operation. To alter this data, you can copy between textures or render into a texture using a shader. All these operations can be performed quickly by the GPU.In some cases, it may be preferable to manipulate your texture data on the CPU, which offers more flexibility in how data is accessed. CPU pixel data operations act only on the CPU copy of the data, so require readable textures. If you want to sample the updated pixel data in a shader, you must first copy it from the CPU to the GPU by calling Apply. Depending on the texture involved and the complexity of the operations, it may be faster and easier to stick to CPU operations (for example, when copying several 2D textures into a Texture2DArray asset).The Unity API provides several methods to access or process texture data. Some operations act on both the GPU and CPU copy if both are present. As a result, the performance of these methods varies depending on whether the textures are readable. Different methods can be used to achieve the same results, but each method has its own performance and ease-of-use characteristics.Answer the following questions to determine the optimal solution:Can the GPU perform your calculations faster than the CPU?What level of pressure is the process putting on the texture caches? (For example, sampling many high-resolution textures without using mipmaps is likely to slow down the GPU.)Does the process require a random write texture, or can it output to a color or depth attachment? (Writing to random pixels on a texture requires frequent cache flushes that slow down the process.)Is my project already GPU bottlenecked? 
Even if the GPU is able to execute a process faster than the CPU, can the GPU afford to take on more work without exceeding its frame time budget?If both the GPU and the CPU main thread are near their frame time limit, then perhaps the slow part of a process could be performed by CPU worker threads.How much data needs to be uploaded to or downloaded from the GPU to calculate or process the results?Could a shader or C# job pack the data into a smaller format to reduce the bandwidth required?Could a RenderTexture be downsampled into a smaller resolution version that is downloaded instead?Can the process be performed in chunks? (If a lot of data needs to be processed at once, there’s a risk of the GPU not having enough memory for it.)How quickly are the results required? Can calculations or data transfers be performed asynchronously and handled later? (If too much work is done in a single frame, there is a risk that the GPU won’t have enough time to render the actual graphics for each frame.)By default, texture assets that you import into your project are nonreadable, while textures created from a script are readable.Readable textures use twice as much memory as nonreadable textures because they need to have a copy of their pixel data in CPU RAM. You should only make a texture readable when you need to, and make them nonreadable when you are done working with the data on the CPU.To see if a texture asset in your project is readable and make edits, use the Read/Write Enabled option in Texture Import Settings, or the TextureImporter.isReadable API.To make a texture nonreadable, call its Apply method with the makeNoLongerReadable parameter set to “true” (for example, Texture2D.Apply or Cubemap.Apply). A nonreadable texture can’t be made readable again.All textures are readable to the Editor in Edit and Play modes. Calling Apply to make the texture nonreadable will update the value of isReadable, preventing you from accessing the CPU data. 
However, some Unity processes will function as if the texture is readable because they see that the internal CPU data is valid.Performance differs greatly across the various ways of accessing texture data, especially on the CPU (although less so at lower resolutions). The Unity Texture Access API examples repository on GitHub contains a number of examples showing performance differences between various APIs that allow access to, or manipulation of, texture data. The UI only shows the main thread CPU timings. In some cases, DOTS features like Burst and the job system are used to maximize performance.Here are the examples included in the GitHub repository:SimpleCopy: Copying all pixels from one texture to anotherPlasmaTexture: A plasma texture updated on the CPU per frameTransferGPUTexture: Transferring (copying to a different size or format) all pixels on the GPU from a texture to a RenderTextureListed below are performance measurements taken from the examples on GitHub. These numbers are used to support the recommendations that follow. The measurements are from a player build on a system with a 3.7 GHz 8-core Xeon® W-2145 CPU and an RTX 2080.These are the median CPU times for SimpleCopy.UpdateTestCase with a texture size of 2,048.Note that the Graphics methods complete nearly instantly on the main thread because they simply push work onto the RenderThread, which is later executed by the GPU. 
Their results will be ready when the next frame is being rendered.Results1,326 ms – foreach(mip) for(x in width) for(y in height) SetPixel(x, y, GetPixel(x, y, mip), mip)32.14 ms – foreach(mip) SetPixels(source.GetPixels(mip), mip)6.96 ms – foreach(mip) SetPixels32(source.GetPixels32(mip), mip)6.74 ms – LoadRawTextureData(source.GetRawTextureData())3.54 ms – Graphics.CopyTexture(readableSource, readableTarget)2.87 ms – foreach(mip) SetPixelData(mip, GetPixelData(mip))2.87 ms – LoadRawTextureData(source.GetRawTextureData())0.00 ms – Graphics.ConvertTexture(source, target)0.00 ms – Graphics.CopyTexture(nonReadableSource, target)These are the median CPU times for PlasmaTexture.UpdateTestCase with a texture size of 512.You’ll see that SetPixels32 is unexpectedly slower than SetPixels. This is due to having to take the float-based Color result from the plasma pixel calculation and convert it to the byte-based Color32 struct. SetPixels32NoConversion skips this conversion and just assigns a default value to the Color32 output array, resulting in better performance than SetPixels. In order to beat the performance of SetPixels and the underlying color conversion performed by Unity, it is necessary to rework the pixel calculation method itself to directly output a Color32 value. A simple implementation using SetPixelData is almost guaranteed to give better results than careful SetPixels and SetPixels32 approaches.Results126.95 ms – SetPixel113.16 ms – SetPixels3288.96 ms – SetPixels86.30 ms – SetPixels32NoConversion16.91 ms – SetPixelDataBurst4.27 ms – SetPixelDataBurstParallelThese are the Editor GPU times for TransferGPUTexture.UpdateTestCase with a texture size of 8,196:Blit – 1.584 msCopyTexture – 0.882 msYou can access pixel data in various ways. However, not all methods support every format, texture type, or use case, and some take longer to execute than others. 
This section goes over recommended methods, and the following section covers those to use with caution.CopyTexture is the fastest way to transfer GPU data from one texture into another. It does not perform any format conversion. You can partially copy data by specifying a source and target position, in addition to the width and height of the region. If both textures are readable, the copy operation will also be performed on the CPU data, bringing the total cost of this method closer to that of a CPU-only copy using SetPixelData with the result of GetPixelData from a source texture.Blit is a fast and powerful method of transferring GPU data into a RenderTexture using a shader. In practice, this has to set up the graphics pipeline API state to render to the target RenderTexture. It comes with a small resolution-independent setup cost compared to CopyTexture. The default Blit shader used by the method takes an input texture and renders it into the target RenderTexture. By providing a custom material or shader, you can define complex texture-to-texture rendering processes.GetPixelData and SetPixelData (along with GetRawTextureData) are the fastest methods to use when only touching CPU data. Both methods require you to provide a struct type as a template parameter used to reinterpret the data. The methods themselves only need this struct to derive the correct size, so you can just use byte if you don’t want to define a custom struct to represent the texture’s format.When accessing individual pixels, it’s a good idea to define a custom struct with some utility methods for ease of use. 
For example, an R5G5B5A1 format struct could be made up out of a ushort data member and a few get/set methods to access the individual channels as bytes.The above code is an example from an implementation of an object representing a pixel in the R5G5B5A5A1 format; the corresponding property setters are omitted for brevity.SetPixelData can be used to copy a full mip level of data into the target texture. GetPixelData will return a NativeArray that actually points to one mip level of Unity’s internal CPU texture data. This allows you to directly read/write that data without the need for any copy operations. The catch is that the NativeArray returned by GetPixelData is only guaranteed to be valid until the user code calling GetPixelData returns control to Unity, such as when MonoBehaviour.Update returns. Instead of storing the result of GetPixelData between frames, you have to get the correct NativeArray from GetPixelData for every frame you want to access this data from.The Apply method returns after the CPU data has been uploaded to the GPU. The makeNoLongerReadable parameter should be set to “true” where possible to free up the memory of the CPU data after the upload.The RequestIntoNativeArray and RequestIntoNativeSlice methods asynchronously download GPU data from the specified Texture into (a slice of) a NativeArray provided by the user.Calling the methods will return a request handle that can indicate if the requested data is done downloading. Support is limited to only a handful of formats, so use SystemInfo.IsFormatSupported with FormatUsage.ReadPixels to check format support. The AsyncGPUReadback class also has a Request method, which allocates a NativeArray for you. If you need to repeat this operation, you will get better performance if you allocate a NativeArray that you reuse instead.There are a number of methods that should be used with caution due to potentially significant performance impacts. 
Let’s take a look at them in more detail.

These methods perform pixel format conversions of varying complexity. The Pixels32 variants are the most performant of the bunch, but even they can still perform format conversions if the underlying format of the texture doesn’t perfectly match the Color32 struct. When using the following methods, keep in mind that their performance impact grows, to varying degrees, with the number of pixels:

- GetPixel
- GetPixelBilinear
- SetPixel
- GetPixels
- SetPixels
- GetPixels32
- SetPixels32

GetRawTextureData and LoadRawTextureData are Texture2D-only methods that work with arrays containing the raw pixel data of all mip levels, one after another. The layout goes from the largest to the smallest mip, with each mip consisting of “height” rows of “width” pixel values. These functions give quick access to the CPU data. GetRawTextureData does have a “gotcha”: the non-templated variant returns a copy of the data, which is a bit slower and does not allow direct manipulation of the underlying buffer managed by Unity. GetPixelData does not have this quirk; it always returns a NativeArray pointing to the underlying buffer, which remains valid only until user code returns control to Unity.

ConvertTexture is a way to transfer the GPU data from one texture to another when the source and destination textures don’t have the same size or format. This conversion process is as efficient as it gets under the circumstances, but it’s not cheap.
This is the internal process:

1. Allocate a temporary RenderTexture matching the destination texture.
2. Perform a Blit from the source texture to the temporary RenderTexture.
3. Copy the Blit result from the temporary RenderTexture to the destination texture.

Answer the following questions to help determine if this method is suited to your use case:

- Do I need to perform this conversion?
- Can I make sure the source texture is created in the desired size/format for the target platform at import time?
- Can I change my processes to use the same formats, allowing the result of one process to be directly used as an input for another process?
- Can I create and use a RenderTexture as the destination instead? Doing so would reduce the conversion process to a single Blit to the destination RenderTexture.

The ReadPixels method synchronously downloads GPU data from the active RenderTexture (RenderTexture.active) into a Texture2D’s CPU data. This enables you to store or process the output from a rendering operation. Support is limited to only a handful of formats, so use SystemInfo.IsFormatSupported with FormatUsage.ReadPixels to check format support.

Downloading data back from the GPU is a slow process. Before it can begin, ReadPixels has to wait for the GPU to complete all preceding work. It’s best to avoid this method as it will not return until the requested data is available, which will slow down performance. Usability is also a concern because you need GPU data to be in a RenderTexture, which has to be configured as the currently active one. Both usability and performance are better when using the AsyncGPUReadback methods discussed earlier.

The ImageConversion class has methods to convert between Texture2D and several image file formats. LoadImage is able to load JPG, PNG, or EXR (since 2023.1) data into a Texture2D and upload this to the GPU for you. The loaded pixel data can be compressed on the fly depending on Texture2D’s original format.
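As a quick sketch of the ImageConversion path (the helper class, method names, and file path here are our own illustration):

```csharp
using System.IO;
using UnityEngine;

public static class AvatarLoader
{
    // Loads a PNG/JPG file from disk into a new texture. LoadImage
    // replaces the texture's contents, resizes it to the image's
    // dimensions, and uploads the result to the GPU.
    public static Texture2D LoadFromDisk(string path)
    {
        byte[] fileData = File.ReadAllBytes(path); // e.g. a user's avatar
        var texture = new Texture2D(2, 2);         // size is overwritten
        if (!texture.LoadImage(fileData))
        {
            Object.Destroy(texture);
            return null;
        }
        return texture;
    }

    // Encodes a texture back to PNG bytes, e.g. for saving to disk or
    // sending over a network.
    public static byte[] SaveToPng(Texture2D texture) =>
        texture.EncodeToPNG();
}
```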
Other methods can convert a Texture2D or pixel data array to an array of JPG, PNG, TGA, or EXR data. These methods are not particularly fast, but they can be useful if your project needs to pass pixel data around through common image file formats. Typical use cases include loading a user’s avatar from disk and sharing it with other players over a network.

There are many resources available to learn more about graphics optimization, related topics, and best practices in Unity. The graphics performance and profiling section of the documentation is a good starting point. You can also check out several technical e-books for advanced users, including Ultimate guide to profiling Unity games, Optimize your mobile game performance, and Optimize your console and PC game performance. You’ll find many more advanced best practices on the Unity how-to hub.

Here’s a summary of the key points to remember:

- When manipulating textures, the first step is to assess which operations can be performed on the GPU for optimal performance. The existing CPU/GPU workload and the size of the input/output data are key factors to consider.
- Using low-level functions like GetRawTextureData to implement a specific conversion path where necessary can offer improved performance over the more convenient methods that perform (often redundant) copies and conversions.
- More complex operations, such as large readbacks and pixel calculations, are only viable on the CPU when performed asynchronously or in parallel. The combination of Burst and the job system allows C# to perform certain operations that would otherwise only be performant on a GPU.
- Profile frequently: There are many pitfalls you can encounter during development, from unexpected and unnecessary conversions to stalls from waiting on another process. Some performance issues only start surfacing as the game scales up and certain parts of your code see heavier usage.
The example project demonstrates how seemingly small increases in texture resolution can cause certain APIs to become a performance issue.

Share your feedback on texture data with us in the Scripting or General Graphics forums. Be sure to watch for new technical blogs from other Unity developers as part of the ongoing Tech from the Trenches series.
  • P&O fundamentals: busting the myths surrounding player ownership

    Play and Own (P&O) gives your users ownership over their digital assets, turning those assets into collectibles and enabling them to get even more value through secondary markets.
    For you, this could mean a new way to unlock revenue streams, acquire the right users, and build a community around your games.
    Recently, we conducted a survey to find out what gamers think about P&O.
    The answers we got back were clear - a majority of users we surveyed want player ownership and say they are even willing to pay more to have it in their games.
    Yet, there remain some sticky myths surrounding P&O that are holding many developers back from mass adoption.
    To bust these myths and create a more accurate picture of what player ownership is, below are the most common misconceptions we hear, and the truth for each.

    Myth: the tech requirements needed to enable P&O are too demanding for most developers
    Behind this myth is the idea that to integrate player ownership into your titles, you need to have a technical team that’s proficient and experienced in blockchain coding (the tech, in part, at the foundation of P&O).
    In other words, you need to know how to build a decentralized app from scratch.

    The truth: you can integrate player ownership into your games with no technical blockchain know-how
    It may have been true in the past that you needed a highly skilled technical team to integrate decentralized assets into your games, and support their management.
    But these days, there are solutions - like Astra - that do the heavy lifting for you.
    They can take care of the smart contracts - and handle wallet creation, minting your assets, publishing, and balancing the economy of your game.
    It’s a single, easy-access entry point into P&O without the need for a technical team.

    Myth: player ownership won’t work on mobile
    Most users who interact with decentralized apps and player ownership have historically done so through PC - not on mobile.
    So, the thinking goes, the infrastructure hasn’t been built to accommodate mobile users.
    On top of that, due to the decentralized nature of the ecosystem, it's harder to regulate - making it difficult to offer titles through mobile app stores.

    The truth: player ownership isn’t just mobile-friendly, it can be mobile-first
    While it may be true that in the past decentralized apps were kept mostly to PCs, that’s no longer the case.
    Thanks to new innovations enabling studios to give users the benefits of decentralization and player ownership on their mobile devices, the P&O experience isn’t only confined to desktops.
    Going one step further, there are solutions that are able to leave cryptocurrencies out of the equation and pass all transactions through the IAP mechanisms of mobile app stores - meaning your game can easily start and scale with mobile audiences.

    Myth: P&O solutions and tech have bad UX that causes users to churn
    A major problem for the ubiquity of player ownership in the past was the complexity of its tech.
    Just to enter into the world of player ownership meant that users had to have a decent understanding of blockchain technology, access to communities through Discord, and be able to store assets in a third party wallet.
    And that’s before they even get started playing the games or collecting and selling assets.
    However, things have changed since then.

    The truth: player ownership can be user-friendly, familiar, and easy to navigate
    Player ownership has had its UX growing pains - but that’s changed.
    As the technology matures, many developers have found new and better ways to offer users entry into player ownership.
    New solutions have made it easy to integrate player ownership, mint assets, and enable trading directly in your games.
    And these advancements have created a better UX for users: marketplaces, apps, wallets, and platforms are now familiar (they look and feel like apps users already have experience with).
    Users can now get ownership over their assets, collect, and trade them as easily as browsing Instagram or shopping on Amazon.

    Myth: users don’t understand the benefits of player ownership
    Some developers tell us that, despite the clear value in player ownership, they fear users are intimidated by and mistrustful of the technology.
    But, in reality, users (particularly gamers) have been finding ways to create player ownership for a long time.

    The truth: many users are already finding ways to trade and collect digital assets
    We know how a lot of players feel about player ownership.
    We asked them.
    But even without those insights, there’s a tremendous amount of proof that many users not only understand player ownership, but are already actively seeking it out and creating it for themselves.
    From Diablo to Counter-Strike, Fortnite, and many more, collectible gaming marketplaces persist and have huge followings.
    It makes sense - gamers are often collectors, and the same drive that compels them to catch every Pokémon translates directly into player ownership in games, too.
    In this context, player ownership isn’t a leap of faith - it’s the next step.
    P&O provides the same trading, collecting, and community-building that many users want, but makes it even easier to access and trust.

    Source: https://unity.com/blog/po-fundamentals-busting-the-myths-surrounding-player-ownership

  • Made with Unity Monthly: May 2023 roundup


    Unite 2023 is coming to Amsterdam, the new LTS is launching, and generative AI continues its buzz. Read on to discover what Unity creators are doing to advance development in the interim, including the latest game releases made with Unity.

    May saw plenty of exciting games created with Unity share the spotlight. To start, Riot Forge and Double Stallion Games released CONV/RGENCE: A League of Legends Story™, taking us through the streets of Zaun, while tha ltd.’s Humanity put the fate of humankind in our paws. The month also saw Tuatara Games’s hilarious Bare Butt Boxing finally become available in early access, then Plot Twist Games sent us looking for clues in The Last Case of Benedict Fox (below). Rounding out creator milestones for the month were Wishfully’s highly anticipated release of Planet of Lana and Bossa Games’s early look at Lost Skies.

    We share new game releases and milestone spotlights every Monday on the @UnityGames Twitter and @unitytechnologies Instagram. Be sure to give us a follow and support your fellow creators.

    Like every month, we were lucky to have another developer take over our Twitter channel to share their best #UnityTips. For May, @samyam_youtube shared a variety of tricks – from pixel art in Unity to simple keyboard shortcuts. Some highlights include:

    - A thread to import your pixel art with the best settings
    - Why you should use TryGetComponent instead of GetComponent
    - A guide for the new Unity Input System
    - Some quick productivity tips
    - Great learning resources and advice for learning Unity

    Other members of our community also added great tips to the conversation, including @MirzaBeig’s hack for calculating FPS and @kronnect’s useful trick for multiple object positioning. Keep tagging us and using the #UnityTips hashtag to share your expertise with the community.

    We continue to be stunned by what Unity creators make week to week, and you certainly kept the amazing projects coming in May.
    If we missed something that you meant to tag us in, be sure to use the #MadeWithUnity hashtag next time. On Twitter, @EvaBalikova was busy creating some beautiful embroidered art (see above), and @DevFatigued’s little caterpillar friend was on a jumping journey. Meanwhile on Instagram, @ShimpleShrimp was in full focus underwater, and Mr. Mustard Games’s (@MrMustardGames on Twitter) robot couldn’t resist smashing some boxes. Finally, @kng_ghidra traveled through different dimensions, and we ended the month with chill skateboarding vibes from @stokedslothinteractive.

    For a bit of bonus content, Project Ferocious dev Leo Saalfrank spoke with Shacknews on YouTube about using Unity to the fullest. (For a Ferocious throwback, head to our GDC 2021 showcase for a peek at the WIP.) We’re always here to continue the #MadeWithUnity love. Keep adding the hashtag to your posts to show us what you’ve been up to.

    On May 25, we hosted a Graphics Dev Blitz Day, covering topics like global illumination, shaders, SRP, URP, HDRP, GfxDevice, texturing, and more. The event was held in both the forums and on the Discord server. Throughout the day, we had more than 150 threads with 49 experts answering questions, and we’d like to thank everyone who participated. Keep an eye on Discord and our forums for future Dev Blitz Day announcements, and don’t forget to bookmark the archive of past Dev Blitz Days.

    May really took things up a notch on Twitch with the continuation of our Scope Check Let’s Dev series, releasing parts four, five, and six. We also took time to stream a Creator Spotlight showcasing Thomas Waterzooi’s Please, Touch The Artwork (watch above). To close out the month, we hosted Lana Lux on the channel to hear her Unity Tales and held a Nordic Game Jam Let’s Play session. If you don’t already, follow us on Twitch today and hit the notification bell so you never miss a stream.

    Are you interested in becoming an Asset Store publisher?
    Maybe you’re a publisher looking to enhance your marketing, community building, or customer support skills? Check out our freshly updated Publisher Resources page for tips on how to turbocharge your publishing journey.

    Taking things to social media, here’s a roundup of some of our favorite creator showcases from Twitter in May:

    - TOON Farm Pack (coming soon!) | @steve_sics
    - Fast Food Heaven Pack | @NekoboltTeam
    - Modern Studio Apartment 3 | NextLevel3D

    On YouTube, we shared videos with Renaud Forestié about More Mountains’s Feel and Freya Holmér about Shapes – two extremely popular assets. Looking to be noticed by the Asset Store team? Tag the @AssetStore Twitter account and use the #AssetStore hashtag when posting your latest creations.

    For our final update, here’s a non-exhaustive list of games made with Unity that released in May. Do you see any on the list that have already become favorites, or think we missed a title? Share your thoughts in the forums.

    - World Turtles, Re: cOg Mission (May 1 – early access)
    - KILLBUG, Samurai Punk and Nicholas McDonnell (May 3)
    - Tape to Tape, Excellent Rectangle (May 3 – early access)
    - Toasterball, Les Crafteurs (May 3)
    - Bare Butt Boxing, Tuatara Games (May 4)
    - Darkest Dungeon® II, Red Hook Studios (May 8)
    - Pan’orama, Chicken Launcher (May 9)
    - Blobi Sprint, ChOuette (May 12)
    - Humanity, tha ltd. (May 15)
    - Tin Hearts, Rogue Sun (May 16)
    - Greedventory, Black Tower Basement (May 17)
    - Inkbound, Shiny Shoe (May 22)
    - Planet of Lana, Wishfully (May 23)
    - CONV/RGENCE: A League of Legends Story™, Double Stallion (May 23)
    - Sunshine Shuffle, Strange Scaffold (May 24)
    - Diluvian Winds, Alambik Studio (May 25 – early access)
    - Evil Wizard, Rubber Duck Games (May 25)
    - Friends vs Friends, Brainwash Gang (May 30)
    - Everdream Valley, Mooneaters (May 30)
    - Doomblade, Muro Studios (May 31)

    If you’re creating with Unity and haven’t seen your projects in any of our monthly roundups, submit here for the chance to be featured.

    That’s a wrap for May.
For more community news as it happens, follow us on social media: Twitter, Facebook, LinkedIn, Instagram, YouTube, or Twitch.

    Source: https://unity.com/blog/news/made-with-unity-monthly-may-2023-roundup
    Made with Unity Monthly: May 2023 roundup
    Unite 2023 is coming to Amsterdam, the new LTS is launching, and generative AI continues its buzz. Read on to discover what Unity creators are doing to advance development in the interim, including the latest game releases made with Unity.May saw plenty of exciting games created with Unity share the spotlight.To start, Riot Forge and Double Stallion Games released CONV/RGENCE: A League of Legends Story™, taking us through the streets of Zaun, while tha ltd.’s Humanity put the fate of humankind in our paws. The month also saw Tuatara Games’s hilarious Bare Butt Boxing finally become available in early access, then Plot Twist Games sent us looking for clues in The Last Case of Benedict Fox (below).Rounding out creator milestones for the month were Wishfully’s highly anticipated release of Planet of Lana and Bossa Games’s early look at Lost Skies.We share new game releases and milestone spotlights every Monday on the @UnityGames Twitter and @unitytechnologies Instagram. Be sure to give us a follow and support your fellow creators.Like every month, we were lucky to have another developer take over our Twitter channel to share their best #UnityTips. For May, @samyam_youtube shared a variety of tricks – from pixel art in Unity to simple keyboard shortcuts. Some highlights include:A thread to import your pixel art with the best settingsWhy you should use TryGetComponent instead of GetComponentA guide for the new Unity Input SystemSome quick productivity tipsGreat learning resources and advice for learning UnityOther members of our community also added great tips to the conversation, including @MirzaBeig’s hack for calculating FPS and @kronnect’s useful trick for multiple object positioning.Keep tagging us and using the #UnityTips hashtag to share your expertise with the community.We continue to be stunned by what Unity creators make week to week, and you certainly kept the amazing projects coming in May. 
If we missed something that you meant to tag us in, be sure to use the #MadeWithUnity hashtag next time.On Twitter, @EvaBalikova was busy creating some beautiful embroidered art (see above), and @DevFatigued’s little caterpillar friend was on a jumping journey.Meanwhile on Instagram, @ShimpleShrimp was in full focus underwater, and Mr. Mustard Games’s (@MrMustardGames on Twitter) robot couldn’t resist smashing some boxes. Finally, @kng_ghidra traveled through different dimensions, and we ended the month with chill skateboarding vibes from @stokedslothinteractive.Finally, for a bit of bonus content, Project Ferocious dev Leo Saalfrank spoke with Shacknews on YouTube about using Unity to the fullest. (For a Ferocious throwback, head to our GDC 2021 showcase for a peek at the WIP.)We’re always here to continue the #MadeWithUnity love. Keep adding the hashtag to your posts to show us what you’ve been up to.On May 25, we hosted a Graphics Dev Blitz Day, covering topics like global illumination, shaders, SRP, URP, HDRP, GfxDevice, texturing, and more. The event was held in both the forums and on the Discord server. Throughout the day, we had more than 150 threads with 49 experts answering questions, and we’d like to thank everyone who participated.Keep an eye on Discord and our forums for future Dev Blitz Day announcements, and don’t forget to bookmark the archive of past Dev Blitz Days.May really took things up a notch on Twitch with the continuation of our Scope Check Let’s Dev series, releasing parts four, five, and six. We also took time to stream a Creator Spotlight showcasing Thomas Waterzooi’s Please, Touch The Artwork (watch above).To close out the month, we hosted Lana Lux on the channel to hear her Unity Tales and held a Nordic Game Jam Let’s Play session.If you don’t already, follow us on Twitch today and hit the notification bell so you never miss a stream.Are you interested in becoming an Asset Store publisher? 
Maybe you’re a publisher looking to enhance your marketing, community building, or customer support skills? Check out our freshly updated Publisher Resources page for tips on how to turbocharge your publishing journey.

Taking things to social media, here’s a roundup of some of our favorite creator showcases from Twitter in May:

- TOON Farm Pack (coming soon!) | @steve_sics
- Fast Food Heaven Pack | @NekoboltTeam
- Modern Studio Apartment 3 | NextLevel3D

On YouTube, we shared videos with Renaud Forestié about More Mountains’s Feel and Freya Holmér about Shapes – two extremely popular assets. Looking to be noticed by the Asset Store team? Tag the @AssetStore Twitter account and use the #AssetStore hashtag when posting your latest creations.

For our final update, here’s a non-exhaustive list of games made with Unity that released in May. Do you see any on the list that have already become favorites, or think we missed a title? Share your thoughts in the forums.

- World Turtles, Re: cOg Mission (May 1 – early access)
- KILLBUG, Samurai Punk and Nicholas McDonnell (May 3)
- Tape to Tape, Excellent Rectangle (May 3 – early access)
- Toasterball, Les Crafteurs (May 3)
- Bare Butt Boxing, Tuatara Games (May 4)
- Darkest Dungeon® II, Red Hook Studios (May 8)
- Pan’orama, Chicken Launcher (May 9)
- Blobi Sprint, ChOuette (May 12)
- Humanity, tha ltd. (May 15)
- Tin Hearts, Rogue Sun (May 16)
- Greedventory, Black Tower Basement (May 17)
- Inkbound, Shiny Shoe (May 22)
- Planet of Lana, Wishfully (May 23)
- CONV/RGENCE: A League of Legends Story™, Double Stallion (May 23)
- Sunshine Shuffle, Strange Scaffold (May 24)
- Diluvian Winds, Alambik Studio (May 25 – early access)
- Evil Wizard, Rubber Duck Games (May 25)
- Friends vs Friends, Brainwash Gang (May 30)
- Everdream Valley, Mooneaters (May 30)
- Doomblade, Muro Studios (May 31)

If you’re creating with Unity and haven’t seen your projects in any of our monthly roundups, submit here for the chance to be featured.

That’s a wrap for May. 
For more community news as it happens, follow us on social media: Twitter, Facebook, LinkedIn, Instagram, YouTube, or Twitch.

Source: https://unity.com/blog/news/made-with-unity-monthly-may-2023-roundup
    Made with Unity Monthly: May 2023 roundup