• An Artist Spots Her Work In Bungie's Marathon, Nintendo Isn't Sending Out Switch 2 Review Units Until The Last Minute, And More Of The Week's Top Stories

    kotaku.com
    I don’t go here (Magic: The Gathering, that is), but I’m obsessed with the new Final Fantasy art when I see it. Something I also appreciate is a good bit, and the Magic x Final Fantasy collab handled one of the series’ long-running constants in a truly clever way that even I, someone who doesn’t understand the ruleset of the card game, can appreciate. I’m talking about Cid, a recurring name given to different characters in each mainline Final Fantasy. - Kenneth Shepard
  • Improving job system performance scaling in 2022.2 – part 1: Background and API

    unity.com
    In 2022.2 and 2021.3.14f1, we’ve improved the scheduling cost and performance scaling of the Unity job system. In this two-part article, I’ll offer a brief recap of parallel programming and job systems, discuss job system overhead, and share Unity’s approach to mitigating it.

    In part one, we cover background information on parallel programming and the job system API. If you’re already familiar with parallelism, feel free to skim and skip to part two.

    In the 2017.3 release, a public C# API was added for the internal C++ Unity job system, allowing users to write small functions called “jobs” which are executed asynchronously. The intention behind using jobs instead of plain old functions is to provide an API that makes it easy, safe, and efficient to allow code that would otherwise run on the main thread to instead run on job “worker” threads, ideally in parallel. This helps to reduce the overall amount of wall time the main thread needs to complete a game’s simulation. Using the job system for your CPU work can provide significant performance improvements and allow your game’s performance to scale naturally as the hardware your game runs on improves.

    If you think of computation as a finite resource, a single CPU core can only do so much computational “work” in a given period of time. For example, if a single-threaded game needs its simulation Update() to take no more than 16ms, but it currently takes 24ms, then the CPU has too much work to do – more time is needed. In order to hit a target of 16ms, there are only two options: make the CPU go faster (e.g., raise the minimum specs for your game – normally not a great option), or do less work. Ultimately, you need to eliminate 8ms of computational work. That typically means improving algorithms, spreading subsystem work across multiple frames, removing redundant work that can accumulate during development, etc. If this still doesn’t get you to your performance target, you may need to reduce game simulation complexity by cutting content and gameplay, for example, by reducing the number of enemies allowed to be spawned at once – which is certainly not ideal.

    What if, instead of eliminating work, we give the work to another CPU core to run on? Nowadays, most CPUs are multi-core, which means the available single-threaded computational power can be multiplied by the number of cores the CPU has. If we could magically and safely divide all the work currently in the Update() function between two CPU cores, the 24ms Update() work could be run in two simultaneous 12ms chunks. This would get us well below the target of 16ms. Further, if we could divide the work into four parallel chunks and run them on four cores, then the Update() would take only 6ms!

    This type of work division and running on all available cores is known as performance scaling. If you add more cores, you can ideally run more work in parallel, reducing the wall time of the Update() without code changes.

    Alas, this is fantasy. Nothing is going to divide the Update() function into pieces and run them on separate cores without some help. Even if we switched to a CPU with 128 cores, the 24ms Update() above will still take 24ms, provided both CPUs have the same clock rate. What a waste of potential! How, then, can we write applications to take advantage of all available CPU cores and increase parallelism?

    One approach is multithreading. That is, your program creates threads to run a function which the operating system will schedule to run for you. If your CPU has multiple cores, then multiple threads can run at the same time, each on their own core. If there are more threads than available cores, the operating system is responsible for determining which thread gets to run on a core – and for how long – before it switches to another thread, a process called context switching.
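    To make the raw-threading idea concrete, here is a minimal sketch (not from the original article) using C#’s System.Threading; the SimulateEnemies and SimulatePhysics methods are made-up placeholders for whichever halves of a frame’s work you might split off:

        using System.Threading;

        static class TwoThreadUpdate
        {
            // Manually splitting one frame's work across two OS threads.
            // The operating system decides when, and on which core, each thread runs.
            public static void RunFrame()
            {
                var worker = new Thread(SimulateEnemies); // hypothetical half of the work
                worker.Start();

                SimulatePhysics(); // hypothetical other half, run on the calling thread
                worker.Join();     // wait for the worker before the frame can continue
            }

            static void SimulateEnemies() { /* ... */ }
            static void SimulatePhysics() { /* ... */ }
        }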
If your CPU has multiple cores, then multiple threads can run at the same time, each on their own core. If there are more threads than available cores, the operating system is responsible for determining which thread gets to run on a core – and for how long – before it switches to another thread, a process called context switching.Multithreaded programming comes with a bunch of complications, however. In the magical scenario above, the Update() function was evenly divided into four partial updates. But in reality, you likely wouldn’t be able to do something so simple. Since the threads will run simultaneously, you need to be careful when they read and write to the same data at the same time, in order to keep them from corrupting each other’s calculations.This usually involves using locking synchronization primitives, like a mutex or semaphore, to control access to shared state between threads. These primitives usually limit how much parallelism specific sections of code can have (usually opting for none at all) by “locking” other threads, preventing them from running the section until the lock holder is done and “unlocks” the section for any waiting threads. This reduces how much performance you get by using multiple threads since you aren’t running in parallel all the time, but it does ensure programs remain correct.It also likely doesn’t make sense to run some parts of your update in parallel due to data dependencies. For example, almost all games need to read input from a controller, store that input in an input buffer, and then read the input buffer and react based on the values.It wouldn’t make sense to have code reading the input buffer to decide if a character should jump executing at the same time as the code writing to the input buffer for that frame’s update. Even if you used a mutex to make sure reading and writing to m_InputBuffer was safe, you always want m_InputBuffer to be written to first and then the m_InputBuffer reading code to run second, so you know whether the jump button was pressed for the current frame (and not one in the past). Such data dependencies are common and normal, but will decrease the amount of parallelism possible.There are many approaches to writing a multithreaded program. You can use platform-specific APIs for creating and managing threads directly, or use various APIs that provide an abstraction to help manage some of the complications of multithreaded programming.A job system is one such abstraction. It provides the means to break up parts of your single-threaded code into logical blocks, isolate what data is needed by that code, control who accesses that data simultaneously, and run as many blocks of code in parallel as possible to try and utilize all computational power available on the CPU as needed.Today, we cannot divide arbitrary functions into pieces automatically, so Unity provides a job API that enables users to convert functions into small logical blocks. From there, the job system takes care of making those pieces run in parallel.The job system is made up of a few core components:JobsJob handlesJob schedulerAs mentioned before, a job is just a function and some data, but this encapsulation is useful, as it reduces the scope of which specific data the job will read from or write to.Once a job instance is created, it needs to be scheduled with the job system. This is done with the .Schedule() method added to all job types via C#’s extension mechanism. 
    Since job handles identify scheduled jobs, they can be used to set up job dependencies. Job dependencies guarantee that a scheduled job won’t start executing until its dependencies have completed. As a direct result, they also tell us when different jobs are allowed to run in parallel by creating a directed acyclic job graph.
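    To illustrate the dependency mechanism, the input-buffer example from earlier could be expressed as two jobs chained through a JobHandle; the job types below are hypothetical, while passing a handle into .Schedule() is the actual dependency API:

        using Unity.Collections;
        using Unity.Jobs;

        struct WriteInputJob : IJob
        {
            public NativeArray<int> Buffer;
            public void Execute() { Buffer[0] = 1; } // e.g., record that jump was pressed
        }

        struct ReadInputJob : IJob
        {
            [ReadOnly] public NativeArray<int> Buffer;
            public void Execute() { if (Buffer[0] == 1) { /* make the character jump */ } }
        }

        // The read job takes the write job's handle as a dependency, so it can never start
        // before the write has finished; together they form a tiny two-node job graph.
        //   var buffer      = new NativeArray<int>(1, Allocator.TempJob);
        //   var writeHandle = new WriteInputJob { Buffer = buffer }.Schedule();
        //   var readHandle  = new ReadInputJob  { Buffer = buffer }.Schedule(writeHandle);
        //   readHandle.Complete(); // completing the last handle waits for the whole chain
        //   buffer.Dispose();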
    Finally, as jobs are scheduled, the job scheduler is responsible for keeping track of scheduled jobs (mapping JobHandles to the job instances scheduled) and ensuring jobs start running as quickly as possible. How this is done is important, as the design and usage patterns of the job system can potentially conflict in non-obvious ways, leading to overhead costs that eat into the performance gains of multithreaded programming. As users started adopting the C# job system, we began to see scenarios where job system overhead was higher than we’d like, which led to the improvements to Unity’s internal job system implementation in the 2022.2 Tech Stream.

    Stay tuned for part two, which will explore where overhead in the C# job system comes from and how it has been reduced in Unity 2022.2.

    If you have questions or want to learn more, visit us in the C# Job System forum. You can also connect with me directly through the Unity Discord at username @Antifreeze#2763. Be sure to watch for new technical blogs from other Unity developers as part of the ongoing Tech from the Trenches series.
  • DeepMind’s AlphaEvolve AI: History In The Making!

    www.youtube.com
  • Unlike Elon Musk's X, Meta's Threads is prioritizing links

    If a user publishes a post with a link to a website or article on Elon Musk's X, very often they'll find that the post receives minimal views, retweets, and likes. Musk has previously confirmed that his social media platform, formerly known as Twitter, deprioritizes links via its algorithm.

    X users have complained about this, but Musk wants people to spend the maximum amount of time on his platform. However, it appears one of X's competitors now views this as an opportunity. Meta's Threads will now prioritize links on the platform in a few major ways.

    Share more links in your Threads bio

    In an update this week, Meta announced that Threads will now allow users to place up to 5 links in their bio. For comparison, X only allows users to post a single link in the "website" section of a user's profile page.

    By allowing multiple links, Threads lets users share more of their personal websites and projects with people on the platform. This might also spare users the need for link-in-bio services like Linktree to share multiple links on their Threads profile page.

    Threads link analytics

    Furthermore, Meta wants its users to know just how much traffic Threads sends from links posted on the platform.

    The company announced that Threads will now provide users with link analytics that present data regarding the number of clicks that links receive via Threads. The platform will provide users with this information for links posted both in a user's bio and in their posts.

    Link recommendations in Threads

    While Instagram head Adam Mosseri previously claimed Threads didn't specifically downrank posts with links, Mosseri also said the platform didn't place much value on them either.

    That's about to change, too, according to Engadget. Threads will now start promoting posts that include links at a higher rate than before via its recommendation algorithm.

    Meta has started prioritizing content creators on all of its platforms in recent months. Prioritizing links seems like a big step in that direction as creators look to use social media platforms to promote their work, regardless of where it's posted. We'll soon find out if these changes help Threads win creators over from competitors like Musk's X and Bluesky.

    mashable.com
  • eero 7

    Pros

    Easy to install
    Multi-gig WAN/LAN
    Supports Matter, Thread, and Zigbee
    Decent performance

    Cons

    Lacks 6GHz band
    Does not support 320MHz channels
    No USB ports
    Parental control and network security software cost extra

    eero 7 Specs

    Anti-Malware Tools
    Coverage Area for Hardware as Tested: 6,000 sq ft
    IPv6 Compatible
    MU-MIMO
    Number of Antennas: 3
    Number of Bands: 2
    Number of Nodes: 3
    Number of Wired LAN Ports (Excluding WAN Port): 1 on router, 2 on node
    Parental Controls
    Quality of Service (QoS)
    Security: WPA2, WPA3
    Wi-Fi Speed (Total Rated Throughput): BE5000
    Wired Backhaul
    Wireless Specification: 802.11be

    me.pcmag.com
    The latest addition to Amazon’s eero family of whole-home mesh systems, the eero 7, is the company’s most affordable Wi-Fi 7 offering to date. For $349.99, you get a three-piece system that offers 6,000 square feet of coverage, two 2.5GbE networking ports per node, and support for the latest smart home technologies, including Matter, Thread, and Zigbee. It delivered good (but not great) throughput in testing and is very easy to install, but it lacks 6GHz transmissions, and parental control and network security tools are locked behind a paywall. You’ll get much better performance, 6GHz transmissions, free basic parental control and network security software, and USB connectivity with our Editors’ Choice winner for Wi-Fi 7 mesh systems, the significantly more expensive Asus ZenWiFi BQ16 Pro.

    Design and Specs: Three Nodes Are Enough for Large Homes

    The eero 7 nodes are identical and have the same curvy shape and white finish as the eero 6+ nodes, but at 5.1 by 5.1 by 2.5 inches (HWD), they are slightly larger. (The eero 6+ nodes measure 2.6 by 3.9 by 3.8 inches.) The three-pack reviewed here provides 6,000 square feet of coverage, but if you have a smaller dwelling, you can order a two-pack for $279.99, which gives you 4,000 square feet of coverage, or a single node for $169.99, which gives you 2,000 square feet of coverage.

    A small LED indicator on the front of each node glows white when everything is connected and working properly, flashes white during setup, flashes blue when connecting to the app via Bluetooth, flashes green during a firmware update, and is solid red when the node has gone offline.

    Around back are two 2.5GbE networking ports (one serves as a WAN port for the router node) and a USB-C power port. Wired backhaul is supported. Missing are the USB data ports that you’ll find on the TP-Link Deco BE63.

    The eero 7 has a 1.1 GHz A53 ARM processor, 1GB of RAM, and 4GB of flash memory. It’s a dual-band BE5000 system capable of speeds of up to 688Mbps on the 2.4GHz band and up to 4,324Mbps on the 5GHz band. As with the TP-Link Deco BE25 and the MSI Roamii BE Lite systems, the eero 7 does not offer a 6GHz radio band and therefore does not support 320MHz channels. It does, however, support 240MHz channels as well as other Wi-Fi 7 technologies, including direct-to-client beamforming, Multi-Link Operation (MLO), Orthogonal Frequency-Division Multiple Access (OFDMA) transmissions, and WPA3 encryption. Additionally, this system contains a Zigbee radio and serves as a Thread border router and a Matter controller, making it ideal for controlling home automation devices. It also supports Alexa voice commands.
    You manage the eero 7 with the same user-friendly mobile app as the eero Max 7 and eero Outdoor 7. The Home screen displays the name of the network and contains an Internet tab and tabs for each node. Tap the Internet tab to run an internet speed test and tap any node to see the node’s IP address, which clients are connected to it, which band each client is using, and if it is a wired or wireless connection. Below the node tabs are tabs for each client device. When you tap a client tab, you come to a screen where you can pause internet access, completely block it, configure IPv4 reservation and Port Forwarding rules, enable Client Steering, and enable MLO connections. Here you can also create user profiles, but if you want to assign parental controls to any profile, you’ll have to subscribe to an eero Plus plan, which unlocks age-based content filters, malware protection, VPN services, password management, and more. New users currently get a free two-month trial, but once it expires, it’ll cost you $9.99 per month or $99.99 per year.

    At the very bottom of the screen are Activity, Devices, Home, and Settings buttons. The Home button brings you back to the Home screen, and the Devices button opens a screen where you can view information about connected and recently connected clients. The Activity button opens a screen with upload and download speeds and uploaded and downloaded data statistics. Finally, tap the Settings button to open a screen where you can manage your account, view and share login information, enable guest networking, and configure network settings and network notifications.

    Setup and Performance: Decent Speed, User-Friendly App

    Installing the eero 7 is a breeze. You’ll have to download the mobile app and create an account to get started. I started by tapping Setup on the Welcome screen and followed the instructions to power down my modem. I connected an eero node to the modem, powered up both devices, and allowed Bluetooth communications. Once the eero node was found, I gave it a location, a name, and a password. The network was up and running within seconds. I tapped Next and followed the instructions to add another node. I placed the satellite nodes in their respective locations, plugged them in, gave them names, and tapped Finish Setup. After a firmware update, the installation was complete.

    The eero 7 delivered fairly good throughput performance in testing, but it’s certainly not the fastest mesh system out there. The router node’s score of 1,101Mbps on the close proximity test was faster than the MSI Roamii BE Lite router (937Mbps) but significantly slower than the TP-Link Deco BE5000 router (1,959Mbps). The TP-Link Deco BE63, which employs the 6GHz band with 320MHz channels, led the pack with a score of 2,288Mbps. At a distance of 30 feet, the eero 7 router delivered 586Mbps, once again besting the Roamii BE Lite router (524Mbps) but not the Deco BE5000 (628Mbps) or Deco 63 (780Mbps) routers.

    The eero 7 satellite node managed 745Mbps on the close proximity test and 513Mbps on the 30-foot test. In comparison, the Roamii BE Lite node scored 561Mbps and 441Mbps, respectively; the Deco BE5000 node scored 982Mbps and 630Mbps, respectively; and the Deco 63 node scored 1,688Mbps and 950Mbps, respectively.
    The circles on the heat map represent the router and node locations, and the colors represent signal strength, with dark green representing the strongest signal, lighter yellow a weaker one, and gray representing a very weak or no measurable signal. As illustrated on the map, the eero 7 had no trouble broadcasting a strong Wi-Fi signal to all corners of our test home.
  • XXX Chromosome Diagnosis: What Happens Next?


    Receiving a diagnosis related to genetics can be a life-changing moment, filled with uncertainty and questions. One such diagnosis that often comes unexpectedly is the presence of an XXX chromosome, also known as Triple X syndrome or Trisomy X. This genetic condition, exclusively affecting females, involves the presence of an extra X chromosome, resulting in a total of 47 chromosomes rather than the usual 46.
    For most people, the concept of having an extra chromosome might immediately bring concerns about health, development, or quality of life. However, the journey following an XXX chromosome diagnosis is often more nuanced and less severe than initially feared. In this article, we’ll explore what it means to be diagnosed with Triple X syndrome, what to expect after the diagnosis, and how individuals and families can navigate this genetic variation with confidence and clarity.

    Understanding the XXX Chromosome
    Every person is born with 23 pairs of chromosomes—22 pairs of autosomes and one pair of sex chromosomes. Typically, females have two X chromosomes (XX), and males have one X and one Y (XY). In Triple X syndrome, a female has three X chromosomes (XXX) instead of two. This condition is caused by a random error during the formation of the reproductive cells, known as nondisjunction.
    It’s estimated that 1 in 1,000 female births may have the XXX chromosome, but many cases go undiagnosed due to the absence of obvious symptoms.

    How Is XXX Chromosome Diagnosed?
    Most cases of Triple X syndrome are discovered in one of the following ways:

    Prenatal Testing: Through amniocentesis or chorionic villus sampling (CVS), genetic tests may reveal the presence of an extra X chromosome before birth.
    Developmental Delays: Some girls with XXX chromosome may show mild delays in motor skills, speech, or learning, prompting genetic testing.
    Infertility Evaluations or Other Medical Concerns: In adolescence or adulthood, some individuals undergo genetic testing due to irregular menstrual cycles, fertility issues, or other unexplained symptoms.

    The diagnosis is confirmed using karyotyping, a lab technique that maps chromosomes to identify abnormalities.

    What Happens After the Diagnosis?
    1. Meeting with a Genetic Counselor
    After a confirmed diagnosis, the first step is typically a consultation with a genetic counselor. These professionals explain the nature of the condition, what caused it, and the potential implications for the individual and the family. They also provide emotional support and answer any pressing concerns.
    2. Monitoring Developmental Milestones
    In childhood, girls with the XXX chromosome may experience:

    Slight delays in speech or language development
    Learning difficulties (especially in reading or math)
    Coordination issues
    Taller-than-average height

    It’s important to remember that intelligence in most girls with XXX chromosome falls within the normal range, although some may require additional educational support.
    Early intervention programs, speech therapy, and occupational therapy can make a significant difference in helping children meet their developmental milestones.
    3. Regular Health Checkups
    While most females with the XXX chromosome are healthy and live normal lives, doctors may recommend monitoring certain aspects of physical and emotional health, such as:

    Muscle tone and motor development
    Puberty progression
    Menstrual cycle regularity
    Fertility status later in life
    Emotional or behavioral challenges like anxiety or low self-esteem

    With proper support, the majority of females with Triple X syndrome develop typically and may not require extensive medical intervention.
    4. Education and Support Services
    Some girls may benefit from Individualized Education Programs (IEPs) or 504 Plans in school, ensuring that they receive personalized learning accommodations. Parents are encouraged to advocate for their child’s needs and work closely with educators.
    5. Emotional Well-being and Counseling
    Because of the potential for social or emotional difficulties, some children or teens may benefit from psychological counseling. Addressing issues like self-esteem, social skills, or anxiety can significantly improve their quality of life.
    For families, support groups and online communities offer a valuable space to share experiences, learn from others, and build a support network.

    Long-Term Outlook
    The prognosis for females with the XXX chromosome is generally excellent. Most lead full, independent lives and experience normal fertility. In fact, many go through life never knowing they have Triple X syndrome unless diagnosed through genetic testing.
    There is no cure for the XXX chromosome condition—nor is one needed in most cases. The focus is on supportive care to manage any developmental, educational, or emotional challenges that may arise.

    Social Implications and Misconceptions
    Due to limited public awareness, some misconceptions surround the XXX chromosome. For instance, there’s no evidence to support the idea that Triple X syndrome leads to aggression or intellectual disability. These outdated notions stem from flawed studies conducted in the 1960s and have since been disproven.
    Education, compassion, and scientific understanding are key to erasing stigma and supporting individuals with this condition.

    Conclusion
    A diagnosis of the XXX chromosome or Triple X syndrome can initially be unsettling, but the reality is that most females with this condition live healthy, fulfilling lives. With early diagnosis, appropriate developmental support, and ongoing health monitoring, challenges can be addressed effectively.
    For families navigating this journey, the focus should be on understanding, advocacy, and ensuring that each individual with Triple X has access to the tools they need to thrive.

    FAQs:
    Q1. What causes the XXX chromosome?
    Triple X syndrome occurs due to a random error during cell division, leading to an extra X chromosome in each cell.
    Q2. Can the XXX chromosome be inherited?
    No, it is not typically inherited. It is usually a spontaneous genetic change.
    Q3. Do all females with Triple X show symptoms?
    No. Many females with Triple X syndrome have no noticeable symptoms and may never be diagnosed.
    Q4. Can females with the XXX chromosome have children?
    Yes, most females with Triple X have normal fertility and can conceive without complications.
    Q5. Is treatment necessary for Triple X syndrome?
    There is no specific treatment, but supportive therapies (like speech or educational help) may be beneficial if challenges arise.
    techworldtimes.com
  • This rugged little Chromebook is just $54.99 + free shipping

    Lenovo 11.6″ 100e Chromebook 2nd Gen (2019) MediaTek MT8173C 4GB RAM 16GB eMMC (Refurbished)
    TL;DR: You can grab a like-new Lenovo 11.6″ Chromebook for just $54.99—tough, travel-ready, and backed by a Grade A refurb rating—with free shipping.
    If you’ve ever worried about tossing your laptop into a backpack, spilling coffee on your keyboard, or watching your kid treat it like a frisbee, meet your low-stress companion: the refurb Lenovo 11.6″ 100e Chromebook 2nd Gen, now just $54.99 (regularly $328.99) with free shipping.
    This Grade A refurbished Chromebook looks and feels nearly new, but at a fraction of the price. And with its rubber bumpers, reinforced ports, spill-resistant keyboard, and drop resistance up to 29.5 inches, it’s basically the stunt double of laptops—ready to take a hit and keep going.
    Under the hood, it runs on a MediaTek quad-core processor with Chrome OS, meaning you get decent performance for everyday tasks like email, Google Docs, video calls, and streaming—all with up to 10 hours of battery life. The anti-glare HD display and 720p camera make it ideal for travel, remote work, and Zoom catch-ups.

    Whether you need a backup device, travel laptop, or something the kids can use without stress, this Chromebook punches way above its price point. And at just $54.99, you really can’t beat the value.

    Lenovo 11.6″ 100e Chromebook 2nd Gen (2019) MediaTek MT8173C 4GB RAM 16GB eMMC (Refurbished)
    See Deal
    StackSocial prices subject to change.
    www.pcworld.com
  • Alleged AMD RX 7500 prototype surfaces with 1,536 shaders and 6GB VRAM

    X user shares an alleged prototype of AMD's cancelled Radeon RX 7500 graphics card that appears to sport 1,536 shaders, 64 ROPs, and 6GB of onboard memory.
    www.tomshardware.com
  • Tails Linux introduces reforms in security audit postmortem to make you safer


    Paul Hill · Neowin · @ziks_99 · May 17, 2025 10:16 EDT

    Alongside the release of Tails 6.11 earlier this year, the Tails Project revealed that Radically Open Security was auditing the Tails operating system to better protect users. The audit has now concluded and no remote code vulnerabilities were found.
    The only issues found required an already compromised low-privileged amnesia user, which is the default account in Tails. Fortunately, the Tails developers moved quickly: they requested details of the vulnerabilities before the report was published and have already released fixes for the discovered issues.
    Here’s an overview of what was fixed:

    OTF-001 (High): Local privilege escalation in Tails Upgrader. Issue #20701, fixed in 6.11.
    OTF-002 (High): Arbitrary code execution in Python scripts. Issue #20702, fixed in 6.11; issue #20744, fixed in 6.12.
    OTF-003 (Moderate): Argument injection in privileged GNOME scripts. Issue #20709, fixed in 6.11; issue #20710, fixed in 6.11.
    OTF-004 (Low): Untrusted search path in Tor Browser launcher. Issue #20733, fixed in 6.12.
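
    For readers unfamiliar with the vulnerability classes named above, the sketch below shows the general shape of an untrusted search path bug in a launcher script and one way to avoid it. This is a purely illustrative, hypothetical example; it is not code from Tails or from the audit report, and the helper name and paths are invented.

```python
# Hypothetical illustration of an "untrusted search path" flaw.
# Not code from Tails or the audit report; helper names and paths are invented.

import os
import subprocess

def launch_unsafe() -> None:
    # BAD: prepending a user-writable directory (here, the current working
    # directory) to PATH means a low-privileged user who plants an executable
    # named "browser-helper" there gets it run with the launcher's privileges.
    env = dict(os.environ)
    env["PATH"] = os.getcwd() + os.pathsep + env.get("PATH", "")
    subprocess.run(["browser-helper"], env=env, check=True)

def launch_safer() -> None:
    # BETTER: invoke the helper by an absolute path in a root-owned location
    # and pass a minimal, fixed PATH so attacker-controlled directories are
    # never searched. (The path below is illustrative only.)
    env = {"PATH": "/usr/sbin:/usr/bin:/sbin:/bin"}
    subprocess.run(["/usr/local/lib/example-launcher/browser-helper"],
                   env=env, check=True)
```

    The underlying design point is simply that a privileged launcher should never let a lower-privileged user influence which executable it ends up running.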

    Following the bug fixes, the Tails team also conducted a postmortem of the audit to identify the cultural and technical factors that allowed these bugs into the operating system in the first place.
    The major cultural change Tails has adopted concerns how it shares vulnerabilities with the public. The team acknowledged that it has been too secretive about vulnerabilities so far; going forward, it has adopted a security issue response policy based on that of the Tor Project’s Network Team.
    It also found that refactoring large amounts of code can introduce security bugs, so from now on it will be more intentional and only undertake large refactors when they are worth the effort and risk.
    For anyone running Tails, these are extremely positive developments. Tails is used by all sorts of people for sensitive work, so knowing that it’s being proactive on security is reassuring.
    Source: Tails

    www.neowin.net
  • Rust Creator Graydon Hoare Thanks Its Many Stakeholders - and Mozilla - on Rust's 10th Anniversary

    Thursday was the 10-year anniversary of Rust's first stable release. "To say I'm surprised by its trajectory would be a vast understatement," writes Rust's original creator Graydon Hoare. "I can only thank, congratulate, and celebrate everyone involved... In my view, Rust is a story about a large community of stakeholders coming together to design, build, maintain, and expand shared technical infrastructure."

    It's a story with many actors:
    - The population of developers the language serves who express their needs and constraints through discussion, debate, testing, and bug reports arising from their experience writing libraries and applications.
    - The language designers and implementers who work to satisfy those needs and constraints while wrestling with the unexpected consequences of each decision.
    - The authors, educators, speakers, translators, illustrators, and others who work to expand the set of people able to use the infrastructure and work on the infrastructure.
    - The institutions investing in the project who provide the long-term funding and support necessary to sustain all this work over decades.

    All these actors have a common interest in infrastructure.
    Rather than just "systems programming", Hoare sees Rust as a tool for building infrastructure itself, "the robust and reliable necessities that enable us to get our work done" — a wide range that includes everything from embedded and IoT systems to multi-core systems. So the story of "Rust's initial implementation, its sustained investment, and its remarkable resonance and uptake all happened because the world needs robust and reliable infrastructure, and the infrastructure we had was not up to the task."

    Put simply: it failed too often, in spectacular and expensive ways. Crashes and downtime in the best cases, and security vulnerabilities in the worst. Efficient "infrastructure-building" languages existed but they were very hard to use, and nearly impossible to use safely, especially when writing concurrent code. This produced an infrastructure deficit many people felt, if not everyone could name, and it was growing worse by the year as we placed ever-greater demands on computers to work in ever more challenging environments...

    We were stuck with the tools we had because building better tools like Rust was going to require an extraordinary investment of time, effort, and money. The bootstrap Rust compiler I initially wrote was just a few tens of thousands of lines of code; that was nearing the limits of what an unfunded solo hobby project can typically accomplish. Mozilla's decision to invest in Rust in 2009 immediately quadrupled the size of the team — it created a team in the first place — and then doubled it again, and again in subsequent years. Mozilla sustained this very unusual, very improbable investment in Rust from 2009-2020, as well as funding an entire browser engine written in Rust — Servo — from 2012 onwards, which served as a crucial testbed for Rust language features.

    Rust and Servo had multiple contributors at Samsung, Hoare acknowledges, and Amazon, Facebook, Google, Microsoft, Huawei, and others "hired key developers and contributed hardware and management resources to its ongoing development." Rust itself "sits atop LLVM" (developed by researchers at UIUC and later funded by Apple, Qualcomm, Google, ARM, Huawei, and many other organizations), while Rust's safe memory model "derives directly from decades of research in academia, as well as academic-industrial projects like Cyclone, built by AT&T Bell Labs and Cornell."

    And there were contributions from "interns, researchers, and professors at top academic research programming-language departments, including CMU, NEU, IU, MPI-SWS, and many others."

    JetBrains and the Rust-Analyzer OpenCollective essentially paid for two additional interactive-incremental reimplementations of the Rust frontend to provide language services to IDEs — critical tools for productive, day-to-day programming. Hundreds of companies and other institutions contributed time and money to evaluate Rust for production, write Rust programs, test them, file bugs related to them, and pay their staff to fix or improve any shortcomings they found. Last but very much not least: Rust has had thousands and thousands of volunteers donating years of their labor to the project. While it might seem tempting to think this is all "free", it's being paid for! Just less visibly than if it were part of a corporate budget.

    All this investment, despite the long time horizon, paid off. We're all better for it.

    He looks ahead with hope for a future with new contributors, "steady and diversified streams of support," and continued reliability and compatibility (including "investment in ever-greater reliability technology, including the many emerging formal methods projects built on Rust.") And he closes by saying Rust's "sustained, controlled, and frankly astonishing throughput of work" has "set a new standard for what good tools, good processes, and reliable infrastructure software should be like.

    "Everyone involved should be proud of what they've built."

    Read more of this story at Slashdot.
    developers.slashdot.org