• I had my baby at 48 through IVF. Being an older mom has so many benefits.

    Rene Byrd did IVF to have her baby.

    Courtesy of Rene Byrd

    2025-06-14T21:23:01Z


    Rene Byrd is a 49-year-old singer-songwriter in London who had her first baby at 48.
    She had held on to hope for a baby throughout her 40s, undergoing IVF for over two years.
    Being an older mom has had several benefits, like financial security and contentment.

    This as-told-to essay is based on a conversation with Rene Byrd. It has been edited for length and clarity.

    When I turned 40, I went on a seven-day retreat full of meditation and massage to fall in love with myself. I'm a strong believer that to find love, you first have to love yourself.

    I had wanted to settle down with someone and build a family, but it just hadn't happened. Three years prior, I had frozen my eggs because I knew that I wanted a family someday.

    On the retreat, I felt deep in my spirit that I would one day find my person and hold my child in my hands. I wouldn't give up hope.

    I met someone at a bar

    Returning home, I continued dating, but it wasn't until a chance meeting at a bar that I finally found the man who would become my husband. I hadn't quite turned 41, and he was 34.

    I remember not wanting to scare him off by talking too much about my desire for kids, but we did have discussions about the future. When love started to bloom between the two of us, we started looking at what our options were for having a child together.

    After trying holistic methods to no avail, we decided to go down the IVF route. I'd heard horror stories about IVF — that it was never straightforward — but as I already had my eggs frozen, it was the best option for us at the time.

    I felt guilty for waiting so long

    Two-and-a-half long years later, I was given the news from the IVF clinic — I was pregnant. I fell apart, phoning my husband to tell him we would be having a baby.

    Rene Byrd got pregnant at age 48 thanks to IVF.

    Courtesy of Rene Byrd

    Throughout my pregnancy, I remember being scared of what this new life as a mother would look like. I had little panic attacks considering how different life would be, as compared to the decades of life without a child. And then I felt guilty, telling myself I had waited so long for this. There was a lot of grappling with these thoughts until I realized my child would just be an extension of me.

    Once our little boy, Crue, was born in November 2024, I felt ready for his arrival in theory. Having spent years hearing from friends with children, I had an idea of what to expect. Even still, those early days were a lot to deal with. All these things were being thrown at me about what I should and shouldn't do with a baby.

    Being a mom in my late 40s has so many beautiful benefits

    I joined online mother and baby communities and in-person baby groups, finding my tribe of mothers like me, ones that were "older."

    There is a stillness within me that grounds me as I take care of Crue. I have this playbook of mothering, developed from years of research and observation, that has given me assurance that even when things didn't seem to be going to plan — like breastfeeding or sleeping — I was OK, and so was he.

    Having built up financial security, I didn't worry about how I was going to provide for a baby. Established in a career, I could plan for all baby-related expenses, including IVF.

    And since I had gotten so much out of my system in my younger years — corporate working, parties, nice restaurants — I felt content to settle in at home with my baby and husband. I never feel like I'm missing out.

    The only concern I've heard quietly whispered in different circles is that of my health. I know that as I get older, little issues with my body could pop up — issues that I might not have had as a younger mother. This has forced me to look after my body more than I ever have so that I can fully enjoy time with Crue as he gets older.

    Becoming a mother had always been a dream of mine. I trusted the process, holding on to hope, and although delayed, my dream finally came true.
    WWW.BUSINESSINSIDER.COM
  • The “online monkey torture video” arrests just keep coming


    Authorities continue the slow crackdown.

    Nate Anderson



    Jun 14, 2025 7:00 am


    Credit:

    Getty Images



    Today's monkey torture videos are the products of a digitally connected world. People who enjoy watching baby animals probed, snipped, and mutilated in horrible ways often have difficulty finding local collaborators, but online communities like "million tears"—now thankfully shuttered—can help them forge connections.
    Once they do meet other like-minded souls, communication takes place through chat apps like Telegram and Signal, often using encryption.
    Money is pooled through various phone apps, then sent to videographers in countries where wages are low and monkeys are plentiful. (The cases I have seen usually involve Indonesia; read my feature from last year to learn more about how these groups work.) There, monkeys are tortured by a local subcontractor—sometimes a child—working to Western specs. Smartphone video of the torture is sent back to the commissioning sadists, who share it with more viewers using the same online communities in which they met.
    The unfortunate pattern was again on display this week in an indictment the US government unsealed against several more Americans said to have commissioned these videos. The accused used online handles like "Bitchy" and "DemonSwordSoulCrusher," and they hail from all over: Tennessee, North Carolina, Ohio, Pennsylvania, and Massachusetts.
    They relied on an Indonesian videographer to create the content, which was surprisingly affordable—it cost a mere $40 to commission video of a "burning hot screwdriver" being shoved into a baby monkey's orifice. After the money was transferred, the requested video was shot and shared through a "phone-based messaging program," but the Americans were deeply disappointed in its quality. Instead of full-on impalement, the videographer had heated a screwdriver on a burner and merely touched it against the monkey a few times.
    "So lame," one of the Americans allegedly complained to another. "Live and learn," was the response.

    So the group tried again. "Million tears" had been booted by its host, but the group reconstituted on another platform and renamed itself "the trail of trillion tears." They reached out to another Indonesian videographer and asked for a more graphic version of the same video. But this version, more sadistic than the last, still didn't satisfy. As one of the Americans allegedly said to another, "honey that's not what you asked for. Thats the village idiot version. But I'm talking with someone about getting a good vo [videographer] to do it."
    Arrests continue
    In 2021, someone leaked communications from the "million tears" group to animal rights organizations like Lady Freethinker and Action for Primates, which handed them over to authorities. Still, it took several years to arrest and prosecute the torture group's leaders.
    In 2024, one of these leaders—Ronald Bedra of Ohio—pled guilty to commissioning the videos and to mailing "a thumb drive containing 64 videos of monkey torture to a co-conspirator in Wisconsin." His mother, in a sentencing letter to the judge, said that her son must "have been undergoing some mental crisis when he decided to create the website." As a boy, he had loved all of the family pets, she said, even providing a funeral for a fish.
    Bedra was sentenced late last year to 54 months in prison. According to letters from family members, he has also lost his job, his wife, and his kids.
    In April 2025, two more alleged co-conspirators were indicted and subsequently arrested; their cases were unsealed only this week. Two other co-conspirators from this group still appear to be uncharged.
    In May 2025, 11 other Americans were indicted for their participation in monkey torture groups, though they appear to come from a different network. This group allegedly "paid a minor in Indonesia to commit the requested acts on camera."
    As for the Indonesian side of this equation, arrests have been happening there, too. Following complaints from animal rights groups, police in Indonesia have arrested multiple videographers over the last two years.

    Nate Anderson
    Deputy Editor


    Nate is the deputy editor at Ars Technica. His most recent book is In Emergency, Break Glass: What Nietzsche Can Teach Us About Joyful Living in a Tech-Saturated World, which is much funnier than it sounds.

  • What’s next for computer vision: An AI developer weighs in

    In this Q&A, get a glimpse into the future of artificial intelligence and computer vision through the lens of longtime Unity user Gerard Espona, whose robot digital twin project was featured in the Made with Unity: AI series. Working as simulation lead at Luxonis, whose core technology makes it possible to embed human-level perception into robotics, Espona uses his years of experience in the industry to weigh in on the current state and anticipated progression of computer vision.

    During recent years, computer vision and AI have become the fastest-growing fields in both market size and industry adoption rate. Spatial CV and edge AI have been used to improve and automate repetitive tasks as well as complex processes.

    This new reality is thanks to the democratization of CV/AI. Increasingly affordable hardware, including depth perception capability, along with improvements in machine learning, has enabled the deployment of real solutions on edge CV/AI systems.

    Spatial CV using edge AI enables depth-based applications to be deployed without the need for a data center service, and also allows the user to preserve privacy by processing images on the device itself.

    Along with more accessible hardware, software and machine learning workflows are undergoing important improvements. Although they are still very specialized and full of technical challenges, they have become much more accessible, offering tools that allow users to train their own models.

    Within the standard ML pipeline/workflow, large-scale edge computing and deployment can still pose issues. One of the biggest general challenges is reducing the costs and timelines currently required to create and/or improve machine learning models in real-world applications. In other words, the challenge is how to manage all these devices to enable a smooth pipeline for continuous improvement.

    Also, the inherent limits on compute processing require extra effort on the final model deployed on the device.
That said, embedded technology evolves really fast, and each iteration is a big leap in processing capabilities.

Spatial CV/AI is a field that still requires a lot of specialization, and its workflows are often complicated and tedious due to numerous technical challenges, so a lot of time is devoted to smoothing out the workflow instead of focusing on value-added tasks.

Creating datasets, annotating the images, running preprocessing and augmentation, training, deploying, and closing the feedback loop for continuous improvement is a complex process. Each step of the workflow is technically difficult and usually involves time and financial cost, and more so for systems working in remote areas with limited connectivity.

At Luxonis, we help our customers build and deploy solutions to solve and automate complex tasks at scale, so we're facing all these issues directly. Our mission, "Robotic vision made simple," means providing not only great and affordable depth-capable hardware, but also a solid and smooth ML pipeline with synthetic datasets and simulation.

Another important challenge is the work that needs to be done on the interpretability of models and the creation of datasets from an ethical, privacy, and bias point of view.

Last but not least, global chip supply issues are making it difficult to get the hardware into everybody's hands.

Data-centric AI is potentially useful when a working model is underperforming. Investing a large amount of time in optimizing the model often leads to almost zero real improvement. Instead, with data-centric AI the investment goes into analyzing, cleaning, and improving the dataset.

Usually when a model is underperforming, the issue is within the dataset itself: there is not enough data for the model to perform well.
This could be the result of two possible causes: 1) the model needs a much larger amount of data, which is difficult to collect in the real world, or 2) the model doesn't have enough examples of rare cases, which take a long time to occur in the real world. In both situations, synthetic datasets can help.

Thanks to Unity's computer vision tools, it is very easy to create photorealistic scenes and randomize elements like materials, light conditions, and object placement. The tools come with common labels like 2D bounding boxes, 3D bounding boxes, semantic and instance segmentation, and even human body key points. Additionally, these can be easily extended with custom randomizers, labelers, and annotations.

Almost any task you want to automate or improve using edge CV/AI very likely involves detecting people, for obvious safety and security reasons. It's critical to guarantee user safety around autonomous systems or robots while they're working, which requires models to be trained on data about humans.

That means we need to capture a large number of images, including information like poses and physical appearance, that are representative of the entire human population. This task raises some concerns about privacy, ethics, and bias when starting to capture real human data to train the model.

Fortunately, we can use synthetic datasets to mitigate some of these concerns using human 3D models and poses. A very good example is the work done by the Unity team with PeopleSansPeople. PeopleSansPeople is a human-centric synthetic dataset creator that uses 3D models and standard animations to randomize human body poses. We can also use a Unity project template, to which we add our own 3D models and poses to create our own human synthetic dataset.

At Luxonis, we're using this project as the basis for creating our own human synthetic dataset and training models.
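Unity's Perception tooling implements scene randomization in C# inside the engine; as a language-neutral illustration of the idea described above, here is a minimal domain-randomization sketch in Python. All parameter names and ranges are hypothetical, and the "rendering" step is stubbed out — the point is that because the generator controls every scene parameter, ground-truth labels come for free with each sample.

```python
import random

def randomize_scene():
    """Sample one randomized scene configuration (illustrative values)."""
    return {
        "light_intensity": random.uniform(0.2, 2.0),
        "light_color": [random.uniform(0.8, 1.0) for _ in range(3)],
        "object_position": [random.uniform(-1, 1), 0.0, random.uniform(-1, 1)],
        "material": random.choice(["metal", "plastic", "wood"]),
    }

def generate_dataset(n):
    """Produce n synthetic samples with ground-truth labels attached."""
    samples = []
    for i in range(n):
        cfg = randomize_scene()
        # In a real pipeline a renderer produces the image here, and the
        # engine emits labels (2D/3D boxes, segmentation masks) exactly,
        # since all object poses are known by construction.
        label = {"bbox_2d": [0.4, 0.4, 0.2, 0.3], "class": cfg["material"]}
        samples.append({"id": i, "scene": cfg, "label": label})
    return samples

dataset = generate_dataset(100)
```

Because rare cases are just another region of the randomization space, a generator like this can oversample them deliberately — which is exactly the data-centric fix described above that is hard to achieve with real-world collection.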
In general, we use Unity’s computer vision tools to create large and complex datasets with a high level of customization on labelers, annotations and randomizations. This allows our ML team to iterate faster with our customers, without needing to wait for real-world data collection and manual annotation.Since the introduction of transformer architecture, CV tasks are more accessible. Generative models like DALL-E 2 could also be used to create synthetic datasets, and NeRF as a neural approach to generate novel point of views of known objects and scenes. It’s clear all these innovations are catching the attention of audiences.On the other hand, having access to better annotation tools and model zoos and libraries with pre-trained, ready-to-use models are helping drive wide adoption.One key element contributing to the uptick in computer vision use is the fast evolution of vision processing unitsthat currently allow users to perform model inferences on deviceat 4 TOPS of processing power. The new generation of VPUs promises a big leap in capabilities, allowing even more complex CV/AI applications to be deployed on edge.Any application related to agriculture and farming always captures my attention. For example, there is now a cow tracking and monitoring CV/AI application using drones.Our thanks to Gerard for sharing his perspective with us – keep up with his latest thoughts on LinkedIn and Twitter. And, learn more about how Unity can help your team generate synthetic data to improve computer vision model training with Unity Computer Vision.
    #whats #next #computer #vision #developer
    What’s next for computer vision: An AI developer weighs in
In this Q&A, get a glimpse into the future of artificial intelligence (AI) and computer vision through the lens of longtime Unity user Gerard Espona, whose robot digital twin project was featured in the Made with Unity: AI series. Working as simulation lead at Luxonis, whose core technology makes it possible to embed human-level perception into robotics, Espona draws on his years of industry experience to weigh in on the current state and anticipated progression of computer vision.

In recent years, computer vision (CV) and AI have become among the fastest-growing fields in both market size and industry adoption. Spatial CV and edge AI are being used to improve and automate repetitive tasks as well as complex processes.

This new reality is thanks to the democratization of CV/AI. Increasingly affordable hardware, including depth-perception capability, along with improvements in machine learning (ML), has enabled the deployment of real solutions on edge CV/AI systems. Spatial CV using edge AI enables depth-based applications to be deployed without the need for a data center service, and also lets users preserve privacy by processing images on the device itself.

Along with more accessible hardware, software and machine learning workflows are undergoing important improvements. Although they are still very specialized and full of technical challenges, they have become much more accessible, offering tools that allow users to train their own models.

Within the standard ML pipeline and workflow, large-scale edge computing and deployment can still pose issues. One of the biggest general challenges is reducing the cost and time currently required to create and improve machine learning models for real-world applications.
In other words, the challenge is how to manage all these devices to enable a smooth pipeline for continuous improvement. The inherent limits on compute also demand extra effort on the final model deployed to the device (that is, apps need to be lightweight, performant, etc.). That said, embedded technology evolves very fast, and each iteration is a big leap in processing capability.

Spatial CV/AI is a field that still requires a lot of specialization. Workflows are often complicated and tedious due to numerous technical challenges, so a lot of time is devoted to smoothing out the workflow instead of focusing on value-added tasks. Creating datasets (collecting and filtering images and videos), annotating the images, preprocessing and augmenting them, training, deploying and closing the feedback loop for continuous improvement is a complex process. Each step of the workflow is technically difficult and usually carries time and financial costs, even more so for systems working in remote areas with limited connectivity.

At Luxonis, we help our customers build and deploy solutions that solve and automate complex tasks at scale, so we face all of these issues directly. In keeping with our mission, "Robotic vision made simple," we provide not only great, affordable depth-capable hardware, but also a solid and smooth ML pipeline with synthetic datasets and simulation.

Another important challenge is the work that needs to be done on model interpretability and on creating datasets responsibly from an ethical, privacy and bias point of view. Last but not least, global chip supply issues are making it difficult to get hardware into everybody's hands.

Data-centric AI is particularly useful when a working model is underperforming. Investing large amounts of time in optimizing the model often yields almost zero real improvement.
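A data-centric workflow typically starts by inspecting the dataset rather than the model. A minimal sketch of that first diagnostic step, assuming a flat list of annotation labels (the class names here are hypothetical, not from Luxonis data):

```python
from collections import Counter

def underrepresented_classes(labels, min_fraction=0.05):
    """Return classes whose share of the dataset falls below min_fraction.

    A class this rare is a candidate for targeted synthetic data
    generation rather than further model tuning.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    return {cls: n / total for cls, n in counts.items()
            if n / total < min_fraction}

# Hypothetical annotation dump: "forklift" is the rare case here.
labels = ["person"] * 70 + ["pallet"] * 28 + ["forklift"] * 2
print(underrepresented_classes(labels))  # {'forklift': 0.02}
```

The threshold is a tunable judgment call; the point is that the gap shows up in the data long before it shows up in model metrics.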
Instead, with data-centric AI, the investment goes into analyzing, cleaning and improving the dataset. Usually when a model is underperforming, the issue lies in the dataset itself: there is not enough data for the model to perform well. This usually comes down to one of two reasons: 1) the model needs a much larger amount of data, which is difficult to collect in the real world, or 2) the model doesn't have enough examples of rare cases, which by definition take a long time to occur in the real world. In both situations, synthetic datasets can help.

Thanks to Unity's computer vision tools, it is very easy to create photorealistic scenes and randomize elements like materials, lighting conditions and object placement. The tools come with common labels like 2D bounding boxes, 3D bounding boxes, semantic and instance segmentation, and even human body key points. Additionally, these can be easily extended with custom randomizers, labelers and annotations.

Almost any task you want to automate or improve using edge CV/AI very likely involves detecting people, for obvious safety and security reasons. It's critical to guarantee user safety around autonomous systems or robots while they're working, which requires models trained on data about humans. That means we need to capture a large number of images, including information like poses and physical appearance, that are representative of the entire human population. Capturing real human data to train a model raises concerns about privacy, ethics and bias.

Fortunately, we can mitigate some of these concerns with synthetic datasets built from human 3D models and poses. A very good example is the work done by the Unity team on PeopleSansPeople, a human-centric synthetic dataset creator that uses 3D models and standard animations to randomize human body poses.
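Unity's actual randomizers are written in C#; the plain-Python sketch below only mimics the idea so the structure is visible: each synthetic frame gets randomized lighting, material and placement parameters, and the 2D bounding-box label is generated alongside it for free. All parameter names and ranges are hypothetical, not the Perception package's API.

```python
import random

def randomize_frame(seed):
    """Describe one synthetic frame with randomized scene parameters
    and a 2D bounding-box annotation, echoing what a Perception-style
    randomizer varies per capture."""
    rng = random.Random(seed)  # seeded, so every frame is reproducible
    return {
        "light_intensity": rng.uniform(0.2, 2.0),
        "light_color_temp_k": rng.uniform(3000, 8000),
        "material_roughness": rng.uniform(0.0, 1.0),
        "object_position": (rng.uniform(-5, 5), rng.uniform(-5, 5)),
        # Annotation produced at generation time -- no manual labeling.
        "bbox_2d": {
            "label": "person",
            "x": rng.randint(0, 540), "y": rng.randint(0, 380),
            "w": rng.randint(40, 100), "h": rng.randint(80, 200),
        },
    }

dataset = [randomize_frame(seed) for seed in range(1000)]
print(len(dataset), dataset[0]["bbox_2d"]["label"])  # 1000 person
```

Seeding each frame makes the whole synthetic dataset regenerable on demand, which is exactly the property that lets an ML team iterate on the randomization ranges instead of on a frozen image dump.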
Also, we can use a Unity project template, adding our own 3D models and poses to create a custom human synthetic dataset. At Luxonis, we're using this project as the basis for creating our own human synthetic dataset and training models. In general, we use Unity's computer vision tools to create large, complex datasets with a high level of customization in labelers, annotations and randomizations. This allows our ML team to iterate faster with our customers, without waiting for real-world data collection and manual annotation.

Since the introduction of the transformer architecture, CV tasks have become more accessible. Generative models like DALL-E 2 can also be used to create synthetic datasets, and NeRF offers a neural approach to generating novel points of view of known objects and scenes. It's clear all these innovations are catching the public's attention. On the other hand, access to better annotation tools, model zoos and libraries with pre-trained, ready-to-use models is helping drive wide adoption.

One key element contributing to the uptick in computer vision use is the fast evolution of vision processing units (VPUs), which currently allow users to perform model inference on device (without the need for any host) at 4 TOPS of processing power (on the current Intel Movidius Myriad X). The new generation of VPUs promises a big leap in capabilities, allowing even more complex CV/AI applications to be deployed at the edge.

Any application related to agriculture and farming always captures my attention. For example, there is now a cow tracking and monitoring CV/AI application that uses drones.

Our thanks to Gerard for sharing his perspective with us. Keep up with his latest thoughts on LinkedIn and Twitter, and learn more about how Unity can help your team generate synthetic data to improve computer vision model training with Unity Computer Vision.
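As a back-of-the-envelope illustration of the on-device compute constraint: a model's per-frame operation count times the target frame rate must fit inside the VPU's TOPS budget. The numbers and the utilization factor below are hypothetical assumptions for the sketch, not Myriad X benchmarks.

```python
def fits_budget(gops_per_frame, fps, vpu_tops, utilization=0.6):
    """Check whether a model fits an edge VPU's compute budget.

    gops_per_frame: billions of operations per inference
    fps:            required frames per second
    vpu_tops:       advertised peak, in trillions of ops/second
    utilization:    fraction of peak realistically achievable (assumed)
    """
    required_tops = gops_per_frame * fps / 1000.0  # GOPS -> TOPS
    return required_tops <= vpu_tops * utilization

# Hypothetical detector: 8 GOPS/frame at 30 fps on a 4 TOPS VPU.
print(fits_budget(8, 30, 4))    # 0.24 TOPS needed vs 2.4 usable -> True
print(fits_budget(120, 30, 4))  # 3.6 TOPS needed vs 2.4 usable -> False
```

This is why the extra effort mentioned above goes into lightweight models: the second, heavier detector only becomes deployable on the next VPU generation or after quantization and pruning shrink its per-frame cost.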