• It’s absolutely infuriating to see that three former Ubisoft executives have been convicted of sexual assault and psychological harassment, yet they walk free with suspended prison terms! This is not just a failure of the judicial system; it’s a glaring example of how society continues to protect the powerful while victimizing the vulnerable. These men exploited their positions, and what do they get? A slap on the wrist! This sends a horrific message that abuse can go unpunished. We need real justice, not this pathetic excuse for accountability. It’s time to demand more than token consequences for those in positions of power. Enough is enough!

    #Ubisoft #SexualAssault #JusticeForVictims #Accountability #ToxicCulture
    Former Ubisoft execs convicted for sexual assault, psychological harassment
    A French court has sentenced three former executives to suspended prison terms.
  • How addresses are collected and put on people finder sites

    Published June 14, 2025 10:00am EDT
    Your home address might be easier to find online than you think. A quick search of your name could turn up past and current locations, all thanks to people finder sites. These data broker sites quietly collect and publish personal details without your consent, making your privacy vulnerable with just a few clicks.

    How your address gets exposed online and who’s using it
    If you’ve ever searched for your name and found personal details, like your address, on unfamiliar websites, you’re not alone. People finder platforms collect this information from public records and third-party data brokers, then publish and share it widely. They often link your address to other details such as phone numbers, email addresses and even relatives.
    While this data may already be public in various places, these sites make it far easier to access and monetize at scale. In one recent breach, more than 183 million login credentials were exposed through an unsecured database. Many of these records were linked to physical addresses, raising concerns about how multiple sources of personal data can be combined and exploited.
    Although people finder sites claim to help reconnect friends or locate lost contacts, they also make sensitive personal information available to anyone willing to pay. This includes scammers, spammers and identity thieves who use it for fraud, harassment and targeted scams.

    How do people search sites get your home address?
    People search sites draw on two kinds of sources, public and private databases, to build your detailed profile, including your home address. They run automated searches on these databases using key information about you and add your home address from the search results.
    1. Public sources. Your home address can appear in:
    Property deeds: When you buy or sell a home, your name and address become part of the public record.
    Voter registration: You need to list your address when registering to vote.
    Court documents: Addresses appear in legal filings or lawsuits.
    Marriage and divorce records: These often include current or past addresses.
    Business licenses and professional registrations: If you own a business or hold a license, your address can be listed.
    These records are legal to access, and people finder sites collect and repackage them into detailed personal profiles.
    2. Private sources. Other sites buy your data from companies you’ve interacted with:
    Online purchases: When you buy something online, your address is recorded and can be sold to marketing companies.
    Subscriptions and memberships: Magazines, clubs and loyalty programs often share your information.
    Social media platforms: Your location or address details can be gathered indirectly from posts, photos or shared information.
    Mobile apps and websites: Some apps track your location.
    People finder sites buy this data from other data brokers and combine it with public records to build complete profiles that include address information.

    What are the risks of having your address on people finder sites?
    The Federal Trade Commission (FTC) advises people to request the removal of their private data, including home addresses, from people search sites because of the associated risks of stalking, scamming and other crimes. People search sites are a goldmine for cybercriminals looking to target and profile potential victims as well as plan comprehensive cyberattacks. Losses due to targeted phishing attacks increased by 33% in 2024, according to the FBI. Having your home address publicly accessible can lead to several risks:
    Stalking and harassment: Criminals can easily find your home address and threaten you.
    Identity theft: Scammers can use your address and other personal information to impersonate you or fraudulently open accounts.
    Unwanted contact: Marketers and scammers can use your address to send junk mail or run phishing or brushing scams.
    Increased financial risks: Insurance companies or lenders can use publicly available address information to unfairly decide your rates or eligibility.
    Burglary and home invasion: Criminals can use your location to target your home when you’re away or vulnerable.

    How to protect your home address
    The good news is that you can take steps to reduce the risks and keep your address private. Keep in mind, however, that data brokers and people search sites can re-list your information after some time, so you might need to request data removal periodically. I recommend a few ways to delete your private information, including your home address, from such websites.
    1. Use personal data removal services: Data brokers can sell your home address and other personal data to multiple businesses and individuals, so the key is to act fast. If you’re looking for an easier way to protect your privacy, a data removal service can do the heavy lifting for you, automatically requesting data removal from brokers and tracking compliance. While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. They aren’t cheap — and neither is your privacy. These services do the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.
    2. Opt out manually: Use a free scanner provided by a data removal service to check which people search sites list your address. Then visit each of those websites and look for an opt-out procedure or form; keywords like "opt out" and "delete my information" point the way. Follow each site’s opt-out process carefully and confirm they’ve removed all your personal information; otherwise, it may get relisted.
    3. Monitor your digital footprint: I recommend regularly searching online for your name to see if your location is publicly available. If only your social media profile pops up, there’s no need to worry. However, people finder sites tend to relist your private information, including your home address, after some time.
    4. Limit sharing your address online: Be careful about sharing your home address on social media, online forms and apps. Review privacy settings regularly, and only provide your address when absolutely necessary. Also, adjust your phone settings so that apps don’t track your location.

    Kurt’s key takeaways
    Your home address is more vulnerable than you think. People finder sites aggregate data from public records and private sources to display your address online, often without your knowledge or consent, and this can lead to serious privacy and safety risks. Taking proactive steps to protect your home address is essential: do it manually, or use a data removal tool for an easier process. By understanding how your location is collected and taking measures to remove your address from online sites, you can reclaim control over your personal data.
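    As a purely illustrative companion to steps 2 and 3 above (opting out manually and monitoring your digital footprint), here is a minimal self-audit sketch in Python. The site names, search URL patterns and opt-out URLs are hypothetical placeholders invented for the example, not real services or endorsements; real people-search sites structure their pages differently and frequently block automated requests, so treat this as a sketch of the idea rather than a working scanner.

```python
# Illustrative self-audit sketch (hypothetical sites and URL patterns).
# Checks whether your name appears on a few people-search sites and, if so,
# prints where that site's opt-out page is likely to be.

import requests

FULL_NAME = "Jane Q Example"  # hypothetical name to audit; replace with your own

# Hypothetical site entries; real people-search sites structure search and
# opt-out pages differently and often block or rate-limit automated requests.
SITES = [
    {
        "name": "example-people-finder.com",
        "search_url": "https://example-people-finder.com/search?q={query}",
        "opt_out_url": "https://example-people-finder.com/opt-out",
    },
    {
        "name": "example-records-lookup.com",
        "search_url": "https://example-records-lookup.com/people/{query}",
        "opt_out_url": "https://example-records-lookup.com/remove-my-info",
    },
]


def appears_on_site(site: dict, full_name: str) -> bool:
    """Return True if the site's search page appears to mention the name."""
    url = site["search_url"].format(query=full_name.replace(" ", "+"))
    try:
        response = requests.get(url, timeout=10)
    except requests.RequestException:
        return False  # network error, block, or timeout: treat as no result
    # Crude heuristic: the full name showing up anywhere in the response body.
    return full_name.lower() in response.text.lower()


if __name__ == "__main__":
    for site in SITES:
        if appears_on_site(site, FULL_NAME):
            print(f"Possible listing on {site['name']}; opt out at {site['opt_out_url']}")
        else:
            print(f"No obvious listing found on {site['name']}")
```

    In practice, the manual route the article describes (searching your name, then following each site’s documented opt-out form) remains the reliable approach; a script like this only helps you keep a checklist of which sites to revisit periodically.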
  • Harassment by Ubisoft executives left female staff terrified, French court hears

    Three former executives at the French video game company Ubisoft used their position to bully or sexually harass staff, leaving women terrified and feeling like pieces of meat, a French court has heard.
    The state prosecutor Antoine Haushalter said the trial of three senior game creators for alleged bullying, sexual harassment and, in one case, attempted sexual assault was a “turning point” for the gaming world. It is the first big trial to result from the #MeToo movement in the video games industry, and Haushalter said the case had revealed “overwhelming” evidence of harassment.
    In four days of hearings, female former staff members variously described being tied to a chair, forced to do handstands, subjected to constant comments about sex and their bodies, having to endure sexist and homophobic jokes, drawings of penises being stuck to computers, a manager who farted in workers’ faces or scribbled on women with marker pens, gave unsolicited shoulder massages, played pornographic films in an open-plan office, and another executive who cracked a whip near people’s heads. The three men deny all charges.
    Haushalter said “the world of video games and its subculture” had an element of “systemic” sexism and potential abuse. He said the #MeToo movement in the gaming industry had allowed people to speak out.
    “It’s not that these actions were not punished by the law before. It’s just that they were silenced, and from now on they will not be silenced,” he said.
    Ubisoft is a French family business that rose to become one of the biggest video game creators in the world. It has been behind several blockbusters including Assassin’s Creed, Far Cry and the children’s favourite Just Dance.
    The court in Bobigny, in Seine-Saint-Denis, heard that between 2010 and 2020 at Ubisoft’s offices in Montreuil, east of Paris, the three executives created an atmosphere of bullying and sexism that one member of staff likened to a “boys’ club”. One alleged victim told the court: “The sexual remarks and sexual jokes were almost daily.”
    Tommy François, 52, a former vice-president of editorial and creative services, is accused of sexual harassment, bullying and attempted sexual assault. He was alleged once to have tied a female member of staff to a chair with tape, pushed the chair into a lift and pressed a button at random. He was also accused of forcing one woman wearing a skirt to do handstands.
    “He was my superior and I was afraid of him. He made me do handstands. I did it to get it over with and get rid of him,” one woman told the court.
    At a 2015 office Christmas party with a Back to the Future theme, François allegedly told a member of staff that he liked her 1950s dress. He then allegedly stepped towards her to kiss her on the mouth as his colleagues restrained her by the arms and back. She shouted and broke free. François denied all allegations.
    Another witness told the court that during a video games fair in the US, François “grabbed me by the hair and kissed me by force”. She said no one reacted, and that when she reported it to her human resources manager she was told “don’t make a big thing of it”.
    The woman said that later, in a key meeting, another unnamed senior figure told staff he had seen her “snogging” François, “even though he knew it had been an assault”.
    She said François called her into his office to show her pictures of his naked backside on his computers and on a phone. “Once he drew a penis on my arm when I was in a video call with top management,” she said.
    The woman said these incidents made her feel “stupefied, humiliated and professionally discredited”.
    François told the court he denied all charges. He said there had been a “culture of joking around”. He said: “I never tried to harm anyone.”
    Serge Hascoët told the court: ‘I have never wanted to harass anyone and I don’t think I have.’ Photograph: Xavier Galiana/AFP/Getty Images
    Serge Hascoët, 59, Ubisoft’s former chief creative officer and second-in-command, was accused of bullying and sexual harassment. The court heard how at a meeting of staff on an away day he complained about a senior female employee, saying she clearly did not have enough sex and that he would “show how to calm her” by having sex with her in a meeting room in front of everyone.
    He was alleged to have handed a young female member of staff a tissue in which he had blown his nose, saying: “You can resell it, it’s worth gold at Ubisoft.”
    The court heard he made guttural noises in the office and talked about sex. Hascoët was also alleged to have bullied assistants by making them carry out personal tasks for him such as going to his home to wait for parcel deliveries.
    Hascoët denied all the charges. He said: “I have never wanted to harass anyone and I don’t think I have.”
    The former game director Guillaume Patrux, 41, is accused of sexual harassment and bullying. He was alleged to have punched walls, mimed hitting staff, cracked a whip near colleagues’ faces, threatened to carry out an office shooting and played with a cigarette lighter near workers’ faces, setting alight a man’s beard. He denied the charges.
    The panel of judges retired to consider their verdict, which will be handed down at a later date.
  • Diversity Think Tank: Inclusion matters – here’s why you should care

    It has long been said that an organisation’s greatest asset is its people. Employees are the driving force behind innovation, customer engagement, revenue growth, and company culture. In an era where political, social, and economic climates are in constant flux, particularly with ongoing debates surrounding diversity, equity and inclusion (DEI), it is more critical than ever for organisations to recognise the value of an inclusive workforce.
    There is a well-known saying, often attributed to Charles Maurice de Talleyrand, a French diplomat of the 18th and 19th centuries: “When America sneezes, the rest of Europe catches a cold.” It rings particularly true today, as shifts in political and social climates challenge the notion of diversity programmes. This is evident in the recent ruling by the UK Supreme Court that the legal definition of a woman is based on biological sex. However, history has shown that political regimes and societal norms can change rapidly. Regardless of where one stands on these issues, the reality remains that for an organisation to thrive, its people must feel valued, supported, and included.

    Despite the growing focus on DEI programmes since 2020, many past initiatives have not been as effective as hoped. To move forward, the DEI industry and DEI professionals must conduct a rigorous retrospective analysis: What has worked? What hasn’t been effective? How can we improve? Without tangible metrics and data-driven insights, it becomes difficult to measure the success and impact of these initiatives, and this lack of clear outcomes may have contributed to what some define as the “backlash against DEI.”
    A common challenge has been the prioritisation of diversity over inclusion, leaving organisations ill-prepared to integrate diverse talent effectively. This has often resulted in short-term disruption (what change management refers to as the “storming” phase of team development), which in turn has led to team friction, a lack of belonging and, ultimately, higher turnover rates among underrepresented employees. Under high pressure to deliver results, organisations have not allowed enough time for teams to progress to the “norming” and “performing” phases.
    To counter this, organisations must shift their mindset to focus on inclusion and belonging first. When a workplace fosters an inclusive culture, diverse talent is naturally welcomed, supported, and empowered to succeed. Rather than viewing differences as an obstacle, businesses must embrace them as strengths that drive innovation and growth. I often advocate for culture “add” rather than culture “fit”.
    As a former project and programme manager who transitioned into HR, I have witnessed firsthand the value of applying change management principles to DEI efforts. A successful change programme requires clearly defined goals, strong leadership buy-in, stakeholder engagement, a structured delivery methodology, and measurable outcomes. When these elements are absent, initiatives tend to falter. By adopting a structured, results-oriented, and data-driven approach, organisations can embed true inclusion into their core business strategy rather than treating it as a secondary initiative or a “nice to have”. It’s also important to regularly assess and reflect on what has worked, what hasn’t, and adapt and improve accordingly. In agile methodology, we call these retrospectives.
    Inclusion is key to successful DEI initiatives. In the past, these efforts may have created exclusion by failing to involve those who do not identify with the Equality Act’s nine protected characteristics, which has led to defensiveness and fear instead of an understanding of historical inequity. When you are accustomed to privilege, equality can feel like oppression or exclusion, so we need to focus on reframing inclusion work as beneficial to all rather than to a few. Storytelling, education and relatability help bring more allies on board and build an understanding that equity is crucial to achieving equality. Inclusion means widening opportunities for everyone rather than limiting them to a select few.

    A wealth of research underscores the positive impact of inclusivity on business success. According to CIPD, 70% of employees report that a strong DEI culture positively impacts their job satisfaction. Forbes also discovered that 88% of consumers are more likely to be loyal to a company that supports social and environmental causes.
    Additionally, employees working in inclusive environments are 50% more likely to stay with their current employer for more than three years. Just over half of UK consumers say a brand’s diversity and inclusion efforts influence their purchase decisions. In fact, brands failing to act on diversity, equity and inclusion risk losing out on £102bn in annual spend from marginalised groups. Boston Consulting Group’s research demonstrates that organisations with diverse leadership see 19% higher innovation revenues.
    Beyond traditional meritocratic arguments, one principle is clear: inclusivity must be at the heart of every business strategy. Organisations where employees feel seen, heard, and valued naturally attract a broader, more diverse talent pool. Such employees tend to be more engaged, loyal, and productive, further strengthening the organisation's overall success and their bottom line.
    The UK tech industry is poised for continued growth and innovation, with a focus on emerging technologies like AI and quantum computing; however, it also needs to address challenges such as talent shortages and international competition to maintain its position as a global leader. Almost 95% of employers looking for tech talent encountered a skills shortage in 2022, according to HR and recruitment firm Hays.
    In today’s job market, competitive salaries alone are not enough to attract and retain top talent. Employees now prioritise benefits, flexible working arrangements, career growth opportunities, and a sense of belonging. Organisations that prioritise inclusion, equal opportunities, and adaptability will be better positioned to navigate the evolving talent landscape and sustain long-term success.

    Ultimately, fostering an inclusive workplace is not merely a moral obligation; it is a business imperative. Companies that prioritise inclusion are more likely to attract top diverse talent, enhance employee engagement, and drive sustainable growth, while those that fail to create inclusive environments are setting themselves up for failure. We are seeing more and more sexual harassment, bullying and discrimination cases with high price tags. So, whether through loss of business, bad publicity or legal consequences, the price of exclusion can be staggering.
    Inclusion should not be seen as a separate HR initiative but as an integral part of an organisation’s DNA with all leaders owning an inclusion goal as part of their performance management. What gets measured, gets done! However, this can only happen if leaders and managers understand what inclusion truly means and they recognise that a diversity of voices, experiences and opinions will benefit their teams rather than hinder them.
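    As one hedged, illustrative example of “what gets measured, gets done”, the short Python sketch below computes a single simple indicator, retention by demographic group, from an anonymised HR extract. The group labels, record format and figures are invented for illustration only; real inclusion metrics should be agreed with HR, legal and employee representatives and read alongside qualitative feedback such as engagement surveys and exit interviews.

```python
# Illustrative sketch: compute retention by (self-described) demographic group
# from an anonymised HR extract. All record values below are invented for
# illustration; adapt the fields to your own, properly governed HR data.

from collections import defaultdict

# Hypothetical anonymised records: (group_label, left_during_period)
records = [
    ("group_a", False),
    ("group_a", True),
    ("group_a", False),
    ("group_b", False),
    ("group_b", True),
    ("group_b", False),
    ("group_b", False),
]

headcount = defaultdict(int)
leavers = defaultdict(int)

for group, left_during_period in records:
    headcount[group] += 1
    if left_during_period:
        leavers[group] += 1

# Report retention per group so gaps between groups are visible and trackable.
for group in sorted(headcount):
    retention = 1 - leavers[group] / headcount[group]
    print(f"{group}: {retention:.0%} retained over the period")
```

    Tracking even one such figure per leader, alongside the inclusion goals described above, turns the principle into something that can be reviewed in the same retrospectives the article recommends.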
    The future of work is about more than just employment—it is about providing opportunities for people to live, support their families, and achieve personal and professional growth. A poll conducted by Ipsos for PA Mediapoint indicates widespread support among the British public for key workplace DEI drives, including flexible working, gender pay gap reporting, and inclusivity training. People care about wellbeing, inclusion and culture, which is why it is so important that organisations create workplaces where everyone is valued, empowered, and given the chance to succeed. True prosperity comes from ensuring that every individual, regardless of background and differences, can flourish. So, inclusion does matter, particularly if you value creating a positive work environment that benefits employees, impacts the bottom line, and ensures everyone feels included rather than excluded.

    In the past, these efforts may have created exclusion by failing to involve those who do not identify with the Equality Act's nine protected characteristics (age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, sexual orientation). This has led to defensiveness and fear instead of an understanding of historical inequity. When you are accustomed to privilege, equality can feel like oppression or exclusion, and so we need to focus on how we can reframe inclusion work as being beneficial to all rather than to a few. Using storytelling, education, and relatability helps bring more allies on board, building an understanding that equity is crucial to achieving equality. Inclusion means widening opportunities for everyone rather than limiting them to a select few. A wealth of research underscores the positive impact of inclusivity on business success. According to CIPD, 70% of employees report that a strong DEI culture positively impacts their job satisfaction. Forbes also discovered that 88% of consumers are more likely to be loyal to a company that supports social and environmental causes. Additionally, employees working in inclusive environments are 50% more likely to stay with their current employer for more than three years. Just over half of UK consumers (53%) say a brand's diversity and inclusion efforts influence their purchase decisions. In fact, brands failing to act on diversity, equity and inclusion risk losing out on £102bn in annual spend from marginalised groups. Boston Consulting Group's research demonstrates that organisations with diverse leadership see 19% higher innovation revenues. Beyond traditional meritocratic arguments, one principle is clear: inclusivity must be at the heart of every business strategy. Organisations where employees feel seen, heard, and valued naturally attract a broader, more diverse talent pool. Such employees tend to be more engaged, loyal, and productive, further strengthening the organisation's overall success and its bottom line. The UK tech industry is poised for continued growth and innovation, with a focus on emerging technologies like AI and quantum computing; however, there is also a need to address challenges like talent shortages and international competition to maintain its position as a global leader. Almost 95% of employers looking for tech talent encountered a skills shortage in 2022, according to HR and recruitment firm Hays. In today's job market, competitive salaries alone are not enough to attract and retain top talent. Employees now prioritise benefits, flexible working arrangements, career growth opportunities, and a sense of belonging. Organisations that prioritise inclusion, equal opportunities, and adaptability will be better positioned to navigate the evolving talent landscape and sustain long-term success. Ultimately, fostering an inclusive workplace is not merely a moral obligation; it is a business imperative. Companies that prioritise inclusion are more likely to attract top diverse talent, enhance employee engagement, and drive sustainable growth. Companies that fail to create inclusive environments are setting themselves up for failure. We are seeing more and more sexual harassment, bullying and discrimination cases with high price tags. So, whether through loss of business, bad publicity or legal consequences, the price tag on exclusion can be staggering.
    Inclusion should not be seen as a separate HR initiative but as an integral part of an organisation's DNA, with all leaders owning an inclusion goal as part of their performance management. What gets measured gets done! However, this can only happen if leaders and managers understand what inclusion truly means and recognise that a diversity of voices, experiences and opinions will benefit their teams rather than hinder them. The future of work is about more than just employment—it is about providing opportunities for people to live, support their families, and achieve personal and professional growth. A poll conducted by Ipsos for PA Mediapoint indicates widespread support among the British public for key workplace DEI drives, including flexible working (71%), gender pay gap reporting (65%), and inclusivity training (64%). People care about wellbeing, inclusion and culture, which is why it is so important that organisations create workplaces where everyone is valued, empowered, and given the chance to succeed. True prosperity comes from ensuring that every individual, regardless of background and differences, can flourish. So, inclusion does matter, particularly if you value creating a positive work environment that benefits employees, impacts the bottom line, and ensures everyone feels included rather than excluded.
  • If You Thought Facebook Was Toxic Already, Now It's Replacing Its Human Moderators with AI

    FUTURISM.COM
    If You Thought Facebook Was Toxic Already, Now It's Replacing Its Human Moderators with AI
    Few companies in the history of capitalism have amassed as much wealth and influence as Meta.
    A global superpower in the information space, Meta — the parent company of Facebook, Instagram, WhatsApp, and Threads — has a market cap of $1.68 trillion at the time of writing, which for a rough sense of scale is more than the gross domestic product of Spain.
    In spite of its immense influence, none of its internal algorithms can be scrutinized by public watchdogs. Its host country, the United States, has largely turned a blind eye to its dealings in exchange for free use of Meta's vast surveillance capabilities.
    That lack of oversight coupled with Meta's near-omnipresence as a social utility has had devastating consequences throughout the world, manifesting in crises like the genocide of Muslims in Myanmar, or the systemic suppression of Palestinian rights organizations.
    How do you uncover the harms caused by one of the most powerful companies on earth? In the case of public violence, the evidence isn't hard to trace. However, Meta's unprecedented corporate dynasty also creates less obvious harms, which scores of scholars, researchers, and journalists are devoting entire careers to uncovering.
    One prominent group of said investigators is GLAAD, the Gay & Lesbian Alliance Against Defamation, which recently released its annual report on social media safety, privacy, and expression for LGBTQ people.
    The report notes that Meta has undergone a "particularly extreme" ideological shift over the past year, adding harmful exceptions to its content moderation policies while disproportionately suppressing LGBTQ users and their content. The tech giant has also failed to give LGBTQ users sovereignty over their own personal data, which it collects, analyzes, and wields to generate huge profits.
    While Meta collects all of our data — from which it draws over 95 percent of its revenue — the practice is particularly harmful to LGBTQ users, who then have to contend with algorithmic biases, non-consensual outing, harassment, and in some countries state oppression.
    "It's a dangerous time, certainly for trans people, who as a minority have been so ridiculously maligned, but also a dangerous time for gay people, openly bi[sexual] people, people who are different in any way," says Sarah Roberts, a UCLA professor and Director of the Center for Critical Internet Inquiry.
    To address these shortcomings and the dangers they introduce, GLAAD made a number of recommendations. One key suggestion was to improve moderation "by providing training for all content moderators focused on LGBTQ safety, privacy and expression." The media advocacy group doesn't mince words, adding that "AI systems should be used to flag for human review, not for automated removals."
    However, it doesn't look like Meta got the message.
    Weeks after GLAAD issued its findings, internal Meta documents leaked to NPR revealed the company's plan to hand 90 percent of its privacy and integrity reviews over to "artificial intelligence."
    This will impact nearly every new feature introduced to its platforms, where human moderators would typically evaluate new features for risks to privacy and safety, and the wellbeing of user groups like minors, immigrants, and LGBTQ people.
    Meta's internal risk assessment is an already opaque process, and Roberts notes that government attempts at risk oversight, like the EU's Digital Services Act, are likewise a labyrinth of filings which are largely dictated by the social media companies themselves.
    AI, chock full of biases and prone to errors — as admitted by Meta's own AI chief — is certain to make the situation even worse.
    Earlier this week, meanwhile, the Wall Street Journal revealed Meta's plans to fully automate advertising via the company's generative AI software, which will allow advertisers to "fully create and target ads" directly, with no human in the loop.
    This includes hyper-personalized ads, writes the WSJ, "so that users see different versions of the same ad in real time, based on factors such as geolocation."
    Data hoarders like Meta — which track you even when you're not using its platforms — have long been able to profile LGBTQ users based on gender identity and sexual orientation, including those who aren't publicly out.
    Removing any human from these already sinister practices serves to streamline operations and distance Meta from its own actions — "we didn't out gay users living under an oppressive government," the company can say, "even if our AI did." It's no coincidence that Meta had already disbanded its "Responsible AI" team as early as 2023.
    At the root of these decisions — Meta CEO Mark Zuckerberg's right wing turn notwithstanding — is the calculated drive to maximize revenue.
    "If there's no reason to rigorously moderate harmful content, then why pay so many content moderators? Why engage researchers to look into the circulation of this kind of content?" observes Roberts. "There ends up being a real cost savings there."
    "One of the things I've always said is that content moderation of social media is not primarily about protecting people, it's about brand management," she told Futurism. "It's about the platform managing its brand in order to make the most hospitable environment for advertisers."
    Sometimes these corporate priorities line up with progressive causes, like LGBTQ user safety or voter registration. But when they don't, Roberts notes, "dollars are dollars."
    "We are looking at multibillion-dollar companies, the most capitalized companies in the world, who have operated with impunity for many, many years," she said. "How do you convince them that they should care, when other powerful sectors are telling them the opposite?"
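    The distinction GLAAD draws is essentially an architectural one: an automated classifier can surface content, but the removal decision stays with a person. Below is a minimal Python sketch of that routing pattern; the classify stub and the threshold are invented for illustration and do not describe any real platform's system.

    # Minimal sketch of the "flag for human review, not automated removals" pattern.
    # classify() is a placeholder; a real system would call a trained model here.
    def classify(text):
        # Placeholder: return a policy-violation score between 0.0 and 1.0.
        return 0.0

    def route(text, flag_threshold=0.5):
        # Flag suspect posts for a human moderator; never remove automatically.
        score = classify(text)
        if score >= flag_threshold:
            return "human_review"  # a moderator makes the final call
        return "leave_up"          # below threshold: no action taken

    # The point of the pattern is that "remove" is not a value route() can return.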
  • Facebook sees rise in violent content and harassment after policy changes

    Meta has published the first of its quarterly integrity reports since Mark Zuckerberg walked back the company's hate speech policies and changed its approach to content moderation earlier this year. According to the reports, Facebook saw an uptick in violent content, bullying and harassment despite an overall decrease in the amount of content taken down by Meta.
    The reports are the first time Meta has shared data about how Zuckerberg's decision to upend Meta's policies has played out on the platform used by billions of people. Notably, the company is spinning the changes as a victory, saying that it reduced its mistakes by half while the overall prevalence of content breaking its rules "largely remained unchanged for most problem areas."
    There are two notable exceptions, however. Violent and graphic content increased from 0.06%-0.07% at the end of 2024 to 0.09% in the first quarter of 2025. Meta attributed the uptick to "an increase in sharing of violating content" as well as its own attempts to "reduce enforcement mistakes." Meta also saw a noted increase in the prevalence of bullying and harassment on Facebook, which increased from 0.06%-0.07% at the end of 2024 to 0.07%-0.08% at the start of 2025. Meta says this was due to an unspecified "spike" in violations in March. (Notably, this is a separate category from the company's hate speech policies, which were re-written to allow posts targeting immigrants and LGBTQ people.)
    Those may sound like relatively tiny percentages, but even small increases can be noticeable for a platform like Facebook that sees billions of posts every day. (Meta describes its prevalence metric as an estimate of how often rule-breaking content appears on its platform.)
    The report also underscores just how much less content Meta is taking down overall since it moved away from proactive enforcement of all but its most serious policies like child exploitation and terrorist content. Meta's report shows a significant decrease in the amount of Facebook posts removed for hateful content, for example, with just 3.4 million pieces of content "actioned" under the policy, the company's lowest figure since 2018. Spam removals also dropped precipitously from 730 million at the end of 2024 to just 366 million at the start of 2025. The number of fake accounts removed also declined notably on Facebook, from 1.4 billion to 1 billion. (Meta doesn't provide stats around fake account removals on Instagram.)
    At the same time, Meta claims it's making far fewer content moderation mistakes, which was one of Zuckerberg's main justifications for his decision to end proactive moderation. "We saw a roughly 50% reduction in enforcement mistakes on our platforms in the United States from Q4 2024 to Q1 2025," the company wrote in an update to its January post announcing its policy changes. Meta didn't explain how it calculated that figure, but said future reports would "include metrics on our mistakes so that people can track our progress."
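    To make the prevalence figures concrete, a quick back-of-the-envelope calculation helps. The Python sketch below assumes a purely illustrative 10 billion content views per day; Meta does not publish a comparable figure in this report, so treat the absolute numbers as order-of-magnitude only.

    # Rough arithmetic only: daily_views is an assumed figure, not Meta-reported data.
    daily_views = 10_000_000_000  # assumption: 10 billion content views per day

    def views_at_prevalence(prevalence_pct, views=daily_views):
        # Convert a prevalence percentage into an estimated count of violating views.
        return int(views * prevalence_pct / 100)

    before = views_at_prevalence(0.07)  # upper end of the reported Q4 2024 range
    after = views_at_prevalence(0.09)   # reported Q1 2025 figure for violent/graphic content
    print(f"~{before:,} violating views/day at 0.07%")
    print(f"~{after:,} violating views/day at 0.09%")
    print(f"~{after - before:,} additional violating views/day")
    # Under the assumed volume, a 0.02-point rise works out to roughly
    # two million extra views of violating content per day.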
    Meta is acknowledging, however, that there is at least one group where some proactive moderation is still necessary: teens. "At the same time, we remain committed to ensuring teens on our platforms are having the safest experience possible," the company wrote. "That’s why, for teens, we’ll also continue to proactively hide other types of harmful content, like bullying." Meta has been rolling out "teen accounts" for the last several months, which should make it easier to filter content specifically for younger users.
    The company also offered an update on how it's using large language models to aid in its content moderation efforts. "Upon further testing, we are beginning to see LLMs operating beyond that of human performance for select policy areas," Meta writes. "We’re also using LLMs to remove content from review queues in certain circumstances when we’re highly confident it does not violate our policies."
    The other major component of Zuckerberg's policy changes was the end of Meta's fact-checking partnerships in the United States. The company began rolling out its own version of Community Notes to Facebook, Instagram and Threads earlier this year, and has since expanded the effort to Reels and Threads replies. Meta didn't offer any insight into how effective its new crowd-sourced approach to fact-checking might be or how often notes are appearing on its platform, though it promised updates in the coming months.
    This article originally appeared on Engadget at https://www.engadget.com/social-media/facebook-sees-rise-in-violent-content-and-harassment-after-policy-changes-182651544.html?src=rss
  • Meta will reportedly soon use AI for most product risk assessments instead of human reviewers

    According to a report from NPR, Meta plans to shift the task of assessing its products' potential harms away from human reviewers, instead leaning more heavily on AI to speed up the process. Internal documents seen by the publication note that Meta is aiming to have up to 90 percent of risk assessments fall on AI, NPR reports, and is considering using AI reviews even in areas such as youth risk and "integrity," which covers violent content, misinformation and more. Unnamed current and former Meta employees who spoke with NPR warned AI may overlook serious risks that a human team would have been able to identify.
    Updates and new features for Meta's platforms, including Instagram and WhatsApp, have long been subjected to human reviews before they hit the public, but Meta has reportedly doubled down on the use of AI over the last two months. Now, according to NPR, product teams have to fill out a questionnaire about their product and submit this for review by the AI system, which generally provides an "instant decision" that includes the risk areas it's identified. They'll then have to address whatever requirements it laid out to resolve the issues before the product can be released.
    A former Meta executive told NPR that reducing scrutiny "means you're creating higher risks. Negative externalities of product changes are less likely to be prevented before they start causing problems in the world." In a statement to NPR, Meta said it would still tap "human expertise" to evaluate "novel and complex issues," and leave the "low-risk decisions" to AI. Read the full report over at NPR.
    It comes a few days after Meta released its latest quarterly integrity reports — the first since changing its policies on content moderation and fact-checking earlier this year. The amount of content taken down has unsurprisingly decreased in the wake of the changes, per the report. But there was a small rise in bullying and harassment, as well as violent and graphic content.
    This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-will-reportedly-soon-use-ai-for-most-product-risk-assessments-instead-of-human-reviewers-205416849.html?src=rss
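    The workflow NPR describes is effectively a gate: structured questionnaire answers go in, an automated verdict with flagged risk areas comes out, and only cases judged novel or complex are meant to reach a human. The Python sketch below shows that shape in miniature; the field names, risk areas and scoring rule are all invented for the example and are not Meta's actual criteria.

    # Illustrative gate for an AI-first risk review, loosely following the workflow
    # described by NPR. Field names and the scoring rule are invented for the example.
    HIGH_RISK_AREAS = {"youth_risk", "integrity", "privacy"}

    def assess(questionnaire):
        # Return an "instant decision": flagged risk areas plus a routing choice.
        flagged = [a for a in questionnaire.get("touches_areas", []) if a in HIGH_RISK_AREAS]
        novel = questionnaire.get("novel_feature", False)
        if novel or len(flagged) > 1:
            routing = "human_review"                # novel or multi-area changes escalate
        elif flagged:
            routing = "approved_with_requirements"  # AI lists conditions to resolve first
        else:
            routing = "auto_approved"               # treated as a low-risk decision
        return {"risk_areas": flagged, "routing": routing}

    # Example: a change touching only youth risk never reaches a human reviewer here,
    # which is the kind of gap the former employees quoted above are warning about.
    print(assess({"touches_areas": ["youth_risk"], "novel_feature": False}))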
  • Texas Solicitor General Resigns After Sharing Bizarre Fantasy About An Asteroid

    FUTURISM.COM
    Texas Solicitor General Resigns After Sharing Bizarre Fantasy About An Asteroid
    Content warning for discussions of sexual violence and harassment.
    Usually asteroids are distant features of the cosmos, occasionally crashing down to Earth or threatening the planet.
    Not so for former Texas solicitor general Judd Stone, who's been accused of making the distant space rocks a focal point in violent and bizarre fantasies about a coworker that he regaled to other people.
    Needless to say, that's wildly inappropriate and unacceptable. As 404 Media reports, Stone has now resigned from his position after a damning letter aired the allegations, which involved — apologies in advance — a phallic asteroid used as a sexual implement, like some sort of grotesque riff on a Chuck Tingle book.
    According to a letter sent by Texas' first assistant attorney general Brent Webster, Stone — who had at the time taken a leave of absence to defend Texas attorney general Ken Paxton in his impeachment trial — joked during a 2023 lunch with other government employees about a "disturbing sexual fantasy" that involved a "cylindrical asteroid." During the debacle, Stone described using said asteroid to sexually assault Webster while his wife and children watched.
    That letter, which is five pages long and full of additional allegations of sexual harassment and lies Stone allegedly told, is replete with gory details about this case that we won't regale you with.
    What's striking to us at Futurism, however, is the "cylindrical asteroid" of it all. Where did Texas's now-former solicitor general get such an idea, and what could it mean about who he is as a person — and, more importantly, how did it affect the people he worked with?
    While we don't have answers to those first two, it's quite clear from the letter how Stone's gruesome asteroid "joke" affected him and his colleagues. Along with Webster's own concerns about Stone's violent state of mind and his fear that his family could be in danger, the assistant AG added that a female employee who had been present for that stomach-turning lunch discussion had been so upset by the topic that she excused herself — only to return to japes from others at the table who said she "couldn't handle people talking about dicks."
    That same woman "exhibited emotional distress" when recounting the anecdote to Webster, and also told him, through tears, that she had been sexually harassed on other occasions by Stone and was concerned about the way he treated women.
    When confronted with the sexual harassment allegations against him, Stone admitted to them all immediately, including the bizarre asteroid fantasy. He was, as 404 notes, given the grace to quit or be fired, and chose the former.
  • Trump, DEI and UK technology businesses

    Ever since Donald Trump returned to the White House earlier this year, diversity, equity and inclusion (DEI) initiatives have rarely been out of the headlines. One of Trump's first acts as president was to sign two executive orders shutting down DEI programmes within the US federal government, with those government employees working in diversity roles placed on paid leave.
    While Trump hasn’t yet targeted private sector DEI schemes, a recent Bloomberg analysis found that the top companies in the US S&P index had scaled back their DEI commitments since the president’s re-election. What does this mean for tech companies in the UK, and could a similar retreat happen in this country?
    While in the UK the Reform party has promised to ditch DEI initiatives from local government councils that it controls, a survey by Censuswide shortly after Trump’s election reported that almost three quarters of the more than 1000 organisations who responded were running DEI programmes, with more than a quarter planning to increase their budgets for these schemes in the coming year. These figures were backed up by a recent Ipsos survey that found widespread support among the UK public for a range of DEI initiatives such as flexible working arrangements, gender pay gap reporting and inclusivity training, with around two in five of those surveyed disapproving of Trump’s actions restricting DEI programmes in the US.

    So there doesn’t appear to be much public support in the UK for a retreat from DEI initiatives; quite the opposite, in fact. The legal landscape in this country in relation to diversity and inclusion issues is also significantly different to that in the US.
    The Equality Act 2010 protects UK employees from a range of types of discrimination, including on the grounds of sex, race, disability and age, from the first day of employment, and DEI programmes can be a necessary step in helping companies prevent discrimination and defend against such claims when they are brought. Indeed, Employment Tribunals will often expect companies, particularly larger employers, to have DEI policies in place as standard, and the lack of such policies leaves companies much more exposed to successful discrimination claims.
    The introduction last year of an obligation on employers to take reasonable steps to prevent sexual harassment has only increased the potential liabilities for companies that don’t take DEI issues seriously. Having a clear and regularly updated policy specifically dealing with sexual harassment will be one of the first steps in demonstrating that the reasonable-steps duty has been met.
    The existing legal framework in the UK therefore prevents companies from dramatically reducing their DEI commitments even if they wanted to, and the current political climate is likely to mean that obligations on employers in this area will only increase.
    The Labour government is currently consulting on proposals to introduce ethnicity and disability pay-gap reporting, in addition to the existing gender pay-gap reporting requirements, along with pay transparency rules similar to those currently being introduced in the EU, and on bringing dual discrimination (where discrimination is due to a combination of protected characteristics) into effect. These initiatives all show the degree to which the UK, and the EU more generally, is taking a different approach to DEI issues than the US at the moment.
    For those employees who value DEI programmes, the US retrenchment in this area gives UK employers, particularly IT companies, a strong opportunity to position themselves as an attractive option for global talent.
    A commitment to DEI initiatives can be a genuine point of difference in attracting the best candidates, especially younger employees, as the recent Ipsos survey showed. And whereas the US might have been a key destination for expat IT employees in the past, the UK could be well placed to close the gap over the next few years.
    Nick Le Riche is a Partner in the Employment Law team at the international law firm Broadfield.

    Read more on DEI
    How has US pushback affected UK DEI?
    The DEI backlash is over – we are talking a full scale revolt
    What companies are rolling back DEI policies in 2025?
    #trump #dei #technology #businesses
    WWW.COMPUTERWEEKLY.COM
    Trump, DEI and UK technology businesses
    0 Comments 0 Shares