• IT Pros ‘Extremely Worried’ About Shadow AI: Report

    By John P. Mello Jr.
    June 4, 2025 5:00 AM PT

    Shadow AI — the use of AI tools under the radar of IT departments — has information technology directors and executives worried, according to a report released Tuesday.
    The report, based on a survey of 200 IT directors and executives at U.S. enterprise organizations of 1,000 employees or more, found nearly half of the IT pros (46%) were “extremely worried” about shadow AI, and almost all of them (90%) were concerned about it from a privacy and security viewpoint.
    “As our survey found, shadow AI is resulting in palpable, concerning outcomes, with nearly 80% of IT leaders saying it has resulted in negative incidents such as sensitive data leakage to Gen AI tools, false or inaccurate results, and legal risks of using copyrighted information,” said Krishna Subramanian, co-founder of Campbell, Calif.-based Komprise, the unstructured data management company that produced the report.
    “Alarmingly, 13% say that shadow AI has caused financial or reputational harm to their organizations,” she told TechNewsWorld.
    Subramanian added that shadow AI poses a much greater problem than shadow IT, which primarily focuses on departmental power users purchasing cloud instances or SaaS tools without obtaining IT approval.
    “Now we’ve got an unlimited number of employees using tools like ChatGPT or Claude AI to get work done, but not understanding the potential risk they are putting their organizations at by inadvertently submitting company secrets or customer data into the chat prompt,” she explained.
    “The data risk is large and growing in still unforeseen ways because of the pace of AI development and adoption and the fact that there is a lot we don’t know about how AI works,” she continued. “It is becoming more humanistic all the time and capable of making decisions independently.”
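    The kind of leakage Subramanian describes is often addressed with a pre-submission check on prompt text. The sketch below is purely illustrative and is not from the Komprise report: it scans an outgoing prompt for a few obvious patterns (API keys, email addresses, Social Security numbers) before it would be allowed to reach a Gen AI tool. The pattern list, function names, and sample prompt are all assumptions; a production data loss prevention rule set would be far broader.
```python
import re

# Illustrative patterns only; a real DLP rule set would be far broader
# and maintained by the security team.
SECRET_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk-|AKIA)[A-Za-z0-9_-]{16,}"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in an outgoing chat prompt."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    prompt = "Summarize this ticket from jane.doe@example.com, SSN 123-45-6789"
    findings = check_prompt(prompt)
    if findings:
        print("Blocked before reaching the Gen AI tool:", ", ".join(findings))
    else:
        print("Prompt allowed")
```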
    Shadow AI Introduces Security Blind Spots
    Shadow AI is the next step after shadow IT and is a growing risk, noted James McQuiggan, security awareness advocate at KnowBe4, a security awareness training provider in Clearwater, Fla.
    “Users use AI tools for content, images, or applications and to process sensitive data or company information without proper security checks,” he told TechNewsWorld. “Most organizations will have privacy, compliance, and data protection policies, and shadow AI introduces blind spots in the organization’s data loss prevention.”
    “The biggest risk with shadow AI is that the AI application has not passed through a security analysis as approved AI tools may have been,” explained Melissa Ruzzi, director of AI at AppOmni, a SaaS security management software company, in San Mateo, Calif.
    “Some AI applications may be training models using your data, may not adhere to relevant regulations that your company is required to follow, and may not even have the data storage security level you deem necessary to keep your data from being exposed,” she told TechNewsWorld. “Those risks are blind spots of potential security vulnerabilities in shadow AI.”
    Krishna Vishnubhotla, vice president of product strategy at Zimperium, a mobile security company based in Dallas, noted that shadow AI extends beyond unapproved applications and involves embedded AI components that can process and disseminate sensitive data in unpredictable ways.
    “Unlike traditional shadow IT, which may be limited to unauthorized software or hardware, shadow AI can run on employee mobile devices outside the organization’s perimeter and control,” he told TechNewsWorld. “This creates new security and compliance risks that are harder to track and mitigate.”
    Vishnubhotla added that the financial impact of shadow AI varies, but unauthorized AI tools can lead to significant regulatory fines, data breaches, and loss of intellectual property. “Depending on the scale of the agency and the sensitivity of the data exposed, the costs could range from millions to potentially billions in damages due to compliance violations, remediation efforts, and reputational harm,” he said.
    “Federal agencies handling vast amounts of sensitive or classified information, financial institutions, and health care organizations are particularly vulnerable,” he said. “These sectors collect and analyze vast amounts of high-value data, making AI tools attractive. But without proper vetting, these tools could be easily exploited.”
    Shadow AI Everywhere and Easy To Use
    Nicole Carignan, SVP for security and AI strategy at Darktrace, a global cybersecurity AI company, predicts an explosion of tools that utilize AI and generative AI within enterprises and on devices used by employees.
    “In addition to managing AI tools that are built in-house, security teams will see a surge in the volume of existing tools that have new AI features and capabilities embedded, as well as a rise in shadow AI,” she told TechNewsWorld. “If the surge remains unchecked, this raises serious questions and concerns about data loss prevention, as well as compliance concerns as new regulations start to take effect.”
    “That will drive an increasing need for AI asset discovery — the ability for companies to identify and track the use of AI systems throughout the enterprise,” she said. “It is imperative that CIOs and CISOs dig deep into new AI security solutions, asking comprehensive questions about data access and visibility.”
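    What Carignan calls AI asset discovery can start as something quite simple: tallying which generative AI endpoints show up in logs the organization already collects. The sketch below is a hypothetical illustration, not a description of any vendor's product; the domain shortlist, the proxy-log CSV format with 'user' and 'host' columns, and the file name are all assumptions.
```python
import csv
from collections import Counter

# Hypothetical shortlist of Gen AI endpoints; a real inventory would be much
# longer and kept current by the security team.
GENAI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com", "copilot.microsoft.com"}

def discover_ai_usage(proxy_log_csv: str) -> Counter:
    """Count requests to known Gen AI domains, grouped by user, from a proxy log.

    Assumes a CSV export with 'user' and 'host' columns; real formats vary by vendor.
    """
    usage = Counter()
    with open(proxy_log_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row["host"] in GENAI_DOMAINS:
                usage[(row["user"], row["host"])] += 1
    return usage

if __name__ == "__main__":
    for (user, host), hits in discover_ai_usage("proxy_log.csv").most_common(10):
        print(f"{user:<20} {host:<25} {hits} requests")
```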
    Shadow AI has become so rampant because it is everywhere and easy to access through free tools, maintained Komprise’s Subramanian. “All you need is a web browser,” she said. “Enterprise users can inadvertently share company code snippets or corporate data when using these Gen AI tools, which could create data leakage.”
    “These tools are growing and changing exponentially,” she continued. “It’s really hard to keep up. As the IT leader, how do you track this and determine the risk? Managers might be looking the other way because their teams are getting more done. You may need fewer contractors and full-time employees. But I think the risk of the tools is not well understood.”
    “The low, or in some cases non-existent, learning curve associated with using Gen AI services has led to rapid adoption, regardless of prior experience with these services,” added Satyam Sinha, CEO and co-founder of Acuvity, a provider of runtime Gen AI security and governance solutions, in Sunnyvale, Calif.
    “Whereas shadow IT focused on addressing a specific challenge for particular employees or departments, shadow AI addresses multiple challenges for multiple employees and departments. Hence, the greater appeal,” he said. “The abundance and rapid development of Gen AI services also means employees can find the right solution. Of course, all these traits have direct security implications.”
    Banning AI Tools Backfires
    To support innovation while minimizing the threat of shadow AI, enterprises must take a three-pronged approach, asserted Kris Bondi, CEO and co-founder of Mimoto, a threat detection and response company in San Francisco. They must educate employees on the dangers of unsupported, unmonitored AI tools, create company protocols for what is not acceptable use of unauthorized AI tools, and, most importantly, provide AI tools that are sanctioned.
    “Explaining why one tool is sanctioned and another isn’t greatly increases compliance,” she told TechNewsWorld. “It does not work for a company to have a zero-use mandate. In fact, this results in an increase in stealth use of shadow AI.”
    In the very near future, more and more applications will be leveraging AI in different forms, so the reality of shadow AI will be present more than ever, added AppOmni’s Ruzzi. “The best strategy here is employee training and AI usage monitoring,” she said.
    “It will become crucial to have in place a powerful SaaS security tool that can go beyond detecting direct AI usage of chatbots to detect AI usage connected to other applications,” she continued, “allowing for early discovery, proper risk assessment, and containment to minimize possible negative consequences.”
    “Shadow AI is just the beginning,” KnowBe4’s McQuiggan added. “As more teams use AI, the risks grow.”
    He recommended that companies start small, identify what’s being used, and build from there. They should also get legal, HR, and compliance involved.
    “Make AI governance part of your broader security program,” he said. “The sooner you start, the better you can manage what comes next.”

    John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News.

  • What DEI actually does for the economy

    Few issues in the U.S. today are as controversial as diversity, equity, and inclusion—commonly referred to as DEI.

    Although the term didn’t come into common usage until the 21st century, DEI is best understood as the latest stage in a long American project. Its egalitarian principles are seen in America’s founding documents, and its roots lie in landmark 20th-century efforts such as the 1964 Civil Rights Act and affirmative action policies, as well as movements for racial justice, gender equity, disability rights, veterans, and immigrants.

    These movements sought to expand who gets to participate in economic, educational, and civic life. DEI programs, in many ways, are their legacy.

    Critics argue that DEI is antidemocratic, that it fosters ideological conformity, and that it leads to discriminatory initiatives, which they say disadvantage white people and undermine meritocracy. Those defending DEI argue just the opposite: that it encourages critical thinking and promotes democracy—and that attacks on DEI amount to a retreat from long-standing civil rights law.

    Yet missing from much of the debate is a crucial question: What are the tangible costs and benefits of DEI? Who benefits, who doesn’t, and what are the broader effects on society and the economy?

    As a sociologist, I believe any productive conversation about DEI should be rooted in evidence, not ideology. So let’s look at the research.

    Who gains from DEI?

    In the corporate world, DEI initiatives are intended to promote diversity, and research consistently shows that diversity is good for business. Companies with more diverse teams tend to perform better across several key metrics, including revenue, profitability, and worker satisfaction.

    Businesses with diverse workforces also have an edge in innovation, recruitment, and competitiveness, research shows. The general trend holds for many types of diversity, including age, race, and ethnicity, and gender.

    A focus on diversity can also offer profit opportunities for businesses seeking new markets. Two-thirds of American consumers consider diversity when making their shopping choices, a 2021 survey found. So-called “inclusive consumers” tend to be female, younger, and more ethnically and racially diverse. Ignoring their values can be costly: When Target backed away from its DEI efforts, the resulting backlash contributed to a sales decline.

    But DEI goes beyond corporate policy. At its core, it’s about expanding access to opportunities for groups historically excluded from full participation in American life. From this broader perspective, many 20th-century reforms can be seen as part of the DEI arc.

    Consider higher education. Many elite U.S. universities refused to admit women until well into the 1960s and 1970s. Columbia, the last Ivy League university to go co-ed, started admitting women in 1982. Since the advent of affirmative action, women haven’t just closed the gender gap in higher education—they outpace men in college completion across all racial groups. DEI policies have particularly benefited women, especially white women, by expanding workforce access.

    Similarly, the push to desegregate American universities was followed by an explosion in the number of Black college students—a number that has increased by 125% since the 1970s, twice the national rate. With college gates open to more people than ever, overall enrollment at U.S. colleges has quadrupled since 1965. While there are many reasons for this, expanding opportunity no doubt plays a role. And a better-educated population has had significant implications for productivity and economic growth.

    The 1965 Immigration Act also exemplifies DEI’s impact. It abolished racial and national quotas, enabling the immigration of more diverse populations, including from Asia, Africa, southern and eastern Europe, and Latin America. Many of these immigrants were highly educated, and their presence has boosted U.S. productivity and innovation.

    Ultimately, the U.S. economy is more profitable and productive as a result of immigrants.

    What does DEI cost?

    While DEI generates returns for many businesses and institutions, it does come with costs. In 2020, corporate America spent an estimated $7.5 billion on DEI programs. And in 2023, the federal government spent more than $100 million on DEI, including $38.7 million by the Department of Health and Human Services and another $86.5 million by the Department of Defense.

    The government will no doubt be spending less on DEI in 2025. One of President Donald Trump’s first acts in his second term was to sign an executive order banning DEI practices in federal agencies—one of several anti-DEI executive orders currently facing legal challenges. More than 30 states have also introduced or enacted bills to limit or entirely restrict DEI in recent years. Central to many of these policies is the belief that diversity lowers standards, replacing meritocracy with mediocrity.

    But a large body of research disputes this claim. For example, a 2023 McKinsey & Company report found that companies with higher levels of gender and ethnic diversity will likely financially outperform those with the least diversity by at least 39%. Similarly, concerns that DEI in science and technology education leads to lowering standards aren’t backed up by scholarship. Instead, scholars are increasingly pointing out that disparities in performance are linked to built-in biases in courses themselves.

    That said, legal concerns about DEI are rising. The Equal Employment Opportunity Commission and the Department of Justice have recently warned employers that some DEI programs may violate Title VII of the Civil Rights Act of 1964. Anecdotal evidence suggests that reverse discrimination claims, particularly from white men, are increasing, and legal experts expect the Supreme Court to lower the burden of proof needed by complainants for such cases.

    The issue remains legally unsettled. But while the cases work their way through the courts, women and people of color will continue to shoulder much of the unpaid volunteer work that powers corporate DEI initiatives. This pattern raises important equity concerns within DEI itself.

    What lies ahead for DEI?

    People’s fears of DEI are partly rooted in demographic anxiety. Since the U.S. Census Bureau projected in 2008 that non-Hispanic white people would become a minority in the U.S. by the year 2042, nationwide news coverage has amplified white fears of displacement.

    Research indicates many white men experience this change as a crisis of identity and masculinity, particularly amid economic shifts such as the decline of blue-collar work. This perception aligns with research showing that white Americans are more likely to believe DEI policies disadvantage white men than white women.

    At the same time, in spite of DEI initiatives, women and people of color are most likely to be underemployed and living in poverty regardless of how much education they attain. The gender wage gap remains stark: In 2023, women working full time earned a median weekly salary of $1,005 compared with $1,202 for men—just 83.6% of what men earned. Over a 40-year career, that adds up to hundreds of thousands of dollars in lost earnings. For Black and Latina women, the disparities are even worse, with one source estimating lifetime losses at $976,800 and $1.2 million, respectively.
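
    As a quick sanity check on those figures, the arithmetic below reproduces the 83.6% ratio and shows how the weekly gap compounds over a career, assuming constant 2023 dollars and 52 paid weeks per year (a simplification that ignores raises, inflation, and career breaks).

```python
# Back-of-the-envelope check of the wage-gap figures cited above,
# assuming constant 2023 dollars and 52 paid weeks per year.
women_weekly, men_weekly = 1_005, 1_202

ratio = women_weekly / men_weekly                # ~0.836, i.e. 83.6%
annual_gap = (men_weekly - women_weekly) * 52    # ~$10,244 per year
career_gap = annual_gap * 40                     # ~$409,760 over 40 years

print(f"Earnings ratio: {ratio:.1%}")
print(f"Annual gap: ${annual_gap:,}")
print(f"40-year gap: ${career_gap:,}")
```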

    Racism, too, carries an economic toll. A 2020 analysis from Citi found that systemic racism has cost the U.S. economy $16 trillion since 2000. The same analysis found that addressing these disparities could have boosted Black wages by $2.7 trillion, added up to $113 billion in lifetime earnings through higher college enrollment, and generated $13 trillion in business revenue, creating 6.1 million jobs annually.

    In a moment of backlash and uncertainty, I believe DEI remains a vital if imperfect tool in the American experiment of inclusion. Rather than abandon it, the challenge now, from my perspective, is how to refine it: grounding efforts not in slogans or fear, but in fairness and evidence.

    Rodney Coates is a professor of critical race and ethnic studies at Miami University.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.
  • This AI-generated Fortnite video is a bleak glimpse at our future

    Earlier this week, Google unveiled Flow, a tool that can be used to generate AI video with ease. Users can submit text prompts or give Veo, the AI model that Flow uses, the digital equivalent of a mood board in exchange for eight second clips. From there, users can direct Flow to patch together different clips to form a longer stream of footage, potentially allowing for the creation of entire films. Immediately, people experimented with asking the AI to generate gameplay footage — and the tools are shockingly good at looking like games that you might recognize.

    Already, one video has amassed millions of views as onlookers are in awe over how easily the AI footage could be mistaken for actual Fortnite gameplay. According to Matt Shumer, who originally generated the footage, the prompt he entered to produce this content never mentioned Fortnite by name. What he apparently wrote was, “Streamer getting a victory royale with just his pickaxe.”

    Uhhh… I don't think Veo 3 is supposed to be generating Fortnite gameplay. pic.twitter.com/bWKruQ5Nox — Matt Shumer, May 21, 2025

    Google did not respond to a request for comment over whether or not Veo should be generating footage that mimics copyrighted material. However, this does not appear to be an isolated incident. Another user got Veo to spit out something based on the idea of GTA 6. The result is probably a far cry from the realistic graphics GTA 6 has displayed in trailers thus far, but the gameplay still successfully replicates the aesthetic Rockstar is known for:

    We got Veo 3 playing GTA 6 before we got GTA 6! pic.twitter.com/OM63yf0CKK — Sherveen Mashayekhi, May 20, 2025

    Though there are limitations — eight seconds is a short period of time, especially compared to the hours of material that human streamers generate — it’s undoubtedly an impressive piece of technology that augurs a specific pathway for the future of livestreams. We’ve already got AI-powered Twitch streamers like Neuro-sama, which hooks up a large language model to a text-to-speech program that allows the chibi influencer to speak to her viewers. Neuro-sama learns from other actual Twitch streamers, which makes her personality as malleable as it is chaotic.
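
    The general pattern behind an AI streamer like the one described above is a loop that turns incoming chat into a spoken reply. The sketch below is a hypothetical outline of that loop, not Neuro-sama's actual implementation: generate_reply and synthesize_speech are placeholder stand-ins for whatever language model and text-to-speech services a given streamer wires together.

```python
import queue
import time

def generate_reply(chat_message: str) -> str:
    """Hypothetical stand-in for a call to a large language model."""
    return f"Thanks for the message! You said: {chat_message}"

def synthesize_speech(text: str) -> bytes:
    """Hypothetical stand-in for a text-to-speech engine returning audio bytes."""
    return text.encode("utf-8")  # placeholder; real TTS would return a waveform

def run_stream_loop(chat_queue: "queue.Queue[str]") -> None:
    """Continuously turn incoming chat into spoken replies, as an AI streamer might."""
    while True:
        try:
            message = chat_queue.get(timeout=1.0)
        except queue.Empty:
            break  # a real stream would idle or fill dead air with banter instead
        reply = generate_reply(message)
        audio = synthesize_speech(reply)
        print(f"[viewer] {message}\n[streamer] {reply} ({len(audio)} bytes of audio)")
        time.sleep(0.1)  # pacing so replies don't pile up

if __name__ == "__main__":
    q: "queue.Queue[str]" = queue.Queue()
    q.put("Can you beat this level without taking damage?")
    run_stream_loop(q)
```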

    Imagine, for a moment, if an AI streamer didn’t need to rely on an actual game to endlessly entertain its viewers. Most games have a distinct beginning and end, and even live service games cannot endlessly produce new material. The combination of endless entertainment hosted by a personality who never needs to eat or sleep is a powerful if not terrifying combo, no? In January, Neuro-sama briefly became one of the top ten most-subscribed Twitch channels, according to stats website Twitch Tracker.

    On top of that, an AI personality can sidestep many of the issues inherent to parasocial relationships. An AI cannot be harassed, swatted, or stalked by traditional means. An AI can still offend its viewers, but blame and responsibility in such instances are hazy concepts. AI-on-AI content — meaning, an AI streamer showing off AI footage — seems like the natural end point for the trends we’re seeing on platforms like Twitch.

    Twitch, for its part, already has a category for AI content. Its policies do not address the use of AI content beyond banning deepfake porn, but sexually explicit content of that nature wouldn’t be allowed regardless of source.

    “This topic is very much on our radar, and we are always monitoring emerging behaviors to ensure our policies remain relevant to what’s happening on our service,” a Twitch blog post from 2023 on deepfakes states. Ex-Twitch CEO Dan Clancy — who has a PhD in artificial intelligence — seemed confident about the opportunities that AI might afford Twitch streamers when Business Insider asked him about it in 2024. Clancy called AI a “boon” for Twitch that could potentially generate “endless” stimuli to react to.

    Would the general populace really be receptive to AI-on-AI content, though? Slurs aside, Fortnite’s AI Darth Vader seemed to be a hit. At the same time, nearly all generative models tend to spawn humans who have an unsettling aura. Everyone is laughing, yet no one looks happy. The cheer is forced in a way where you can practically imagine someone off-frame, menacingly holding a gun to the AI’s head. Like a dream where the more people smile, the closer things get to a nightmare. Everything is as perfect as it is hollow.

    Until the technology improves, any potential entertainer molded in the image of stock photography risks repulsing its viewers. Yet the internet is already slipping away from serving the needs of real human beings. Millions of bots roam about Twitch, dutifully inflating the views of streamers. Human beings will always crave the company of other people, sure. Much like mass production did for artisanal crafts, a future where our feeds are taken over by AI might just exponentially raise the value of authenticity and the human touch.

    But 2025 was the first year in which internet traffic was determined to come more from bots than from people. It’s already a bot’s world out there. We’re just breathing in it.
  • "One Big Beautiful Bill": House backs Trump plan to freeze state AI laws for a decade

    The big picture: The US House of Representatives narrowly approved President Donald Trump's "One Big Beautiful Bill," clearing the path for sweeping changes to the country's tax code and immigration policy. The bill also contains a contentious clause that blocks states from regulating artificial intelligence for the next 10 years.
    The moratorium applies not only to AI models but also to any products or services integrating AI, effectively banning and overriding state regulations in those areas. The restriction affects several critical sectors, including automotive, consumer IoT, social media, medical equipment, and more.
    Critics argue the clause could grant rogue developers a free pass to build AI systems that harm public safety, security, and well-being. They also contend that the bill undermines the federal system by restricting states from creating and enforcing regulations and impeding their right to self-governance.

    Some experts – and even Republican senators – warn that the bill could jeopardize national security and economic stability in ways not fully understood. Senators Marsha Blackburn of Tennessee and Josh Hawley of Missouri argue it will make it easier to create deepfakes and derail bipartisan efforts to confront AI-related threats.
    Non-profit advocacy groups, like the Electronic Frontier Foundation, have raised strong objections to the bill, calling it Big Tech's effort to dismantle guardrails around artificial intelligence. The group also urged Congress to reject what it described as a damaging proposal.
    Supporters of the bill argue that the moratorium is essential for US companies to compete with state-backed Chinese tech firms. They contend that regulations hinder innovation and could severely weaken America's chances of leading the world in artificial intelligence. Backers also describe the One Big Beautiful Bill as a "generational opportunity" to implement the long-term changes voters demanded.

    The bill still faces Senate approval before President Trump can sign it into law. However, political commentators across the spectrum believe Trump may struggle to convince Senators that limiting state-level legislation and infringing state sovereignty is the right approach. The outcome could have lasting implications for balancing power between federal and state governments, shaping how the country regulates emerging technologies.
  • Why Starbucks is banning orders under certain names in South Korea

    Starbucks in South Korea has barred customers from using the names of South Korea’s six presidential candidates in their orders ahead of next month’s presidential election.

    A Starbucks Korea spokesperson told NBC News the policy was introduced “in order to prevent inappropriate and abusive use of the names.”

    The decision comes as South Koreans have increasingly used their Starbucks orders to make a political statement—ordering via app under presidential candidates’ names, and using phrases in support of or opposition to them, forcing baristas to call them out for pickup, per NBC. Some examples of those orders include: “Arrest Yoon Suk Yeol” and “[opposition leader] Lee Jae-myung is a spy,” per the BBC.

    According to Starbucks, the company needs to “maintain political neutrality during election season” and will lift the ban on June 3 after the election, the BBC reported.

    Like many South Korean businesses, Starbucks is seeking neutrality amid the charged political atmosphere around the election, stemming from former President Yoon Suk Yeol’s brief martial law declaration and subsequent impeachment trial, which has deeply divided the East Asian democracy.

    Similarly, Naver, South Korea’s biggest search engine, has disabled the autocomplete feature on searches for the candidates, a common practice for the tech giant during an election cycle, according to the BBC.

    The six presidential candidates’ names that Starbucks has banned are: Lee Jae-myung, from the country’s liberal Democratic Party (DP); Kim Moon-soo, from former president Yoon Suk Yeol’s conservative People Power Party (PPP); and Lee Jun-seok, Kwon Young-kook, Hwang Kyo-ahn, and Song Jin-ho.

    As Fast Company previously reported, Starbucks recently posted “disappointing” earnings results for the second quarter of fiscal 2025 that ended on March 30. Unlike in the previous quarter, Starbucks did not beat analyst revenue expectations of $8.83 billion and an adjusted earnings per share (EPS) of 49 cents, according to Yahoo Finance. Instead, the company posted revenue of $8.76 billion and an adjusted EPS of 41 cents. One key metric, U.S. comparable store sales, declined 2% in Q2.

    Shares in Starbucks Corporation (NASDAQ: SBUX) were trading up about 1% on Friday.
  • Overwatch 2’s new hero bans probably mean changes for Sombra, Zarya, and others

    Blizzard Entertainment rolled out its hero ban feature for Overwatch 2 in April, and if you’ve been playing any competitive games during season 16, you’ve probably noticed that certain heroes (e.g., Sombra, Zarya, Symmetra, Ana, Mercy) are among the most consistently banned characters. Newly published data from Blizzard shows that to be the case, with some qualifications, and it seems the frequency of banning those heroes will ultimately have some consequences. Some heroes may get rebalanced or more substantially changed, the developer says, partly as a result of their consistent bans.

    “Sombra leads the PC pack with an impressive 85% ban rate, followed by Zarya at 59% and Doomfist at 43%,” Blizzard said in a blog post. “On consoles, Sombra is banned even more, with a 93% rate. She’s followed by Zarya at 57% and Symmetra at 23%.”

    Gavin Winter, a senior systems designer at Blizzard, says that the past month’s worth of hero ban data will be used “to help inform balance changes” in Overwatch 2. “Heroes that are more niche or map specific may need to become more generalized if they’re always banned where they perform well,” Winter said. “Heroes that are very unpopular might see more adjustments regardless of performance. Ultimately though, there aren’t any specific changes in the works based on this data yet, and Hero Ban data alone would never dictate how Heroes are balanced.”

    Exactly what that means for characters like Zarya and Symmetra is obviously unclear, but if a character like Sombra is unavailable to play in 85% of competitive games of Overwatch 2 because players hate facing her, that’s a big problem. What’s the point of investing in a character if you can only play them competitively so rarely?

    That said, minus some tie-breaking decision-making quirks in the hero ban system, Blizzard is “happy with how it works overall.” Winter said, “We believe the system is meeting most of its goals, and we’re excited to see how Hero Bans evolve over time, especially once we start releasing more data about our Hero win rates and pick rates in the future!”

    Blizzard will give players more control over how matches play in season 17, due in late June, when it rolls out a new map-voting feature.
  • Epic Games Addresses AI-Generated Content Plaguing Fab

    Earlier this week, we reported on a Fab user who uploaded over 38,000 AI-generated assets over a short period of time, drawing attention to the broader problem of AI content flooding Epic Games' new marketplace. After Ronan Mahon's original Twitter post gained traction and the user's total uploads climbed to 41,000, Epic finally addressed the matter, issuing a low-profile, easy-to-miss statement on the Unreal Engine Forums.

    In his statement, Epic's Senior Director of Creator & Developer Experience, Sjoerd De Jong, apologized for the "degradation of the Fab experience everyone experienced," announcing that the user in question has been removed from Fab, along with a series of new rules the company plans to implement to prevent similar situations from happening again.

    Going forward, Epic will require creators to indicate during the publishing process whether an asset was created using generative AI, while also adding automated tools to help detect AI-generated assets and exploring spam-prevention systems, such as daily upload limits. Additionally, the studio is updating its content reporting form to give the community a clearer way to report assets suspected of being AI-made but not properly marked as Created With AI.

    De Jong also noted that users who wish to hide assets marked as Created With AI can do so through the content preferences menu, accessible via "the button found in the top right corner of search results."

    Unfortunately, even though many Fab users – as in, real flesh-and-blood human artists and game developers – expressed their desire for AI content to be removed entirely from the marketplace, or for Epic to follow Cubebrush's approach by making AI content accessible only via direct links and otherwise hidden from the platform, the director reaffirmed that AI will remain on Fab.

    "We want Fab to be a welcoming place for creative expression, from first-time creations to advanced projects, and from original creations to AI-generated work," De Jong said. "Fab's goal is to be a place where creators can easily find what they're looking for, of the quality and style they're seeking, and be confident in their purchases."

    Due to this, along with the Created With AI filter not being toggled on by default, the reaction to the announcement was mixed, with some users praising Epic for the changes and others criticizing the company for taking a half-measure instead of banning AI entirely. "It's not welcoming, it's hurtful, and it's painful, and allowing AI is a slap in the face to everyone who works hard in their trade," one user commented, responding to De Jong's words.

    Read the full statement here.
  • Musk’s DOGE used Meta’s Llama 2—not Grok—for gov’t slashing, report says

    Weapon of choice?

    Grok apparently wasn't an option.

    Ashley Belanger – May 22, 2025 5:12 pm

    An outdated Meta AI model was apparently at the center of the Department of Government Efficiency's initial ploy to purge parts of the federal government.
    Wired reviewed materials showing that affiliates of Elon Musk's DOGE working in the Office of Personnel Management "tested and used Meta’s Llama 2 model to review and classify responses from federal workers to the infamous 'Fork in the Road' email that was sent across the government in late January."
    The "Fork in the Road" memo seemed to copy a memo that Musk sent to Twitter employees, giving federal workers the choice to be "loyal"—and accept the government's return-to-office policy—or else resign. At the time, it was rumored that DOGE was feeding government employee data into AI, and Wired confirmed that records indicate Llama 2 was used to sort through responses and see how many employees had resigned.
    Llama 2 is perhaps best known for being part of another scandal. In November, Chinese researchers used Llama 2 as the foundation for an AI model used by the Chinese military, Reuters reported. Responding to the backlash, Meta told Reuters that the researchers' reliance on a “single” and “outdated” version of Llama was “unauthorized,” then promptly reversed policies banning military uses and opened up its AI models for US national security applications, TechCrunch reported.
    "We are pleased to confirm that we’re making Llama available to US government agencies, including those that are working on defense and national security applications, and private sector partners supporting their work," a Meta blog said. "We’re partnering with companies including Accenture, Amazon Web Services, Anduril, Booz Allen, Databricks, Deloitte, IBM, Leidos, Lockheed Martin, Microsoft, Oracle, Palantir, Scale AI, and Snowflake to bring Llama to government agencies."
    Because Meta's models are open-source, they "can easily be used by the government to support Musk’s goals without the company’s explicit consent," Wired suggested.

    It's hard to track where Meta's models may have been deployed in government so far, and it's unclear why DOGE relied on Llama 2 when Meta has made advancements with Llama 3 and 4.
    Not much is known about DOGE's use of Llama 2. Wired's review of records showed that DOGE deployed the model locally, "meaning it’s unlikely to have sent data over the Internet," which was a privacy concern that many government workers expressed.
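    (Wired did not publish DOGE's code, so the Python sketch below is purely hypothetical. It only illustrates what a locally run Llama 2 pass over email replies could look like, assuming the Hugging Face `transformers` library, the meta-llama/Llama-2-7b-chat-hf checkpoint, and made-up labels; because the model runs on the local machine, no reply text is sent over the internet.)

        # Hypothetical illustration only; not DOGE's actual code or prompts.
        # Assumes `transformers` plus `accelerate`, and license access to the Llama 2 weights.
        from transformers import pipeline

        classifier = pipeline(
            "text-generation",
            model="meta-llama/Llama-2-7b-chat-hf",  # assumed checkpoint
            device_map="auto",
        )

        PROMPT = (
            "Classify the following reply to the 'Fork in the Road' email as "
            "RESIGNED or NOT_RESIGNED. Answer with one word.\n\nReply: {reply}\nLabel:"
        )

        def classify(reply: str) -> str:
            # Greedy decoding keeps the one-word label deterministic.
            out = classifier(PROMPT.format(reply=reply), max_new_tokens=4, do_sample=False)
            return out[0]["generated_text"].split("Label:")[-1].strip()

        print(classify("I accept the deferred resignation offer."))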
    In an April letter sent to Russell Vought, director of the Office of Management and Budget, more than 40 lawmakers demanded a probe into DOGE's AI use, which, they warned—alongside "serious security risks"—could "have the potential to undermine successful and appropriate AI adoption."
    That letter called out a DOGE staffer and former SpaceX employee who supposedly used Musk’s xAI Grok-2 model to create an "AI assistant," as well as the use of a chatbot named "GSAi"—"based on Anthropic and Meta models"—to analyze contract and procurement data. DOGE has also been linked to software called AutoRIF that supercharges mass firings across the government.
    In particular, the letter emphasized the "major concerns about security" swirling around DOGE's use of "AI systems to analyze emails from a large portion of the two million person federal workforce describing their previous week’s accomplishments," which they said lacked transparency.
    Those emails came weeks after the "Fork in the Road" emails, Wired noted, asking workers to outline weekly accomplishments in five bullet points. Workers fretted over responses, worried that DOGE might be asking for sensitive information without security clearances, Wired reported.
    Wired could not confirm if Llama 2 was also used to parse these email responses, but federal workers told Wired that if DOGE was "smart," then they'd likely "reuse their code" from the "Fork in the Road" email experiment.

    Why didn’t DOGE use Grok?
    It seems that Grok, Musk's AI model, wasn't an option for DOGE's task because it was only available as a proprietary model in January. Moving forward, DOGE may rely more frequently on Grok, Wired reported, as Microsoft announced this week that it would start hosting xAI’s Grok 3 models in its Azure AI Foundry, The Verge reported, which opens the models up for more uses.
    In their letter, lawmakers urged Vought to investigate Musk's conflicts of interest, while warning of potential data breaches and declaring that AI, as DOGE had used it, was not ready for government.
    "Without proper protections, feeding sensitive data into an AI system puts it into the possession of a system’s operator—a massive breach of public and employee trust and an increase in cybersecurity risks surrounding that data," lawmakers argued. "Generative AI models also frequently make errors and show significant biases—the technology simply is not ready for use in high-risk decision-making without proper vetting, transparency, oversight, and guardrails in place."
    Although Wired's report seems to confirm that DOGE did not send sensitive data from the "Fork in the Road" emails to an external source, lawmakers want much more vetting of AI systems to deter "the risk of sharing personally identifiable or otherwise sensitive information with the AI model deployers."
    A seeming fear is that Musk may start using his own models more, benefiting from government data his competitors cannot access, while potentially putting that data at risk of a breach. They're hoping that DOGE will be forced to unplug all its AI systems, but Vought seems more aligned with DOGE, writing in his AI guidance for federal use that "agencies must remove barriers to innovation and provide the best value for the taxpayer."
    "While we support the federal government integrating new, approved AI technologies that can improve efficiency or efficacy, we cannot sacrifice security, privacy, and appropriate use standards when interacting with federal data," their letter said. "We also cannot condone use of AI systems, often known for hallucinations and bias, in decisions regarding termination of federal employment or federal funding without sufficient transparency and oversight of those models—the risk of losing talent and critical research because of flawed technology or flawed uses of such technology is simply too high."

    Ashley Belanger
    Senior Policy Reporter

    Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

    19 Comments
    #musks #doge #used #metas #llama
    Musk’s DOGE used Meta’s Llama 2—not Grok—for gov’t slashing, report says
    Weapon of choice? Musk’s DOGE used Meta’s Llama 2—not Grok—for gov’t slashing, report says Grok apparently wasn't an option. Ashley Belanger – May 22, 2025 5:12 pm | 19 Credit: Anadolu / Contributor | Anadolu Credit: Anadolu / Contributor | Anadolu Story text Size Small Standard Large Width * Standard Wide Links Standard Orange * Subscribers only   Learn more An outdated Meta AI model was apparently at the center of the Department of Government Efficiency's initial ploy to purge parts of the federal government. Wired reviewed materials showing that affiliates of Elon Musk's DOGE working in the Office of Personnel Management "tested and used Meta’s Llama 2 model to review and classify responses from federal workers to the infamous 'Fork in the Road' email that was sent across the government in late January." The "Fork in the Road" memo seemed to copy a memo that Musk sent to Twitter employees, giving federal workers the choice to be "loyal"—and accept the government's return-to-office policy—or else resign. At the time, it was rumored that DOGE was feeding government employee data into AI, and Wired confirmed that records indicate Llama 2 was used to sort through responses and see how many employees had resigned. Llama 2 is perhaps best known for being part of another scandal. In November, Chinese researchers used Llama 2 as the foundation for an AI model used by the Chinese military, Reuters reported. Responding to the backlash, Meta told Reuters that the researchers' reliance on a “single" and "outdated" was "unauthorized," then promptly reversed policies banning military uses and opened up its AI models for US national security applications, TechCrunch reported. "We are pleased to confirm that we’re making Llama available to US government agencies, including those that are working on defense and national security applications, and private sector partners supporting their work," a Meta blog said. "We’re partnering with companies including Accenture, Amazon Web Services, Anduril, Booz Allen, Databricks, Deloitte, IBM, Leidos, Lockheed Martin, Microsoft, Oracle, Palantir, Scale AI, and Snowflake to bring Llama to government agencies." Because Meta's models are open-source, they "can easily be used by the government to support Musk’s goals without the company’s explicit consent," Wired suggested. It's hard to track where Meta's models may have been deployed in government so far, and it's unclear why DOGE relied on Llama 2 when Meta has made advancements with Llama 3 and 4. Not much is known about DOGE's use of Llama 2. Wired's review of records showed that DOGE deployed the model locally, "meaning it’s unlikely to have sent data over the Internet," which was a privacy concern that many government workers expressed. In an April letter sent to Russell Vought, director of the Office of Management and Budget, more than 40 lawmakers demanded a probe into DOGE's AI use, which, they warned—alongside "serious security risks"—could "have the potential to undermine successful and appropriate AI adoption." That letter called out a DOGE staffer and former SpaceX employee who supposedly used Musk’s xAI Grok-2 model to create an "AI assistant," as well as the use of a chatbot named "GSAi"—"based on Anthropic and Meta models"—to analyze contract and procurement data. DOGE has also been linked to a software called AutoRIF that supercharges mass firings across the government. 
In particular, the letter emphasized the "major concerns about security" swirling DOGE's use of "AI systems to analyze emails from a large portion of the two million person federal workforce describing their previous week’s accomplishments," which they said lacked transparency. Those emails came weeks after the "Fork in the Road" emails, Wired noted, asking workers to outline weekly accomplishments in five bullet points. Workers fretted over responses, worried that DOGE might be asking for sensitive information without security clearances, Wired reported. Wired could not confirm if Llama 2 was also used to parse these email responses, but federal workers told Wired that if DOGE was "smart," then they'd likely "reuse their code" from the "Fork in the Road" email experiment. Why didn’t DOGE use Grok? It seems that Grok, Musk's AI model, wasn't available for DOGE's task because it was only available as a proprietary model in January. Moving forward, DOGE may rely more frequently on Grok, Wired reported, as Microsoft announced it would start hosting xAI’s Grok 3 models in its Azure AI Foundry this week, The Verge reported, which opens the models up for more uses. In their letter, lawmakers urged Vought to investigate Musk's conflicts of interest, while warning of potential data breaches and declaring that AI, as DOGE had used it, was not ready for government. "Without proper protections, feeding sensitive data into an AI system puts it into the possession of a system’s operator—a massive breach of public and employee trust and an increase in cybersecurity risks surrounding that data," lawmakers argued. "Generative AI models also frequently make errors and show significant biases—the technology simply is not ready for use in high-risk decision-making without proper vetting, transparency, oversight, and guardrails in place." Although Wired's report seems to confirm that DOGE did not send sensitive data from the "Fork in the Road" emails to an external source, lawmakers want much more vetting of AI systems to deter "the risk of sharing personally identifiable or otherwise sensitive information with the AI model deployers." A seeming fear is that Musk may start using his own models more, benefiting from government data his competitors cannot access, while potentially putting that data at risk of a breach. They're hoping that DOGE will be forced to unplug all its AI systems, but Vought seems more aligned with DOGE, writing in his AI guidance for federal use that "agencies must remove barriers to innovation and provide the best value for the taxpayer." "While we support the federal government integrating new, approved AI technologies that can improve efficiency or efficacy, we cannot sacrifice security, privacy, and appropriate use standards when interacting with federal data," their letter said. "We also cannot condone use of AI systems, often known for hallucinations and bias, in decisions regarding termination of federal employment or federal funding without sufficient transparency and oversight of those models—the risk of losing talent and critical research because of flawed technology or flawed uses of such technology is simply too high." Ashley Belanger Senior Policy Reporter Ashley Belanger Senior Policy Reporter Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience. 19 Comments #musks #doge #used #metas #llama
  • Meta faces increasing scrutiny over widespread scam ads

    Published May 22, 2025 10:00 a.m. EDT

    Meta, the parent company of Facebook and Instagram, is under fire after a major report revealed that thousands of fraudulent ads have been allowed to run on its platforms. According to the Wall Street Journal, Meta accounted for nearly half of all scam complaints tied to Zelle transactions at JPMorgan Chase between mid-2023 and mid-2024. Other banks have also reported a high number of fraud cases linked to Meta's platforms.

    Why are scam ads so widespread?
    The problem of scam ads on Facebook has grown rapidly in recent years. Experts point to the rise of cryptocurrency schemes, AI-generated content and organized criminal groups operating from Southeast Asia. These scams range from fake investment opportunities to misleading product offers and even the sale of nonexistent puppies.

    One example involves Edgar Guzman, a legitimate business owner in Atlanta, whose warehouse address was used by scammers in more than 4,400 Facebook and Instagram ads. These ads promised deep discounts on bulk merchandise, tricking people into sending money for products that never existed. "What sucks is we have to break it to people that they've been scammed. We don't even do online sales," Guzman told reporters.

    Meta's response: Is it enough?
    Meta says it's fighting back with new technology and partnerships, including facial-recognition tools and collaborations with banks and other tech companies. A spokesperson described the situation as an "epidemic of scams" and insisted that Meta is taking aggressive action, removing more than 2 million accounts linked to scam centers in several countries this year alone.

    However, insiders tell a different story. Current and former Meta employees say the company has been reluctant to make it harder for advertisers to buy ads, fearing it could hurt the company's bottom line. Staff reportedly tolerated between eight and 32 fraud "strikes" before banning accounts, and scam enforcement was deprioritized to avoid losing ad revenue.

    The human cost of inaction
    Victims of these scams often lose hundreds or even thousands of dollars. In one case, fake ads promised free spice racks from McCormick & Co. for just a small shipping fee, only to steal credit card details and rack up fraudulent charges. Another common scam involves fake puppy sales, with victims sending deposits for pets that never arrive. Some scam operations are even linked to human trafficking, with criminal groups forcing kidnapped victims to run online fraud schemes under threat of violence.

    Legal and ethical questions for Meta
    Meta maintains that it is not legally responsible for fraudulent content on its platforms, citing Section 230 of federal law, which protects tech companies from liability for user-generated content. In court filings, Meta has argued that it "does not owe a duty to users" when it comes to policing fraud. Meanwhile, a class-action lawsuit over allegedly inflated ad reach metrics is moving forward, putting even more pressure on Meta to address transparency and accountability.

    How to protect yourself from scam ads
    Staying safe online takes a little extra effort, but it's well worth it. Here are some steps you can follow to avoid falling victim to scam ads.

    1. Check the source and use strong antivirus software: Look for verified pages and official websites. Scammers often copy the names and logos of trusted brands, but the web address or page details may be off. Always double-check the URL for slight misspellings or extra characters (a small illustrative check appears after this article) and avoid clicking links in ads if you're unsure about their legitimacy. The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

    2. Be skeptical of deals that seem too good to be true: If an ad offers products at an unbelievable price or promises huge returns, pause and investigate before clicking. Scammers often use flashy discounts or urgent language to lure people in quickly. Take a moment to think before you act, and remember that if something sounds impossible, it probably is.

    3. Research the seller: Search for reviews and complaints about the company or individual. If you can't find any credible information, it's best to avoid the offer. A quick online search can reveal if others have reported scams or had bad experiences, and legitimate businesses usually have a track record you can verify.

    4. Consider using a personal data removal service: There are companies that can help remove your personal info from data brokers and people-search sites. This means less of your data floating around for scammers to find and use. While these services usually charge a fee, they can save you a lot of time and hassle compared to doing it all yourself. Over time, you might notice fewer spam calls and emails and even a lower risk of identity theft.

    5. Never share sensitive information: Don't enter your credit card or bank details on unfamiliar sites. If you're asked for personal information, double-check the legitimacy of the request. Scammers may ask for sensitive data under the guise of "verifying your identity" or processing a payment, but reputable companies will never ask for this through insecure channels.

    6. Keep your devices updated: Keeping your software updated adds an extra layer of protection against the latest threats. Updates often include important security patches that fix vulnerabilities hackers might try to exploit. By regularly updating your devices, you help close those security gaps and keep your personal information safer from scammers and malware.

    7. Report suspicious ads: If you see a scam ad on Facebook or Instagram, report it using the platform's tools. This helps alert others and puts pressure on Meta to take action. Reporting is quick and anonymous, and it plays a crucial role in helping platforms identify patterns and remove harmful content.

    8. Monitor your accounts: Regularly check your bank and credit card statements for unauthorized transactions, especially after making online purchases. Early detection can help you limit the damage if your information is compromised, and most banks have fraud protection services that can assist you if you spot something suspicious.

    By following these steps, you can better protect yourself and your finances from online scams. Staying alert and informed is your best defense in today's digital world.

    Kurt's key takeaways
    The mess with scam ads on Meta's platforms shows why it's important to look out for yourself online. Meta says it's working on the problem, but many people think it's not moving fast enough. By staying careful, questioning suspicious offers and using good security tools, you can keep yourself safer. Until the platforms step up their game, protecting yourself is the smartest move you can make.

    Should Meta be doing more to protect its users from scam ads, even if it means making changes that could affect its advertising revenue? Let us know by writing to us at Cyberguy.com/Contact.

    Kurt "CyberGuy" Knutsson is an award-winning tech journalist with a deep love of technology, gear and gadgets that make life better, contributing to Fox News and FOX Business and appearing mornings on "FOX & Friends."
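    The URL check in step 1 can be partly automated. Below is a minimal, illustrative sketch in Python; it is not from the article, and the trusted-domain list and similarity threshold are assumptions chosen only to demonstrate the idea of flagging lookalike domains.

        # Illustrative only: flag ad URLs whose domain is a near-miss of a trusted
        # brand domain (one-character swaps, extra characters). The trusted list and
        # the 0.85 similarity threshold are assumptions, not recommendations.
        from difflib import SequenceMatcher
        from urllib.parse import urlparse

        TRUSTED_DOMAINS = ("mccormick.com", "facebook.com", "instagram.com")

        def looks_like_spoof(url: str, threshold: float = 0.85) -> bool:
            """Return True when the URL's domain resembles, but is not, a trusted domain."""
            domain = urlparse(url).netloc.lower().removeprefix("www.")
            for trusted in TRUSTED_DOMAINS:
                if domain == trusted:
                    return False  # exact match: the genuine domain
                if SequenceMatcher(None, domain, trusted).ratio() >= threshold:
                    return True   # close but not exact: likely a lookalike
            return False

        print(looks_like_spoof("https://www.mcc0rmick.com/free-spice-rack"))  # True
        print(looks_like_spoof("https://www.mccormick.com/"))                 # False

    A real filter would need a much larger brand list and smarter normalization, but even a simple similarity check catches the kind of one-character swaps scam ads rely on.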
  • Fortnite to bring back this ultra rare skin after five years


    Epic Games has finally brought Fortnite back to iOS in the United States. The game was banned from Apple’s App Store for nearly five years, leaving iOS players stuck in Chapter 2 – Season 3. Now that the dispute between the two companies has nearly come to an end, a new era can begin.
    To celebrate Fortnite's return to iOS, Epic will bring back its iconic Tart Tycoon skin. The anti-Apple skin debuted at the start of the legal dispute between Apple and Epic, so it makes sense for the developer to revive it now. According to a leak, the skin will be free once again.
    Fortnite is bringing back its ultra rare Tart Tycoon skin
    The trouble between Apple and Epic Games started brewing in August 2020 when the Fortnite developer added a third-party payment processor to the game. This resulted in Apple taking Fortnite off the App Store and banning it. As a result, Epic took the tech giant to court and began a lengthy legal battle that lasted for almost five years.
    Now, the game has returned to iPhones and iPads, but only in the US. Earlier this year, Epic brought it back in the European Union, and the United Kingdom is expected to get the game in the second half of 2025. According to Hypex, the most popular Fortnite leaker, Epic will bring back Tart Tycoon, one of the rarest skins of all time, to celebrate the return of the game.
    Tart Tycoon was first available in 2020.
    The skin was released in August 2020, and players were able to win it by scoring 10 points in the #FreeFortnite Cup. It had unlimited quantities, so it was available to everyone who played a game or two. In addition to this, Epic Games also gave out some valuable rewards to more than 20,000 players, including a hat, Xbox One X, Nintendo Switch, phones, and more.
    The start date and time of the Tart Tycoon Cup haven’t been leaked or revealed yet. However, with the Galactic Battle season ending in early June, we expect it to come out within a week.

    Fortnite
    Platform: Android, iOS, macOS, Nintendo Switch, PC, PlayStation 4, PlayStation 5, Xbox One, Xbox Series S/X
    Genre: Action, Massively Multiplayer, Shooter
    VideoGamer score: 9
