Reddit
The heart of the internet, where millions gather for conversation and community.
Recent Updates
  • WWW.THEGUARDIAN.COM
    OpenAI whistleblower who died was being considered as witness against company
    Suchir Balaji, a former OpenAI engineer and whistleblower who helped train the artificial intelligence systems behind ChatGPT and later said he believed those practices violated copyright law, has died, according to his parents and San Francisco officials. He was 26.

Balaji worked at OpenAI for nearly four years before quitting in August. He had been well-regarded by colleagues at the San Francisco company, where a co-founder this week called him one of OpenAI's strongest contributors who was essential to developing some of its products.

"We are devastated to learn of this incredibly sad news and our hearts go out to Suchir's loved ones during this difficult time," said a statement from OpenAI.

Balaji was found dead in his San Francisco apartment on 26 November in what police said appeared to be a suicide. No evidence of foul play was found during the initial investigation. The city's chief medical examiner's office confirmed the manner of death to be suicide.

His parents, Poornima Ramarao and Balaji Ramamurthy, said they are still seeking answers, describing their son as a happy, smart and brave young man who loved to hike and recently had returned from a trip with friends.

Balaji grew up in the San Francisco Bay Area and first arrived at the fledgling AI research lab for a 2018 summer internship while studying computer science at the University of California, Berkeley. He returned a few years later to work at OpenAI, where one of his first projects, called WebGPT, helped pave the way for ChatGPT.

"Suchir's contributions to this project were essential, and it wouldn't have succeeded without him," said OpenAI co-founder John Schulman in a social media post memorializing Balaji. Schulman, who recruited Balaji to his team, said what had made him such an exceptional engineer and scientist was his attention to detail and ability to notice subtle bugs or logical errors.

"He had a knack for finding simple solutions and writing elegant code that worked," Schulman wrote. "He'd think through the details of things carefully and rigorously."

Balaji later shifted to organizing the huge datasets of online writings and other media used to train GPT-4, the fourth generation of OpenAI's flagship large language model and a basis for the company's famous chatbot. It was that work that eventually caused Balaji to question the technology he helped build, especially after newspapers, novelists and others began suing OpenAI and other AI companies for copyright infringement.

He first raised his concerns with the New York Times, which reported them in an October profile of Balaji.

He later told the Associated Press he would "try to testify" in the strongest copyright infringement cases and considered a lawsuit brought by the New York Times last year to be the "most serious." Times lawyers named him in an 18 November court filing as someone who might have "unique and relevant documents" supporting allegations of OpenAI's willful copyright infringement.

His records were also sought by lawyers in a separate case brought by book authors, including the comedian Sarah Silverman, according to a court filing.

"It doesn't feel right to be training on people's data and then competing with them in the marketplace," Balaji told the AP in late October. "I don't think you should be able to do that. I don't think you are able to do that legally."

He told the AP that he had grown gradually more disillusioned with OpenAI, especially after the internal turmoil that led its board of directors to fire and then rehire the CEO, Sam Altman, last year. Balaji said he was broadly concerned about how its commercial products were rolling out, including their propensity for spouting false information known as hallucinations.

But of the "bag of issues" he was concerned about, he said, he was focusing on copyright as the one it was "actually possible to do something about."

He acknowledged that it was an unpopular opinion within the AI research community, which is accustomed to pulling data from the internet, but said "they will have to change and it's a matter of time."

He had not been deposed, and it's unclear to what extent his revelations will be admitted as evidence in any legal cases after his death. He also published a personal blog post with his opinions about the topic.

Schulman, who resigned from OpenAI in August, said he and Balaji coincidentally left on the same day and celebrated with fellow colleagues that night with dinner and drinks at a San Francisco bar. Another of Balaji's mentors, co-founder and chief scientist Ilya Sutskever, had left OpenAI several months earlier, which Balaji saw as another impetus to leave.

Schulman said Balaji had told him earlier this year of his plans to leave OpenAI and that Balaji didn't think that better-than-human AI known as artificial general intelligence was "right around the corner, like the rest of the company seemed to believe." The younger engineer expressed interest in getting a doctorate and exploring "some more off-the-beaten-path ideas about how to build intelligence," Schulman said.

Balaji's family said a memorial is being planned for later this month at the India Community Center in Milpitas, California, not far from his hometown of Cupertino.

In the US, you can call or text the National Suicide Prevention Lifeline at 988, chat on 988lifeline.org, or text HOME to 741741 to connect with a crisis counselor. In the UK and Ireland, Samaritans can be contacted on freephone 116 123, or email jo@samaritans.org or jo@samaritans.ie. In Australia, the crisis support service Lifeline is 13 11 14. Other international helplines can be found at befrienders.org.

The Associated Press and OpenAI have a licensing and technology agreement allowing OpenAI access to part of the AP's text archives.
  • WWW.THEDRIVE.COM
    US senators call out execs at Ford, GM, Tesla et al. for opposing right to repair
    The bipartisan group says automakers are hypocrites motivated by profits and not privacy protection
    Third Eye Images via Getty Images

The rich get richer, except this time Big Government is fighting for the little guy. Yes, you read that correctly and, no, it makes no sense to me either. Apparently, the exception is right-to-repair laws, which are being pushed by (wait for it) a bipartisan effort that is literally scolding automakers to quit gatekeeping everything and give consumers access to parts, services, and their own personal vehicle data.

According to Ars Technica, on December 19, U.S. senators sent letters to several automakers. Hardly love notes: the head honchos of Ford, General Motors, Honda, Nissan, Stellantis, Subaru, Tesla, Toyota, and Volkswagen were put on blast and accused of being money-grabbing hypocrites. Okay, so, the pot is calling the kettle black, but the pot also isn't wrong.

Led by Senators Jeff Merkley (D-OR), Elizabeth Warren (D-MA), and Josh Hawley (R-MO), the bipartisan letter notes that 70 percent of car parts and services currently come from independent outlets. At the same time, OEM-supported dealerships and suppliers are generally rated poorly, particularly on pricing and affordability.

"We need to hit the brakes on automakers stealing your data and undermining your right-to-repair," said Senator Merkley in a statement. "Time and again, these billionaire corporations have a double standard when it comes to your privacy and security: claiming that sharing vehicle data with repair shops poses cybersecurity risks while selling consumer data themselves."

Automakers argue that the right-to-repair movement poses a safety risk because it makes an open data platform a requirement, as in Massachusetts. Big Auto says giving third parties access to what should be proprietary manufacturer data opens up a can of cybersecurity and privacy worms. The trade group formerly known as the Auto Alliance went so far as to create a scare campaign to dissuade voters, suggesting that data access would expose people, especially women, to increased stalking and violence.

Senators call BS, pointing out that OEMs already share sensitive vehicle and owner information with insurance companies and other third parties, as long as it benefits them. The bipartisan group says at least 37 auto companies have been identified as part of what amounts to a connected-car collective whose focus is monetizing the vehicle data they claim must be kept close to the vest.

"Right-to-repair laws support consumer choice and prevent automakers from using restrictive repair laws to their financial advantage," reads the non-love letter. "It is clear that the motivation behind automotive companies' avoidance of complying with right-to-repair laws is not due to a concern for consumer security or privacy, but instead a hypocritical, profit-driven reaction."

What's next? Some holiday homework. Per the letter, automakers have a January 6 deadline to submit answers to a multi-part questionnaire. The senators ask how vehicle and driver data are collected, stored, secured, and shared. OEMs are also tasked with listing all cybersecurity breaches within the last five years, as well as fessing up to their anti-right-to-repair lobbying, including the dollar amount spent on such efforts. So far, no automaker has publicly responded to the lawmakers' letter or addressed its concerns.
  • WWW.TECHSPOT.COM
    AirPods sales totaled over $18 billion last year, more than all of Nintendo | Earbuds likely to become Apple's 3rd biggest product behind iPhone and Mac
    The big picture: Apple introduced AirPods eight years ago this month. Comparing a Bloomberg analysis with financial results from other companies reveals how staggeringly successful Apple's wireless earbuds have become. A testament to the scale of the world's most valuable company, the accessory takes in more revenue each year than several prominent tech companies.

Bloomberg estimates that AirPods sales have exceeded $18 billion yearly since 2021. To put that number in perspective, it surpasses Nintendo's reported total net sales for 2023 (roughly $10 billion). Furthermore, PCMag recently calculated that AirPods generated more revenue than the total annual earnings of companies like Spotify, eBay, Airbnb, DoorDash, and OpenAI. Although the numbers only represent revenue and don't reflect net profit, they indicate the rising importance of AirPods within the Cupertino giant's product lineup.

Bloomberg projects that AirPods will likely begin outselling iPads before the decade's end, becoming Apple's third most lucrative product behind iPhones and Macs. Price is the primary factor, as AirPods are far cheaper than iPads and Macs. However, the high attachment rate of AirPods among iPhone owners also has a significant impact. Approximately 40 percent of iPhone users also used AirPods in 2022. Since there are about 1.5 billion active iPhones, that's 600 million AirPods users. The proportion could increase to 52 percent by 2027 and 60 percent by 2030, signifying a 12 percent yearly sales increase for the earbuds, assuming iPhone sales increase by 5 percent in that timeframe. AirPods ownership skews toward teenagers and young adults, as around 62 percent of Gen Z customers between ages 18 and 24 own them.

Investigating the frequency with which users lose and damage AirPods reveals another shocking statistic. According to CBS, customers spend over half a billion dollars each year replacing them. TechSpot staff can attest that the tiny buds are squirrelly, especially when the case hits a hard surface. The buds eject from the housing like bullets.

AirPods initially launched as a pair of slightly above-average wireless earbuds seamlessly connecting to users' iPhones. Later models gained significantly expanded functionality. Apple released its fourth-generation AirPods in September, which include voice isolation and noise cancellation features. Furthermore, the second-generation AirPods Pro recently received FDA certification for use as over-the-counter hearing aids. At $249, they are more expensive than most earbuds but cheaper than most traditional hearing aids.

Apple plans to release the third-generation AirPods Pro in 2025. Rumored features include a new design and improved noise management. Additional health-related functionality might come to subsequent models in 2026 and beyond.

Image credit: Trusted Reviews
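The attachment-rate figures above imply user counts that are easy to reproduce. A quick sketch, using the article's estimates (about 1.5 billion active iPhones, and attachment rates of 40, 52, and 60 percent); note the projections here naively apply the future rates to today's installed base, whereas the article assumes iPhone sales keep growing:

```python
active_iphones = 1_500_000_000  # about 1.5 billion active iPhones (the article's figure)

def airpods_users(attachment_pct):
    """Implied AirPods user count for a given iPhone attachment rate (in percent)."""
    # Integer math keeps the result exact for whole-percent rates.
    return active_iphones * attachment_pct // 100

print(airpods_users(40))  # 2022 rate -> 600000000, matching the article's 600 million
print(airpods_users(52))  # projected 2027 rate on today's installed base
print(airpods_users(60))  # projected 2030 rate on today's installed base
```

At the 2030 rate, even a frozen installed base would imply roughly 900 million users, which is why a modest rate increase translates into large absolute sales growth.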
  • Albania bans TikTok for a year after killing of teenager
  • THECONVERSATION.COM
    Yes, I am a human: bot detection is no longer working and just wait until AI agents come along
    Authors: Irfan Mehmood, Associate Professor in Business Analytics and AI, University of Bradford; Kamran Mahroof, Associate Professor, Supply Chain Analytics, University of Bradford.

Disclosure statement: The authors do not work for, consult for, own shares in, or receive funding from any organisation that would benefit from this article, and have declared no affiliations beyond their research institution. The University of Bradford provides funding as a founding member of The Conversation UK.

You're running late at the airport and need to urgently access your account, only to be greeted by one of those frustrating tests: "Select all images with traffic lights" or "Type the letters you see in this box." You squint, you guess, but somehow you're wrong. You complete another test but still the site isn't satisfied. "Your flight is boarding now," the tannoy announces as the website gives you yet another puzzle. You swear at the screen, close your laptop and rush towards the gate.

Now, here's a thought to cheer you up: bots are now solving these puzzles in milliseconds using artificial intelligence (AI). How ironic. The tools designed to prove we're human are now obstructing us more than the machines they're supposed to be keeping at bay.

Welcome to the strange battle between bot detection and AI, which is set to get even more complicated in the coming years as technology continues to improve. So what does the future look like?

Captcha, which stands for Completely Automated Public Turing test to tell Computers and Humans Apart, was invented in the early 2000s by a team of computer scientists at Carnegie Mellon University in Pittsburgh. It was a simple idea: get internet users to prove their humanity via tasks they can easily complete, but which machines find difficult.

Machines were already causing havoc online. Websites were flooded with bots doing things like setting up fake accounts to buy up concert tickets, or posting automated comments to market fake Viagra or to entice users to take part in scams. Companies needed a way to stop this pernicious activity without losing legitimate users.

The early versions of Captcha were basic but effective. You'd see wavy, distorted letters and type them into a box. Bots couldn't read the text the way humans could, so websites stayed protected.

This went through several iterations in the years ahead. ReCaptcha was created in 2007 to add a second element in which you had to also key in a distorted word from an old book. Then in 2014, by now acquired by Google, came reCaptcha v2. This is the one that asks users to tick the "I am not a robot" box and often choose from a selection of pictures containing cats or bicycle parts, or whatever. Still the most popular today, Google gets paid by companies who use the service on their website.

How AI has outgrown the system

Today's AI systems can solve the challenges these Captchas rely on. They can read distorted text, so the wavy or squished letters from the original Captcha tests are easy for them. Thanks to natural language processing and machine learning, AI can decode even the messiest of words. Similarly, AI tools such as Google Vision and OpenAI's Clip can recognise hundreds of objects faster and more accurately than most humans. If a Captcha asks an AI to click all the buses in a picture selection, it can solve the task in fractions of a second, whereas it might take a human 10 to 15 seconds.

This isn't just a theoretical problem. Consider driving tests: waiting lists for tests in England are many months long, though you can get a much faster test by paying a higher fee to a black-market tout. The Guardian reported in July that touts commonly used automated software to book out all the test slots, while swapping candidates in and out to fit their ever-changing schedules.

In an echo of the situation 20 years ago, there are similar issues with tickets for things such as football matches. The moment tickets become available, bots overwhelm the system, bypassing Captchas, purchasing tickets in bulk and reselling them at inflated prices. Genuine users often miss out because they can't operate as quickly. Similarly, bots attack social media platforms, e-commerce websites and online forums. Fake accounts spread misinformation, post spam or grab limited items during sales. In many cases, Captcha is no longer able to stop these abuses.

What's happening now?

Developers are continually coming up with new ways to verify humans. Some systems, like Google's reCaptcha v3 (introduced in 2018), don't ask you to solve puzzles anymore. Instead, they watch how you interact with a website. Do you move your cursor naturally? Do you type like a person? Humans have subtle, imperfect behaviours that bots still struggle to mimic. Not everyone likes reCaptcha v3, because it raises privacy issues, the web company needs to assess user scores to determine who is a bot, and bots can beat the system anyway. There are alternatives that use similar logic, such as slider puzzles that ask users to move jigsaw pieces around, but these too can be overcome.

Some websites are now turning to biometrics to verify humans, such as fingerprint scans or voice recognition, while face ID is also a possibility. Biometrics are harder for bots to fake, but they come with their own problems: privacy concerns, expensive tech, and limited access for some users, say because they can't afford the relevant smartphone or can't speak because of a disability.

The imminent arrival of AI agents will add another layer of complexity. It will mean we increasingly want bots to visit sites and do things on our behalf, so web companies will need to start distinguishing between "good" bots and "bad" bots. This area still needs a lot more consideration, but digital authentication certificates are proposed as one possible solution.

In sum, Captcha is no longer the simple, reliable tool it once was. AI has forced us to rethink how we verify people online, and it's only going to get more challenging as these systems get smarter. Whatever becomes the next technological standard, it's going to have to be easy to use for humans, but one step ahead of the bad actors. So the next time you find yourself clicking on blurry traffic lights and getting infuriated, remember you're part of a bigger fight. The future of proving humanity is still being written, and the bots won't be giving up any time soon.
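The behavioural checks that systems like reCaptcha v3 rely on can be illustrated with a toy heuristic. The sketch below is hypothetical and is not Google's actual algorithm; it scores a cursor trace on two signals such systems are reported to use, variation in speed and curvature of the path, since naive bots tend to move in straight lines at constant velocity:

```python
import math

def humanness_score(points):
    """Score a cursor trace (list of (x, y, t) tuples) in [0, 1).

    Heuristic only: human cursor paths tend to curve and vary in speed;
    a trace that is perfectly straight and constant-speed scores near 0.
    """
    if len(points) < 3:
        return 0.0
    speeds, turns = [], []
    for (x0, y0, t0), (x1, y1, t1) in zip(points, points[1:]):
        dt = max(t1 - t0, 1e-6)
        speeds.append(math.hypot(x1 - x0, y1 - y0) / dt)
    for (x0, y0, _), (x1, y1, _), (x2, y2, _) in zip(points, points[1:], points[2:]):
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        turns.append(abs(a2 - a1))
    mean_speed = sum(speeds) / len(speeds)
    speed_var = sum((s - mean_speed) ** 2 for s in speeds) / len(speeds)
    speed_cv = math.sqrt(speed_var) / (mean_speed + 1e-6)  # variation in speed
    mean_turn = sum(turns) / len(turns)                    # curvature of path
    # Squash both signals into [0, 1): 0 = robotic, closer to 1 = human-like.
    return 1.0 - math.exp(-(speed_cv + mean_turn))

# A perfectly straight, constant-speed trace scores (near) zero.
robotic = [(i, i, i * 0.01) for i in range(20)]
print(humanness_score(robotic) < 0.1)  # True
```

A real deployment would combine many more signals (typing cadence, scroll behaviour, session history) and, as the article notes, determined bots can learn to fake all of them.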
  • Amazon delays return-to-office mandate for thousands of workers due to space
    Dec. 19, 2024, at 6:38 a.m. By Spencer Soper, Matt Day and John Gittelsohn, Bloomberg

Amazon.com won't have enough space for thousands of employees when they start returning to the office five days a week next month. The company recently told some personnel working in at least seven cities, including Austin, Dallas and Phoenix, that their return dates will be pushed back as much as four months, according to people familiar with the situation.

Seattle-area offices

It was unclear on Thursday morning if any of Amazon's Seattle-area offices will see return-to-office delays. In a statement, Amazon said "for the vast majority of Amazonians," buildings will be ready on Jan. 2, but "for some locations, there may be different timelines." The company said it is communicating directly with employees in those locations.

The delay is the latest twist in a return-to-office saga that has roiled Amazon's normally heads-down workforce. Some employees say they're unhappy about being asked to come in full-time when many of their tech industry peers have more flexible work arrangements.

Amazon employs more than 350,000 corporate employees worldwide, mostly in the U.S., and it's not clear precisely how many people are affected by the return-to-office delays. A company spokesperson said the vast majority of employees will have desks starting on Jan. 2. Employees in Dallas were recently told there wouldn't be sufficient space for them all to work five days a week in the office until March or April, one of the people said. Some workers in the company's Midtown Manhattan office in the Lord & Taylor building might not have space for full-time work until May, another person said. Amazon also notified employees in Atlanta, Nashville and Houston that it didn't have sufficient space for them all to return in January, Business Insider reported Monday.

When Chief Executive Officer Andy Jassy announced the aggressive return-to-work mandate in September, he and other executives said it was necessary to nurture an eroding company culture. But some employees suspect the mandate is an effort to thin the ranks and avoid layoffs and severance payments. Amazon denies this. Employees say they have proved in recent years that teams can be effective while working remotely. Some of those affected by the RTO delay reacted with relief, evidence that the five-day office mandate is widely unpopular.

For more than a year, most Amazon employees have been asked to badge in three days a week, though there are exceptions for teams and fully remote positions. Amazon didn't have enough seats ready for that initial return-to-office plan, including in Bellevue, where Amazon has focused much of its headquarters growth after building out its Seattle campus. Some workers say the company is still struggling to host people three days a week. In recent interviews, employees complained of working from shared desks, crowded corporate canteens and a lack of conference rooms for confidential calls or team meetings. The company has added a feature to its room reservation tool that requires workers to attest they actually plan to use the space, an apparent effort to crack down on squatters looking for a quiet place to work.

It's not an ideal moment to be seeking new office space. While vacancies soared as remote work surged during the pandemic, there's now a shortage of the high-quality space typically leased by tech companies. Amazon has been leasing temporary space from WeWork in New York and Silicon Valley in recent weeks, a WeWork spokesperson confirmed. Coming out of the pandemic, Amazon froze hiring and tapped the brakes on its own real estate development, pausing high-profile office projects in Bellevue, Nashville and at the company's second headquarters campus in Arlington, Virginia. Some of those projects have since resumed and could eventually ease the strain.

A spokesperson said that in most cases, the return-to-office delays are the result of reconfigurations of buildings that had been laid out to accommodate part-time remote workers, rather than a lack of available office space.

This story was originally published at bloomberg.com.
  • WWW.MSNBC.COM
    But his emails? Team Trump's private emails spark concerns
    Eight years after targeting Hillary Clinton's email protocols, Trump's transition team is relying on private servers instead of secure government accounts.
    Federal officials have spent years establishing and improving presidential transition processes, including making key resources available to incoming presidents and their teams. For example, as Donald Trump prepares to return to the White House, he and his transition operation have been offered official government communications accounts, including .gov email addresses, to conduct official business.

Politico reported, however, that the Republican president-elect and his team are overseeing a fully privatized operation, which is relying on private servers, laptops and cell phones instead of government-issued devices:

Federal officials say they're worried about sharing documents via email with Donald Trump's transition team because the incoming officials are eschewing government devices, email addresses and cybersecurity support, raising fears that they could potentially expose sensitive government data. The private emails have agency employees considering insisting on in-person meetings and document exchanges that they otherwise would have conducted electronically, according to two federal officials granted anonymity to discuss a sensitive situation. Their anxiety is particularly high in light of recent hacking attempts from China and Iran that targeted Trump, Vice President-elect JD Vance and other top officials.

The Trump transition confirmed its use of private emails, with spokesperson Brian Hughes telling Politico that all transition business is conducted on a transition-managed email server. The outlet reported: "We have implemented plans to communicate information securely as necessary," [Hughes] added, but declined to say what those plans entail.

In a statement in late November, transition co-chair Susie Wiles similarly cited unspecified "security and information protections" the team has in place, arguing that they replace the need for additional government and bureaucratic oversight.

Michael Daniel, a former White House cyber coordinator who now leads the nonprofit online security organization Cyber Threat Alliance, told Politico, "I can assure you that the transition teams are targets for foreign intelligence collection. There are a lot of countries out there that want to know: What are the policy plans for the incoming administration?"

You probably know what I'm going to write next. I'm going to write it anyway.

Younger readers might not fully appreciate the degree to which the 2016 presidential election focused on former Secretary of State Hillary Clinton's email protocols. Voters were told in no uncertain terms that this was one of the defining political issues of our time. As Election Day 2016 approached, and the United States faced the prospect of having a television personality elected to the nation's highest office, email was the one thing voters heard most about the more capable and more qualified candidate.

The fact that Clinton did not rely entirely on her state.gov address, the electorate was told, was evidence of her recklessness. She put the United States at risk, the argument went, by mishandling classified materials. For some, it might even have been literally criminal, culminating in "Lock her up" chants at Trump rallies. During the presidential campaign, then-House Speaker Paul Ryan went so far as to formally request that Clinton be denied intelligence briefings, insisting that her email practices were proof that she mishandled classified information and therefore couldn't be trusted.

When various observers, including me, said this was an outrageously foolish controversy, we received pushback from those who argued with great sincerity that this deserved to be an issue that dictated the outcome of one of the most important national elections in modern history.

Clinton, of course, narrowly lost to Trump, who was later credibly accused by federal prosecutors of improperly taking classified materials to his glorified country club in Florida, before relying on the kind of private email servers that sparked anti-Clinton hysteria eight years ago.

My point is not that Republicans have flip-flopped on the issue. Rather, the Trump-related developments serve as an example of insincerity. It's not that Trump and his party have changed their minds about the importance of email security and the hazards associated with eschewing official government accounts. The truth is simpler: they never actually cared about Clinton's tech practices in the first place. It was simply a convenient line of attack, which has since outlived its usefulness.

This post updates our related earlier coverage.
  • WWW.TECHNOLOGYREVIEW.COM
    Digital twins of human organs are here. They're set to transform medical treatment.
    A healthy heart beats at a steady rate, between 60 and 100 times a minute. Thats not the case for all of us, Im reminded, as I look inside a cardboard box containing around 20 plastic heartseach a replica of a real human one. The hearts, which previously sat on a shelf in a lab in West London, were generated from MRI and CT scans of people being treated for heart conditions at Hammersmith Hospital next door. Steven Niederer, a biomedical engineer at the Alan Turing Institute and Imperial College London, created them on a 3D printer in his office. One of the hearts, printed in red recycled plastic, looks as I imagine a heart to look. It just about fits in my hand, and the chambers have the same dimensions as the ones you might see in a textbook. Perhaps it helps that its red. The others look enormous to me. One in particular, printed in black plastic, seems more than twice the size of the red one. As I find out later, the person who had the heart it was modeled on suffered from heart failure. The plastic organs are just for educational purposes. Niederer is more interested in creating detailed replicas of peoples hearts using computers. These digital twins are the same size and shape as the real thing. They work in the same way. But they exist only virtually. Scientists can do virtual surgery on these virtual hearts, figuring out the best course of action for a patients condition. After decades of research, models like these are now entering clinical trials and starting to be used for patient care. Virtual replicas of many other organs are also being developed. Engineers are working on digital twins of peoples brains, guts, livers, nervous systems, and more. Theyre creating virtual replicas of peoples faces, which could be used to try out surgeries or analyze facial features, and testing drugs on digital cancers. 
The eventual goal is to create digital versions of our bodiescomputer copies that could help researchers and doctors figure out our risk of developing various diseases and determine which treatments might work best. Theyd be our own personal guinea pigs for testing out medicines before we subject our real bodies to them. To engineers like Niederer, its a tantalizing prospect very much within reach. Several pilot studies have been completed, and larger trials are underway. Those in the field expect digital twins based on organs to become a part of clinical care within the next five to 10 years, aiding diagnosis and surgical decision-making. Further down the line, well even be able to run clinical trials on synthetic patientsvirtual bodies created using real data. But the budding technology will need to be developed carefully. Some worry about who will own this highly personalized data and how it could be used. Others fear for patient autonomywith an uncomplicated virtual record to consult, will doctors eventually bypass the patients themselves? And some simply feel a visceral repulsion at the idea of attempts to re-create humans in silico. People will say I dont want you copying me, says Wahbi El-Bouri, who is working on digital-twin technologies. They feel its a part of them that youve taken. Getting digital Digital twins are well established in other realms of engineering; for example, they have long been used to model machinery and infrastructure. The term may have become a marketing buzzword lately, but for those working on health applications, it means something very specific. We can think of a digital twin as having three separate components, says El-Bouri, a biomedical engineer at the University of Liverpool in the UK. The first is the thing being modeled. That might be a jet engine or a bridge, or it could be a persons heart. Essentially, its what we want to test or study. 
The second component is the digital replica of that object, which can be created by taking lots of measurements from the real thing and entering them into a computer. For a heart, that might mean blood pressure recordings as well as MRI and CT scans. The third is new data that's fed into the model. A true digital twin should be updated in real time, for example with information collected from wearable sensors, if it's a model of someone's heart.

Taking measurements of airplanes and bridges is one thing. It's much harder to get a continuous data feed from a person, especially when you need details about the inner functions of the heart or brain, says Niederer.

And the information transfer should run both ways. Just as sensors can deliver data from a person's heart, the computer can model potential outcomes to make predictions and feed them back to a patient or health-care provider. A medical team might want to predict how a person will respond to a drug, for example, or test various surgical procedures on a digital model before operating in real life.

By this definition, pretty much any smart device that tracks some aspect of your health could be considered a kind of rudimentary digital twin. "You could say that an Apple Watch fulfills the definition of a digital twin in an unexciting way," says Niederer. "It tells you if you're in atrial fibrillation or not." But the kind of digital twin that researchers like Niederer are working on is far more intricate and detailed. It could provide specific guidance on which disease risks a person faces, what medicines might be most effective, or how any surgeries should proceed. We're not quite there yet.
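The loop described here (a real organ, its digital replica, and data flowing both ways between them) can be caricatured in a few lines of code. The sketch below is purely illustrative and is not any research group's actual model: the `HeartTwin` class, its five-reading window, and the 100 bpm threshold are all invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class HeartTwin:
    """Toy digital twin: model state updated by incoming measurements."""
    readings: list = field(default_factory=list)

    def ingest(self, bpm: float) -> None:
        """Data flowing in, e.g. heart-rate samples from a wearable sensor."""
        self.readings.append(bpm)

    def flag_sustained_high_rate(self) -> bool:
        """Feedback flowing out: flag five consecutive readings above 100 bpm."""
        recent = self.readings[-5:]
        return len(recent) == 5 and all(r > 100 for r in recent)

twin = HeartTwin()
for bpm in [72, 110, 115, 120, 118, 125]:   # simulated sensor feed
    twin.ingest(bpm)
print(twin.flag_sustained_high_rate())       # prints True
```

The point is only the shape of the system: measurements stream into a persistent model, and the model sends predictions back out. Real cardiac twins replace the toy threshold with physiological simulation.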
As things stand, engineers are technically creating patient-specific models based on previously collected hospital and research data, which is not continually updated. The most advanced medical digital twins are those built to match human hearts. These were the first to be attempted, partly because the heart is essentially a pump, a device familiar to engineers, and partly because heart disease is responsible for so much ill health and death, says El-Bouri. Now, advances in imaging technology and computer processing power are enabling researchers to mimic the organ with the level of fidelity that clinical applications require.

Building a heart

The first step to building a digital heart is to collect images of the real thing. Each team will have its own slightly different approach, but generally, they all start with MRI and CT scans of a person's heart. These can be entered into computer software to create a 3D movie. Some scans will also highlight any areas of damaged tissue, which might disrupt the way the electrical pulses that control heart muscle contraction travel through the organ.

The next step is to break this 3D model down into tiny chunks. Engineers use the term "computational mesh" to describe the result; it can look like an image of the heart made up of thousands of 3D pieces. Each segment represents a small collection of cells and can be assigned properties based on how well they are expected to propagate an electrical impulse. "It's all equations," says Natalia Trayanova, a biomedical engineering professor based at Johns Hopkins University in Baltimore, Maryland.

This computer model of the human heart shows how electrical signals pass through heart tissue. The model was created by Marina Strocchi, who works with Steven Niederer at Imperial College London. Courtesy of Marina Strocchi

As things stand, these properties involve some approximation.
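The idea of a mesh whose segments carry conduction properties can be sketched as a one-dimensional toy: a chain of tissue elements, each with a length and a conduction velocity, through which an activation wavefront passes. Real cardiac models solve coupled differential equations over millions of 3D elements; the numbers and structure below are invented purely to illustrate the principle.

```python
# Toy 1-D "mesh": each element is a small patch of cells with a length (mm)
# and a conduction velocity (mm/ms). Damaged tissue conducts slowly;
# a velocity of 0 means the impulse is blocked entirely.
elements = [
    {"length": 5, "velocity": 0.5},    # healthy
    {"length": 5, "velocity": 0.5},    # healthy
    {"length": 5, "velocity": 0.25},   # damaged: conducts at half speed
    {"length": 5, "velocity": 0.5},    # healthy
]

def activation_times(elements, t0=0.0):
    """Time (ms) at which the wavefront exits each element in turn."""
    t, times, blocked = t0, [], False
    for el in elements:
        if blocked or el["velocity"] == 0:
            blocked = True                 # nothing downstream ever activates
            times.append(float("inf"))
        else:
            t += el["length"] / el["velocity"]
            times.append(t)
    return times

print(activation_times(elements))  # [10.0, 20.0, 40.0, 50.0]
```

Swapping in a slow or blocking element immediately shifts every downstream activation time, which is the mesh-level intuition behind simulating how scarred tissue disrupts the heart's rhythm.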
Engineers will guess how well each bit of heart works by extrapolating from previous studies of human hearts or past research on the disease the person has. The end result is a beating, pumping model of a real heart. "When we have that model, you can poke it and prod it and see under what circumstances stuff will happen," says Trayanova.

Her digital twins are already being trialed to help people with atrial fibrillation, a fairly common condition that can trigger an irregular heartbeat: too fast or all over the place. One treatment option is to burn off the bits of heart tissue responsible for the disrupted rhythm. It's usually left to a surgical team to figure out which bits to target. For Trayanova, the pokes and prods are designed to help surgeons with that decision. Scans might highlight a few regions of damaged or scarred tissue. Her team can then construct a digital twin to help locate the underlying source of the damage. In total, the tool will likely suggest two or three regions to destroy, though in rare instances it has shown many more, says Trayanova: "They just have to trust us." So far, 59 people have been through the trial. More are planned.

In cases like these, the models don't always need to be continually updated, Trayanova says. A heart surgeon might need to run simulations only to know where to implant a device, for example. Once that operation is over, no more data might be needed, she says.

Quasi patients

At his lab on the campus of Hammersmith Hospital in London, Niederer has also been building virtual hearts. He is exploring whether his models could be used to find the best place to implant pacemakers. His approach is similar to Trayanova's, but his models also incorporate ECG data from patients. These recordings give a sense of how electrical pulses pass through the heart tissue, he says.
So far, Niederer and his colleagues have published a small trial in which models of 10 patients' hearts were evaluated by doctors but not used to inform surgical decisions. Still, Niederer is already getting requests from device manufacturers to run virtual tests of their products. A couple have asked him to choose places where their battery-operated pacemaker devices can sit without bumping into heart tissue, he says. Not only can Niederer and his colleagues run this test virtually, but they can do it for hearts of various different sizes. The team can test the device in hundreds of potential locations, within hundreds of different virtual hearts. "And we can do it in a week," he adds.

This is an example of what scientists call in silico trials: clinical trials run on a computer. In some cases, it's not just the trials that are digital. The volunteers are, too. El-Bouri and his colleagues are working on ways to create synthetic participants for their clinical trials. The team starts with data collected from real people and uses this to create all-new digital organs with a mishmash of characteristics from the real volunteers.

Specifically, one of El-Bouri's interests is stroke, a medical emergency in which clots or bleeds prevent blood flow in parts of the brain. For their research, he and his colleagues model the brain, along with the blood vessels that feed it. You could create lots and lots of different shapes and sizes of these brains based on patient data, says El-Bouri. Once he and his team create a group of synthetic patient brains, they can test how these clots might change the flow of blood or oxygen, or how and where brain tissue is affected. They can test the impact of certain drugs, or see what might happen if a stent is used to remove the blockage.
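One simple way to picture how synthetic participants can be derived from real volunteers is to fit a statistical distribution to a measured characteristic and then sample new, all-digital individuals from it. The sketch below does this for a single invented parameter; real pipelines combine many correlated anatomical measurements, and every number here is made up for illustration.

```python
import random
import statistics

# Hypothetical measurements from 10 real volunteers (one anatomical
# parameter, e.g. a vessel diameter in micrometers).
real_values = [112, 98, 105, 120, 101, 95, 118, 108, 103, 110]

# "Figure out the math behind the distribution": here, just mean and spread.
mu = statistics.mean(real_values)       # 107.0
sigma = statistics.stdev(real_values)

# Re-create it: sample as many synthetic "participants" as the trial needs.
random.seed(42)                          # reproducible mishmash
synthetic = [random.gauss(mu, sigma) for _ in range(200)]

print(f"real mean {mu:.1f}, synthetic mean {statistics.mean(synthetic):.1f}")
```

The synthetic cohort preserves the statistics of the real one while containing no actual patient, which is the core appeal: a small set of real scans can seed a much larger population of virtual trial subjects.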
For another project, El-Bouri is creating synthetic retinas. From a starting point of 100 or so retinal scans from real people, his team can generate 200 or more synthetic eyes, "just like that," he says. "The trick is to figure out the math behind the distribution of blood vessels and re-create it through a set of algorithms." Now he is hoping to use those synthetic eyes in drug trials, among other things to find the best treatment doses for people with age-related macular degeneration, a common condition that can lead to blindness.

These in silico trials could be especially useful for helping us figure out the best treatments for pregnant people, a group that is notoriously excluded from many clinical trials. That's for fear that an experimental treatment might harm a fetus, says Michelle Oyen, a professor of biomedical engineering at Wayne State University in Detroit.

Oyen is creating digital twins of pregnancy. It's a challenge to get the information needed to feed the models; during pregnancy, people are generally advised to avoid scans or invasive investigations they don't need. "We're much more limited in terms of the data that we can get," she says. Her team does make use of ultrasound images, including a form of ultrasound that allows the team to measure blood flow. From those images, they can see how blood flow in the uterus and the placenta, the organ that supports a fetus, might be linked to the fetus's growth and development, for example.

For now, Oyen and her colleagues aren't creating models of the fetuses themselves; they're focusing on the fetal environment, which includes the placenta and uterus. A baby needs a healthy, functioning placenta in order to survive; if the organ starts to fail, stillbirth can be the tragic outcome. Oyen is working on ways to monitor the placenta in real time during pregnancy. These readings could be fed back to a digital twin.
If she can find a way to tell when the placenta is failing, doctors might be able to intervene to save the baby, she says. "I think this is a game changer for pregnancy research," she adds, "because this basically gives us ways of doing research in pregnancy that [carries a minimal] risk of harm to the fetus or of harm to the mother."

In another project, the team is looking at the impact of cesarean section scars on pregnancies. When a baby is delivered by C-section, surgeons cut through multiple layers of tissue in the abdomen, including the uterus. Scars that don't heal well become weak spots in the uterus, potentially causing problems for future pregnancies. By modeling these scars in digital twins, Oyen hopes to be able to simulate how future pregnancies might pan out, and determine if or when specialist care might be called for.

Eventually, Oyen wants to create a full virtual replica of the pregnant uterus, fetus and all. But "we're not there yet; we're decades behind the cardiovascular people," she says. "That's pregnancy research in a nutshell," she adds. "We're always decades behind."

Twinning

It's all very well to generate virtual body parts, but the human body functions as a whole. That's why the grand plan for digital twins involves replicas of entire people. "Long term, the whole body would be fantastic," says El-Bouri. It may not be all that far off, either. Various research teams are already building models of the heart, brain, lungs, kidneys, liver, musculoskeletal system, blood vessels, immune system, eye, ear, and more. "If we were to take every research group that works on digital twins across the world at the moment, I think you could put [a body] together," says El-Bouri. "I think there's even someone working on the tongue," he adds.

The challenge is bringing together all the various researchers, with the different approaches and different code involved in creating and using their models, says El-Bouri. "Everything exists," he says.
"It's just putting it together that's going to be the issue."

In theory, such whole-body twins could revolutionize health care. Trayanova envisions a future in which a digital twin is just another part of a person's medical record, one that a doctor can use to decide on a course of treatment. But El-Bouri says he receives mixed reactions to the idea. "Some people think it's really exciting and really cool," he says. But he's also met people who are strongly opposed to the idea of having a virtual copy of themselves exist on a computer somewhere: "They don't want any part of that." Researchers need to make more of an effort to engage with the public to find out how people feel about the technology, he says.

There are also concerns over patient autonomy. If a doctor has access to a patient's digital twin and can use it to guide decisions about medical care, where does the patient's own input come into the equation? Some of those working to create digital twins point out that the models could reveal whether patients have taken their daily meds or what they've eaten that week. Will clinicians eventually come to see digital twins as a more reliable source of information than people's self-reporting?

Doctors should not be allowed to bypass patients and just "ask the machine," says Matthias Braun, a social ethicist at the University of Bonn in Germany. There would be no informed consent, which would infringe on autonomy and maybe cause harm, he says. After all, we are not machines with broken parts. Two individuals with the same diagnosis can have very different experiences and lead very different lives.

However, there are cases in which patients are not able to make decisions about their own treatment, for example if they are unconscious. In those cases, clinicians try to find a proxy: someone authorized to make decisions on the patient's behalf.
A digital psychological twin, trained on a person's medical data and digital footprint, could potentially act as a better surrogate than, for example, a relative who doesn't know the person's preferences, he says.

If using digital twins in patient care is problematic, in silico trials can also raise issues. Jantina de Vries, an ethicist at the University of Cape Town, points out that the data used to create digital twins and synthetic quasi patients will come from people who can be scanned, measured, and monitored. This group is unlikely to include many of those living on the African continent, who won't have ready access to those technologies. "The problem of data scarcity directly translates into technologies that are not geared to think about diverse bodies," she says.

De Vries thinks the data should belong to the public in order to ensure that as many people benefit from digital-twin technologies as possible. Every record should be anonymized and kept within a public database that researchers around the world can access and make use of, she says.

The people who participate in Trayanova's trials explicitly "give me consent to know their data, and to know who they are, [everything] about them," she says. The people taking part in Niederer's research also provide consent for their data to be used by the medical and research teams. But while clinicians have access to all medical data, researchers access only anonymized or pseudonymized data, Niederer says. In some cases, researchers will also ask participants to consent to sharing their fully anonymized data in public repositories. This is the only data that companies are able to access, he adds: "We do not share [our] data sets outside of the research or medical teams, and we do not share them with companies."

El-Bouri thinks that patients should receive some form of compensation in exchange for sharing their health data. Perhaps they should get preferential access to medications and devices based on that data, he suggests.
At any rate, "[full] anonymization is tricky, particularly if you're taking patient scans to develop twins," he says. "Technically, if someone tried really hard, they might be able to piece back who someone is through scans and twins of organs."

When I looked at those anonymous plastic hearts, stored in a cardboard box tucked away on a shelf in the corner of an office, they felt completely divorced from the people whose real, beating hearts they were modeled on. But digital twins seem different somehow. They're animated replicas, digital copies that certainly appear to have some sort of life. People often think, "Oh, this is just a simulation," says El-Bouri. "But it's a digital representation of an individual."
  • NYPOST.COM
    Google CEO Sundar Pichai says search giant has slashed manager roles by 10% in efficiency drive
Google CEO Sundar Pichai reportedly said he has slashed a tenth of the search giant's managerial roles since last year as part of a drive to become more efficient. In total, Google has reduced the number of managers, directors, and vice presidents within its workforce by 10%, Pichai said during an all-hands meeting on Wednesday.

A Google spokesperson said the structural changes described by Pichai have been rolling out since 2023 and did not represent job cuts beyond those previously reported.

CEO Sundar Pichai said he has slashed a tenth of Google's managerial roles since last year in an effort to become more efficient. Getty Images

Some managers were shifted to individual contributor roles, meaning they are no longer responsible for other employees. An unspecified number of other managers were laid off, the spokesperson added. Insider was first to report on Pichai's remarks. Shares of Google parent Alphabet were flat in Friday trading.

Pichai had previously identified "durable cost savings" as one of Google's key goals for 2024. This year alone, Google has slashed hundreds of jobs across multiple divisions, including its ad sales team, its core engineering team and the hardware division responsible for devices such as the Pixel, Nest and Fitbit. The biggest round of cuts occurred in 2023, when Google cut some 12,000 employees in a major bloodletting. The restructuring has played out as Google attempts to compete with Sam Altman's OpenAI and other burgeoning rivals in the artificial intelligence sector.

Google cut some 12,000 employees in 2023. AFP via Getty Images

Google is also in the midst of several high-profile legal battles that could upend its business model, including a looming breakup of its search business after a federal judge ruled it was a monopolist last August. Google is just one of many Big Tech giants that have slashed their workforces in recent months due to tightened economic conditions and a desire to shift more resources toward the artificial intelligence race. Mark Zuckerberg famously declared 2023 to be a "year of efficiency" at Meta while slashing tens of thousands of jobs at the Facebook and Instagram parent. Meta's middle managers were reportedly told to shift to individual contributor roles or leave as part of what was reportedly described internally as "flattening."
  • Lawmakers sound alarm over TSA facial recognition technology
  • PROSPECT.ORG
    The Gov't Is Shutting Down Because Musk Has Factories In China
The Government Is Shutting Down Because Elon Musk Has Factories in China
There's a mundane reason for the late-term chaos, and it's called a conflict of interest.
by David Dayen, December 20, 2024, 12:00 PM

In a sense, Donald Trump is picking up where he left off. Most of us remember the last official act of his presidency as the Capitol Riot, but just before that, just before Christmas 2020, he inserted himself late into a government funding fight that he had previously shown no interest in. Congress had agreed to a bipartisan year-end omnibus spending bill that included the first COVID relief measures in nine months. The bills were already passed, until Trump decided that some of the spending sounded funny, and individuals should get $2,000 checks instead of the $600 on offer. He refused to sign the omnibus without them.

Within hours, Democrats wrote an expanded checks bill and passed it through the House, but Mitch McConnell refused to let it advance, and Trump grudgingly signed the omnibus anyway, climbing all the way down. The $2,000 checks became an issue in two special elections in Georgia that Republicans lost. The road to the Biden agenda went through Trump's anger-fueled, failed gambit to renegotiate a congressional deal after it was complete.

Almost four years to the day, we're back here again. But this time, Trump is a side player in the show. He and his transition team reportedly had no problem with the 2024 version of a year-end spending bill until this week. Then Elon Musk started posting himself into a frenzy about how a perfectly normal bipartisan agreement represented a total betrayal, lying about the contents in the process.
Trump had to be roused to back up his co-president, getting House Speaker Mike Johnson (R-LA) to construct a partisan solution while inserting an eleventh-hour, two-year suspension of the debt limit to prevent the Republican trifecta from having to deal with that nuisance in the next Congress.

The remaining gasps of the Tea Party right, who see the debt limit only as an opportunity to force spending cuts, refused to go along with that piece, with 38 of them opposing the Johnson bill on the floor yesterday. Democrats weren't about to vote for a bill they had no say in (if the offer was to eliminate the debt limit for all time, they should go for it, but this is not that), and it failed. House Republicans are vowing to try again today, but they will likely need a two-thirds vote on anything today (it's procedurally complicated; suffice to say that they can't wait for the Rules Committee to report out a rule, forcing a vote under suspension of those rules). That means any bill will need Democratic votes, and nothing suggests that there are any negotiations with Democrats afoot. So the government will shut down at midnight.

That brings us back to the initial reason for the blowup: Elon's endless scroll. Which appears to be tied to none of the inaccurate reasons he offered on X, but an old standby for billionaires: personal financial and business incentives. The original bill would have made it harder for Musk to build Tesla factories in Shanghai.

This is the first scandal of the second Trump term, and take a long look, because it's going to look like all the other scandals: a conflict of interest among his impossibly wealthy advisers and aides (or from Trump himself) seeps over into policy. The measure at issue is known as the "outbound investment" provision.
We have heard for years about the problem of manufacturing businesses shipping jobs overseas to China, with its low worker wages and low environmental standards. China typically forces businesses wanting to locate factories in its country to transfer their technology and intellectual property to Chinese firms, which can then use that to undercut competitors in global markets, with state support.

Congress has been working itself into a lather about China for years now, and they finally came up with a way to deal with this issue. Sens. John Cornyn (R-TX) and Bob Casey (D-PA) have the flagship bill, which would either prohibit U.S. companies from investing in sensitive technologies in China, including semiconductors and artificial intelligence, or set up a broad notification regime around it. The bill would add some reporting requirements and enhanced reviews as well; in general, it expands restrictions that the Treasury Department has already put forward in regulatory rules. Codifying those rules into statute means that they cannot be changed by successive administrations.

Cornyn-Casey passed the Senate last year, and after about a year of legislative wrangling, a final outbound investment package made it into the year-end bill. "We're taking a necessary step to safeguard American innovation against bad actors and ensure our lasting dominance on the world stage," Cornyn said in a statement.

Funny story: Elon Musk's car company has a significant amount of, well, outbound investment. A Tesla Gigafactory in Shanghai opened in 2019; maybe a quarter of the company's revenue comes from China. Musk has endorsed building a second Tesla factory in China, where his grip on the electric-vehicle market has completely loosened amid domestic competition. He is working with the Chinese government to bring Full Self-Driving technology to China, in other words importing a technology that may be seen as sensitive.
Musk has battery and solar panel factories that are not yet in China, but he may want them there in the future. You can argue about whether the U.S. should be restricting investment in China. But it's incontrovertible that a billionaire who has a bunch of investments in China and wants to make more all of a sudden disrupted a normal congressional process that was going to restrict that investment, with a bunch of lies from his media platform. And lo and behold, when the new funding bill emerged, the outbound investment feature was dropped. In fact, all traces of provisions related to China were removed from the bill.

Donald Trump preens as someone determined to get tough on China. But he's empowered someone with serious business entanglements in China to seemingly serve as a barrier to any policies related to China over the next four years. The current White House resident (remember him?) picked up on this. In a statement, White House Press Secretary Karine Jean-Pierre noted that Republicans are breaking their word to support a bipartisan agreement that would lower prescription drug costs and make it harder to offshore jobs to China. The prescription drug reference has to do with a major reform of pharmacy benefit managers, which was also taken out of the new bill.

So Donald Trump, alleged leader of the realignment of populist Republicans, scuttled a spending bill primarily to shield the richest man in the world's investments in China and the profits of UnitedHealth Group, owners of the second-largest pharmacy benefit manager. This is going to be a constant theme of the next four years. Personal business interests are going to constantly take precedence over governance in the Trump/Musk White House. The word for this is oligarchy, and oligarchs don't think about the country first.
Millions of federal employees, including service members, won't see paychecks over Christmas, national parks will be shut down, and food inspections and countless other government functions will stop because Elon Musk doesn't want anyone poking around his business in China. Happy New Year.

David Dayen is the Prospect's executive editor. His work has appeared in The Intercept, The New Republic, HuffPost, The Washington Post, the Los Angeles Times, and more. His most recent book is Monopolized: Life in the Age of Corporate Power.
  • THEHILL.COM
    Pornhub to block access in Florida amid lawsuit over states age verification law
by Ty Russell, 12/19/24 1:36 PM ET

TAMPA, Fla. (WFLA) - Pornhub says it will block access to its website in Florida as an adult entertainment advocacy group sues over the state's new law requiring age verification.

HB-3, an act relating to online protections for minors, will go into effect in the Sunshine State on New Year's Day, requiring adult websites to prevent children from accessing them. Mike Stabile, a public policy director with Free Speech Coalition, a group that advocates for the adult entertainment industry, says he is concerned about the steps that will be used to verify someone's age. "When you're uploading an ID or when you're doing this type of verification, nothing is ever secure," Stabile said.

Free Speech Coalition is the lead plaintiff in a federal lawsuit seeking to prevent the law from taking effect over privacy and free speech concerns. "You are asking people who are legal adults to risk their privacy and risk possible surveillance to access the internet," Stabile said. Since 2022, 19 states have passed laws requiring age verification to access adult websites.

Ian Corby, the director of the global group Age Verification Providers Association, pushed back, saying personal information will be protected. "The Florida law includes, explicitly, a requirement for anonymous age verification done by a third party.
Our entire industry was created to prove your age online and not have to disclose your identity," Corby said. It's a measure that was passed with overwhelming bipartisan support. "All we are trying to do is to make the same laws apply in the online world as applied in the real world," Corby said.

Florida Attorney General Ashley Moody is listed as the defendant in the case. "As a mother, and Florida's Attorney General, I will fight aggressively in court to ensure the ability to protect Florida children," Moody said.

Aylo, Pornhub's parent company, released a statement after vowing to block access to users statewide as a form of protest. It read in part:

"First, to be clear, Aylo has publicly supported age verification of users for years, but we believe that any law to this effect must preserve user safety and privacy, and must effectively protect children from accessing content intended for adults. Unfortunately, the way many jurisdictions worldwide, including Florida, have chosen to implement age verification is ineffective, haphazard, and dangerous. Any regulations that require hundreds of thousands of adult sites to collect significant amounts of highly sensitive personal information is putting user safety in jeopardy. Moreover, as experience has demonstrated, unless properly enforced, users will simply access non-compliant sites or find other methods of evading these laws."
  • CFPB sues America's largest banks for 'allowing fraud to fester' on Zelle
The Consumer Financial Protection Bureau is suing America's three largest banks, accusing the institutions of failing to protect customers from fraud on Zelle, the payment platform they co-own. According to the suit, which also targets Early Warning Services LLC, Zelle's official operator, Zelle users have lost more than $870 million over the network's seven-year existence due to these alleged failures.

"The nation's largest banks felt threatened by competing payment apps, so they rushed to put out Zelle," said CFPB Director Rohit Chopra in a statement. "By their failing to put in place proper safeguards, Zelle became a gold mine for fraudsters, while often leaving victims to fend for themselves."

Among the charges:

- Poor identity verification methods, which have allowed bad actors to quickly create accounts and target Zelle users
- Allowing repeat offenders to continue to gain access to the platform
- Ignoring and failing to report instances of fraud
- Failing to properly investigate consumer complaints

The CFPB's suit seeks to change the platform's operations, as well as obtain a civil money penalty that would be paid into the CFPB's victims relief fund.

A spokesperson for Zelle called the suit misguided and politically motivated. "The CFPB's attacks on Zelle are legally and factually flawed, and the timing of this lawsuit appears to be driven by political factors unrelated to Zelle," Jane Khodos, Zelle spokesperson, said in an emailed statement.
"Zelle leads the fight against scams and fraud and has industry-leading reimbursement policies that go above and beyond the law."In a follow-up statement, a Zelle spokesperson called the magnitude of CFPB's claims about customer losses due to fraud "misleading," adding that "many reported fraud claims are not found to involve actual fraud after investigation."A JPMorgan spokesperson echoed those sentiments, calling it "a last ditch effort in pursuit of their political agenda.""The CFPB is now overreaching its authority by making banks accountable for criminals, even including romance scammers," the bank said. "Its a stunning demonstration of regulation by enforcement, skirting the required rulemaking process.Rather than going after criminals, the CFPB is jeopardizing the value and free nature of Zelle, a trusted payments service beloved by our customers."A Bank of America spokesperson highlighted the importance of Zelle to everyday users. "We strongly disagree with the CFPBs effort to impose huge new costs on the 2,200 banks and credit unions that offer the free Zelle service to clients," said William Halldin in an emailed statement. "23 million Bank of America clients have embraced Zelle, regularly using it to send money to friends, family and people they trust."Via email, a Wells Fargo spokesperson declined to comment. Launched in 2017, Zelle allows users to send and receive money electronically. The platform has previously come in for criticism by Senate Democrats: Most recently, Sen. Richard Blumenthal, D-Connecticut, found customers had disputed over $372 million in scams and fraud in 2023 with nearly three-quarters of the claimed losses never reimbursed by the banks. 
In its statement regarding the CFPB suit, Early Warning said reports of scams and fraud had decreased by nearly 50% in 2023, resulting in 99.95% of payments being sent without a report of scams or fraud.

The CFPB has announced a number of measures this month designed to protect consumers amid threats to its continued existence from the incoming second Trump administration.
  • WWW.THEVERGE.COM
Three of the biggest US banks are facing a lawsuit over widespread fraud on Zelle
The Consumer Financial Protection Bureau (CFPB) has filed a lawsuit against Zelle and the three banks that own it (Wells Fargo, Bank of America, and JPMorgan Chase), claiming they failed to protect consumers from widespread fraud. Zelle is a payment network designed to compete with platforms like Venmo and Cash App, but the CFPB says the banks rushed it to market, enabling fraud that's cost consumers more than $870 million since it launched in 2017.

The lawsuit cites Zelle's designs and features, including a limited identity verification process that involves assigning a token to a user's email address or mobile phone number, which they can use to verify their account with a one-time passcode. This setup makes it easier for scammers to take over accounts, as well as hide their own identities or pretend to be other institutions, the CFPB alleges.

Some of the problems the CFPB cites in Zelle's design. (CFPB complaint)

One of the most common Zelle scams involves bad actors impersonating a financial institution or a federal agency, who then trick customers into sending them money. After facing pressure from the CFPB, the banks backing Zelle started issuing refunds to victims of this type of scam last year. This latest lawsuit follows other CFPB actions to tighten regulation around digital wallet apps and payment networks.

The CFPB accuses Zelle and the banking trio of failing to track and quickly stop criminals on the platform, as they allegedly didn't relay information about known fraudulent transactions to other institutions in the payment network. It also alleges Bank of America, JPMorgan Chase, and Wells Fargo didn't properly address the risk of fraud despite the hundreds of thousands of complaints they received.

Zelle pushed back on the lawsuit in a statement published on Friday.
"The CFPB's attacks on Zelle are legally and factually flawed, and the timing of this lawsuit appears to be driven by political factors unrelated to Zelle," Zelle spokesperson Jane Khodos said. "The CFPB's misguided attacks will embolden criminals, cost consumers more in fees, stifle small businesses and make it harder for thousands of community banks and credit unions to compete."

The CFPB is asking the court to stop Zelle's parent company, Early Warning Services, and the banks from violating consumer protection laws and to compensate users, among other penalties.
  • NEWATLAS.COM
    Certain type of gaming provides substantial benefits to mental well-being
Gamers who are free to interact with and explore a game world at their own pace are more relaxed and have improved mental well-being, according to new research. The findings could open the door to using gaming as a therapeutic tool to counter stress and anxiety.

Video games and gaming have been the subject of much research over the years, with findings that run the gamut from "gaming is bad" to "it's good" and back again. But, just like the wide-ranging results from studies into gaming, different games require different play styles.

Researchers from Imperial College London in the UK and the University of Graz in Austria have examined the mental health benefits of playing open-world games, which are characterized by sprawling, detailed environments in which gameplay is not always linear and structured.

"In particular, in this study, we posit that open-world games with their expansive environments and opportunities for leisurely exploration may create a sense of escapism and relaxation," said the researchers. "Previous work found that casual video game play may significantly reduce stress and improve mood, suggesting potential benefits for players of open-world games, which often offer similarly engaging yet nonpressuring experiences."

Whether it's Minecraft, Elder Scrolls 5: Skyrim, Assassin's Creed: Valhalla, Elden Ring, Ghost of Tsushima, Red Dead Redemption 2, or Legend of Zelda: Tears of the Kingdom, open-world games are all about giving players freedom: the freedom to explore and to interact with their surroundings, to spend an afternoon honing a profession, or to pick up a bunch of side-quests and deviate from the main storyline. Importantly, with open-world games, players can do these things at their own pace.

"The self-directed playstyle of open-world games promotes a deeper connection with the game world, with a primary focus on exploration," the researchers explain.
"In contrast, competitive games, such as Fortnite, are structured around set objectives and a defined path. The competitive nature drives a high level of excitement and urgency. Open-world games, in contrast, often emphasize player-driven experiences over predefined goals. This allows players to set their own objectives at their own pace and preference, whether it is building a new settlement, taming wild creatures, or mapping out uncharted territories."

Being free to explore an immense world and its characters, such as with the PlayStation game 'Ghost of Tsushima,' can increase feelings of competence and satisfaction. (Sony)

To investigate the relationship between open-world gaming and mental health, the researchers adopted a mixed methods approach, combining quantitative and qualitative data. The qualitative data, collected from in-depth interviews with postgraduate students who game, showed that the so-called cognitive escapism that immersive game worlds provided allowed players to temporarily disengage from real-life stressors, improving their mood and psychological well-being. Quantitative data analysis showed that cognitive escapism had a significant positive effect on players' relaxation, which in turn had a significant positive effect on well-being.

"Well-being is a multifaceted construct that includes emotional, psychological, and social dimensions," the researchers said. "The immersive experiences in open-world games can contribute to psychological well-being by fulfilling basic psychological needs, such as autonomy, competence, and relatedness, as described by self-determination theory. The autonomy offered by open-world games allows players to make choices and control their in-game actions, which can lead to increased feelings of competence and satisfaction."

"The study demonstrates that open-world games offer substantial benefits for cognitive escapism, significantly improving relaxation and well-being among postgraduate students," they explained.
"By providing immersive environments that allow mental diversion, emotional relief, and meaning, these games can serve as valuable tools for enhancing psychological and emotional health."

The study's principal limitation, which the researchers acknowledge, is its reliance on self-reported data. They note optimistically, though, that while it's a limitation for the present study, it allows future researchers to incorporate physiological measures to examine the effects of open-world gaming on mental health. And, they say, it doesn't diminish the importance of their findings or what they mean.

"Open-world games could be used as therapeutic tools for stress and anxiety management, offering a cost-effective and accessible method to improve mental health," they said. "Developers should consider incorporating features that promote relaxation and cognitive escapism to enhance the well-being of players. The finding that open-world games may enhance people's well-being through enhanced escapism and relaxation is not trivial, given the growing evidence that other forms of entertainment, such as traditional social media, contribute to adolescent anxiety and depression. We invite future research to build on the findings of this study and examine the role of open-world games in people's lives further."

The study was published in the Journal of Medical Internet Research.
  • WWW.THEVERGE.COM
    Senators rip into automakers for selling customer data and blocking right to repair
A bipartisan group of senators is calling out the auto industry for its "hypocritical, profit-driven" opposition to national right-to-repair legislation while also selling customer data to insurance companies and other third-party interests.

In a letter sent to the CEOs of the top automakers, the three legislators, Sens. Elizabeth Warren (D-MA), Jeff Merkley (D-OR), and Josh Hawley (R-MO), urge them to better protect customer privacy while also dropping their opposition to state and national right-to-repair efforts.

"Right-to-repair laws support consumer choice and prevent automakers from using restrictive repair laws to their financial advantage," the senators write. "It is clear that the motivation behind automotive companies' avoidance of complying with right-to-repair laws is not due to a concern for consumer security or privacy, but instead a hypocritical, profit-driven reaction."

For years, the right-to-repair movement has largely focused on consumer electronics, like phones and laptops. But lately, the idea that you should get to decide how and where to repair your own products has grown to include cars, especially as more vehicles on the road have essentially become giant computers on wheels. Along with that, automakers have taken to collecting vast amounts of data on their millions of customers, including driving habits, that they then turn around and sell to third-party data brokers. Earlier this year, The New York Times published an investigation into General Motors' practice of providing microdetails about its customers' driving habits, including acceleration, braking, and trip length, to insurance companies without their consent.

Several states have passed right-to-repair laws in recent years, aiming to protect consumers from high prices and unscrupulous practices.
In 2020, Massachusetts voters approved a ballot measure to give car owners and independent repair shops greater access to vehicle repair data. But automakers sued to block the law, and four years later, the law remains dormant.

The auto industry claims to support right to repair. And some facts bear this out. For decades, small, independent auto body and repair shops flourished thanks to the idea that car maintenance is universal: that anyone with a socket wrench and some grease can repair or modify their own vehicle. But as cars have become more connected, a lot of that work now relies on data and access to the digital information needed to diagnose and repair vehicles. And right-to-repair advocates, along with independent repair shops, are worried that major automakers are trying to kill their businesses by funneling all the work to their franchised dealerships, which typically cost more than the smaller garages.

In the letter, Warren, Merkley, and Hawley demand that automakers drop their fierce opposition to these right-to-repair laws, calling it hypocritical and monopolistic.

"As the gatekeepers of vehicle parts, equipment, and data, automobile manufacturers have the power to place restrictions on the necessary tools and information for repairs, particularly as cars increasingly incorporate electronic components. This often leaves car owners with no other option than to have their vehicles serviced by official dealerships, entrenching auto manufacturers' dominance and eliminating competition from independent repair shops."

Automakers have raised cybersecurity concerns, including the specter of some bad actor remotely hacking your car while you drive it, as an excuse for fighting right-to-repair laws. But these concerns are based on speculative future risks rather than facts, the senators note.
They cite a Federal Trade Commission study that found no empirical evidence backing up the auto industry's claims that independent shops would be more or less likely to compromise customer data than authorized ones. It's more likely that auto companies want to limit access to vehicle data for profit-driven reasons, the senators say. And despite loudly proclaiming to care about cybersecurity, few companies actually comply with basic security standards when collecting, sharing, or selling consumer data.

"While carmakers have been fighting tooth and nail against right-to-repair laws that would require them to share vehicle data with consumers and independent repairers, they have simultaneously been sharing large amounts of sensitive consumer data with insurance companies and other third parties for profit, often without clear consumer consent. In fact, some car companies use the threat of increased insurance costs to push consumers to opt into safe driving features, and then use those features to collect and sell the user data."

The senators conclude by urging the auto CEOs to abandon their "hypocritical" opposition to right-to-repair laws, while also pressing them to answer a list of questions about their data-gathering practices.

"We're pushing these automakers to stop ripping Americans off," Warren said in a statement to The Verge. "Americans deserve the right to repair their cars wherever they choose, and independent repair shops deserve a chance to compete with these giants."
  • WWW.NEWSWEEK.COM
    Tesla recalls 700,000 vehicles over tire pressure warning failure
Tesla is recalling nearly 700,000 vehicles in the U.S. due to a malfunction in the tire pressure monitoring system (TPMS) that could fail to alert drivers to low tire pressure, increasing the risk of a crash.

The National Highway Traffic Safety Administration (NHTSA) announced on Thursday that the recall affects specific models, including the 2024 Cybertruck, 2017-2025 Model 3, and 2020-2025 Model Y vehicles. The NHTSA said the issue involves the TPMS warning light, which may fail to stay illuminated between drive cycles, preventing drivers from receiving a timely warning if their tire pressure is dangerously low. Driving with improperly inflated tires can lead to reduced vehicle control and a higher likelihood of accidents.

A Tesla Cybertruck electric vehicle, Nov. 27, 2024, Santa Monica, California. Tesla is recalling nearly 700,000 vehicles due to a problem with the tire pressure monitoring system's warning light, among them, Cybertrucks. (Kirby Lee/AP)

Tesla said that the issue would be addressed with an over-the-air software update, a solution the company frequently uses to resolve vehicle problems. It added that owner notification letters will be mailed starting Feb. 15, 2025. In the meantime, Tesla customers can reach the company's support team or contact NHTSA's Vehicle Safety Hotline for further details.

The latest recall marks another chapter in Tesla's ongoing recall activity in 2024. Earlier this year, the company recalled over 1.8 million vehicles in July due to a hood issue that could increase crash risk. In February, nearly 2.2 million Teslas were recalled because some dashboard warning lights were too small to be easily seen by drivers.

Tesla has also faced multiple recalls related to its highly anticipated Cybertruck. The company's electric pickup, which made its long-awaited customer debut in November 2023, now has seven recalls under its belt. The most recent recall, issued in November, involved around 2,400 Cybertruck units.

While these recalls raise concerns about quality control, Tesla's use of over-the-air updates has allowed the company to resolve many issues remotely. However, with the automaker's rapid expansion and new vehicle models hitting the road, including the Cybertruck, the frequency of recalls has garnered increased attention.

This article contains additional reporting from The Associated Press.
  • WWW.TECHSPOT.COM
    Amazon workers strike at seven US sites during year's busiest period
Why it matters: It's not just corporate employees leaving over the company's aggressive return-to-office policy that Amazon has to worry about. Workers at seven of its facilities walked off the job this morning in what their union is calling the "largest strike" against Amazon in US history.

According to the International Brotherhood of Teamsters, which represents 10,000 workers at ten Amazon facilities, warehouse workers in cities including New York, Atlanta, and San Francisco are taking part in the strike. The union had given Amazon a December 15 deadline to begin talks with employees, but the company has refused to negotiate contracts with unionized workers.

"If your package is delayed during the holidays, you can blame Amazon's insatiable greed. We gave Amazon a clear deadline to come to the table and do right by our members. They ignored it," Teamsters General President Sean M. O'Brien said in a statement. "These greedy executives had every chance to show decency and respect for the people who make their obscene profits possible. Instead, they've pushed workers to the limit and now they're paying the price. This strike is on them."

The prospect of not receiving your Amazon-bought goods in time for Christmas is certainly concerning. However, Amazon says that it does not expect the strike to impact its operations. A company spokesperson said the union continues to "intentionally mislead the public claiming that they represent 'thousands of Amazon employees and drivers.' They don't, and this is another attempt to push a false narrative."

Amazon added that the Teamsters have threatened, intimidated, and attempted to coerce Amazon employees and third-party drivers. It says such actions are illegal and the subject of multiple pending unfair labor practice charges against the union.
Striking during Amazon's busiest period of the year will cause some headaches for the company and customers, but its unionized facilities make up only about 1% of Amazon's hourly workforce, and areas such as New York have multiple warehouses and smaller delivery depots, writes Reuters.

Amazon has long faced accusations that its warehouse employees have to endure abusive and dangerous working conditions. The company continues to deny these claims, despite repeated strikes by staff.

A US Senate committee released a report on Amazon's warehouse safety practices on Sunday. It stated that at least two internal studies showed a link between the speed at which workers perform tasks and workplace injuries, but Amazon rejected many safety recommendations as it feared they may reduce productivity. Amazon said the report was "wrong on the facts and features selective, outdated information that lacks context and isn't grounded in reality."

November saw the launch of the Make Amazon Pay campaign, in which Amazon workers and allies in more than 20 countries strike and protest against what it calls the company's anti-worker and anti-democratic practices. The actions, which took place over Black Friday weekend, are now in their fifth year.
  • LABORNOTES.ORG
    Cops bust picket line in New York as Teamsters strike at seven Amazon warehouses
Amazon warehouse workers and delivery drivers at seven facilities in the metro areas of San Francisco, Chicago, Atlanta, Southern California, and New York City are out on strike today, in what the union says is the largest strike against Amazon in U.S. history. Unionized workers at Staten Island's JFK8 fulfillment center have also authorized a strike and could soon follow.

Workers in all these locations (five delivery stations and two fulfillment centers) have already shown majority support and demanded union recognition. The Teamsters set Amazon an ultimatum: recognize the unions and agree to bargaining by December 15, or face strikes. Amazon hasn't moved.

"They are skirting their responsibility as our employer to bargain with us on higher pay and safer working conditions," said Riley Holzworth, a driver who makes deliveries from the DIL7 delivery station in Skokie, Illinois.

At the DBK4 delivery station in Queens, New York, cops swarmed and arrested an Amazon driver who stopped his van in support of the strike. Then they forcibly broke the picket line. In anticipation of a possible strike at JFK8, police had camped out by the facility in advance.

The Teamsters have made organizing Amazon a priority; the New York Times reported that the union has committed $8 million to the project, plus access to its $300 million strike fund.

ALL YOU CAN THINK OF IS SLEEP

The strike's timing is strategic: package volumes balloon around the holidays, known as peak season, so it's no easy feat for Amazon to cope with disruption. During the 2023 holiday season, Amazon netted 29 percent of all global online orders.

To keep up with the surge in demand, many workers are forced to work mandatory overtime, childcare and other obligations be damned. "They give us one day extra, plus one hour extra a day," said Wajdy Bzezi, a shift lead steward who has worked at JFK8 since 2018. "I barely see my son."
"When you think of the holidays you think of spending time with your family, you think of reconnecting," said Ken Coates, a packer who has worked at JFK8 for five years. "And during peak, all you can think of is sleep."

To help meet the increased demand, the company has hired 250,000 seasonal workers across the country. This influx could also dilute strike power, though seasonal workers face the same stressors and often support the union push.

PEAK SEASON, INJURY SEASON

Rushed training for the seasonal hires has knock-on effects that leave everyone less safe. "Just this past month I think I ran into half a dozen new employees that didn't know how to do the job," Coates said. "Not due to any fault of their own, due entirely to the fault of their trainer not giving them adequate time."

For instance, Coates says, new workers assigned to rebin duties (moving items from the conveyor belt to a designated shelf so packers can package and ship them) can unintentionally push items too far across the shelf, where they fall off the other side and hit packers.

Peak season at Amazon means peak injuries for workers. A July interim report from the Senate's Health, Education, Labor and Pensions Committee found that injury rates skyrocket during Prime Day and the holiday season. During the week of Prime Day 2019, the report found, Amazon's rate of recordable injuries would correspond to more than 10 annual injuries per 100 workers, more than double the industry average. During that same period, Amazon's total rate of injuries (including those that do not need to be reported to the Occupational Safety and Health Administration, OSHA) would correspond to almost 45 injuries per 100 full-time workers.
That is to say, if they kept up the Prime Day pace, nearly half the workers would be injured in a year. "There hasn't been a year that I've worked at Amazon where we haven't broken a record in the number of packages we've handled," said Coates.

IT DOESN'T FEEL LIKE A JOB THAT SHOULD BE LEGAL

Even outside the busy season, the work is grueling. Amazon's relentless productivity quotas are nearly impossible to meet safely, forcing workers to barter their backs and knees for $18 an hour. A new report from the same Senate committee has found that Amazon's injury rate is having a significant and growing impact on the average injury rate for the entire warehouse sector.

Amazon is a corporation that transports goods and breaks down bodies. And why wouldn't it, when this level of exploitation is incentivized at every turn? Reporting requirements are easily bypassed; the company appears to be using its on-site health facilities to obscure the true number of injuries sustained by workers on the job, or to shift the blame to workers for using improper technique.

"I couldn't tell you how many times I've been injured on the job," said Coates. "In our bathroom there's a mirror that says, 'You're looking at the person who is most responsible for your safety.' It pisses me off every time I have to see it. That's just them passing off the buck."

The OSHA penalties for instances that do get reported are capped at around $16,000 for each serious violation, the report notes. For a company making $70,000 in profits per minute, that's just the cost of doing business. "It doesn't feel like a job that should be legal," Holzworth said. "I've had a lot of different jobs in this industry, and this one by far feels like my employer is really getting away with a lot."
A GLOBAL FIGHT

Workers organizing at key chokepoints in the supply chain have managed to extract a few concessions from Amazon, including increased pay for Chicago-area delivery station workers and the reinstatement of a suspended air hub employee in San Bernardino and another in Queens.

But Amazon has made significant investments that reduce its vulnerability. The expansion of its fulfillment network allows the company to reroute orders within its network of warehouses and reduces its reliance on any one location in the event of strikes or disruptions. Building sufficient power to tip the scales will require organizing across the global supply chain.

Around the world, the company has fiercely opposed organizing efforts, leaning on anti-union tactics like delaying elections, holding captive-audience meetings, and going on a hiring spree ahead of a union election to dilute the vote. Between 2022 and 2023, Amazon spent more than $17 million on union avoidance consultants. And where other companies are content to bring in these swindlers to train management, Amazon is sometimes cutting out the middleman and hiring them directly as managers.

TAKE SOME ACCOUNTABILITY

For delivery drivers, there's another wrinkle: the drivers officially work for third-party contractors known as delivery service partners (DSPs), allowing Amazon to skirt responsibility. When drivers unionized last year at a DSP in California called Battle-Tested Strategies, Amazon ended its contract and cut ties with the contractor, effectively firing the 84 drivers. (Amazon was the company's only client, and the company hasn't operated since.) This year, Amazon pulled the same stunt when drivers organized at a DSP in Illinois, Four Star Express Delivery.

Amazon maintains that since drivers are employed by DSPs, it has no duty to bargain with the workers.
But drivers call bullshit, insisting that Amazon meets the legal standard for a joint employer. "We drive your branded van, we wear your uniform," said Rubie Wiggins, a delivery driver at Amazon's DAX5 facility in Southern California. "Take some accountability."

WE CAN BRING THEIR STANDARDS HERE

Safety is a central concern and a key organizing issue. Delivery vans are packed to the brim, forcing some drivers to jam packages behind seats and into any available crevice. "It looks like a crypt in your van," said Andrew Wiggins, Rubie's husband, who works for the same DSP. "A lot of drivers put packages on the dash, wherever they can. It's very unsafe, but people are just doing what they have to do."

Rubie and Andrew talk regularly with UPS delivery drivers about the benefits of a strong union contract. "It's amazing what you hear that they have," Rubie said. "They have mechanics on site, they can watch their vehicles on site, we don't have any of that. When you see that UPS is less profitable than Amazon and they're able to do that for their drivers, you really want to tell Amazon, 'Please take care of me like that.'"

"At Amazon it's like, in order to perform, you have to think in your head a complete system of exact steps," Holzworth said. "I'm gonna organize my packages in this way and as soon as I stop, I'm gonna engage the brake, pull out the keys, take off my seatbelt, in this order every single time so that you're wasting as few seconds as possible."

"If Amazon can have this as their business model, what's the future working conditions gonna look like for other corporations?" Rubie Wiggins said. "We have nieces and nephews, I have younger brothers. What's the workforce gonna look like for them in a couple years?"

"You get a lot of 'Why don't you work for UPS?'" she said. "We're drivers already. We can bring their standards here. We can start making the working conditions better here."
  • WWW.FOXNEWS.COM
    Massive data breach at federal credit union exposes 240,000 members
Massive data breach at federal credit union exposes 240,000 members
Find out what information has been compromised and how to stay safe
Published December 19, 2024, 10:00am EST

SRP Federal Credit Union, a South Carolina-based financial institution, had a major data breach impacting more than 240,000 people. The credit union handles highly sensitive information of hundreds of thousands of Americans, which is now in the hands of cybercriminals. SRP revealed in a notice that the data breach was part of a two-month attack by hackers, raising concerns about how it took the company so long to detect unauthorized entry into its systems. I discuss the details of the data breach, its impact on people and what you need to do to stay safe.

What you need to know

SRP Federal Credit Union has reported a data breach that exposed the personal information of more than 240,000 individuals, according to documents filed Friday with regulators in Maine and Texas. The company said it discovered suspicious activity on its network and notified law enforcement. An investigation determined that hackers accessed the credit union's systems between Sept. 5 and Nov. 4, potentially acquiring sensitive files. The investigation concluded on Nov. 22, the company said.

SRP did not specify the exact details exposed in its notice to Maine regulators, saying only that names and government-issued identification were affected in the cyberattack. However, in a filing with Texas regulators, the company said names, Social Security numbers, driver's license numbers, dates of birth and financial information, including account numbers and credit or debit card numbers, were compromised. SRP said the breach did not affect its online banking or core processing systems.
Who's responsible for the breach

SRP has not disclosed who was behind the attack or the attackers' motives. However, the ransomware group Nitrogen claimed responsibility last week, alleging it had stolen 650 GB of customer data, according to The Record. Ransomware attacks use malicious software to block access to a victim's files, systems or networks and demand payment to restore access.

The credit union could face legal challenges following the data breach, as Oklahoma City-based Murphy Law Firm is investigating claims on behalf of individuals whose personal information was exposed. The firm is also encouraging affected individuals to join a potential class-action lawsuit. SRP will provide impacted individuals with free-of-charge identity theft protection services, so take advantage of it to safeguard your information. We reached out to SRP for comment but did not hear back by our deadline.

7 ways you can protect yourself from the SRP data breach

If you have received a notice from SRP Federal Credit Union about the data breach, consider taking the following steps to protect yourself.

1. Monitor your accounts: Regularly check your bank accounts, credit card statements and other financial accounts for any unauthorized transactions or suspicious activity. Contact one of the three major credit bureaus (Equifax, Experian or TransUnion) to place a fraud alert on your credit report, making it harder for identity thieves to open accounts in your name.

2. Freeze your credit: Consider freezing your credit to prevent new accounts from being opened without your consent. This service is free and can be lifted at any time.

3. Use identity theft protection services: Consider enrolling in identity theft protection services that monitor your personal information and alert you to potential threats.
These services can help you detect and respond to identity theft more quickly. Some identity theft protection services also offer insurance and assistance with recovering from identity theft, providing additional peace of mind. See my tips and best picks on how to protect yourself from identity theft. 4. Change your passwords: Update passwords for your online accounts, especially those related to banking and email. Use strong, unique passwords and consider using a password manager to generate and store complex passwords. Also, enable two-factor authentication for added security. 5. Beware of phishing scams: Be cautious of emails, texts or calls claiming to be from SRP or related organizations. Avoid clicking on links or providing personal information unless you verify the sender. The best way to safeguard yourself from malicious links is to have antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe. Get my picks for the best 2024 antivirus protection winners for your Windows, Mac, Android and iOS devices. 6. Keep your device's operating system updated: Make sure your cellphone and other devices automatically receive timely operating system updates. These updates often include important security patches that protect against new vulnerabilities exploited by hackers. For reference, see my guide on how to keep all your devices updated. 7. Invest in personal data removal services: Consider services that scrub your personal information from public databases. This reduces the chances of your data being exploited in phishing or other cyberattacks after a breach. Check out my top picks for data removal services here. Kurt's key takeaway: The SRP Federal Credit Union data breach is a harsh reminder of how vulnerable our sensitive information can be. 
Over 240,000 individuals had their personal data compromised, including Social Security numbers, driver's licenses and financial details. Even more alarming is the two-month window hackers had to exploit the credit union's systems before being detected, which highlights significant gaps in cybersecurity protocols. If you're an SRP customer, monitor your accounts closely, enable fraud alerts and consider identity theft protection services to stay ahead of potential threats. Do you think financial institutions should be held more accountable for data breaches like this one? Let us know by writing us at Cyberguy.com/Contact. For more of my tech tips and security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter. Copyright 2024 CyberGuy.com. All rights reserved. Kurt "CyberGuy" Knutsson is an award-winning tech journalist who has a deep love of technology, gear and gadgets that make life better, with his contributions for Fox News & FOX Business beginning mornings on "FOX & Friends." Got a tech question? Get Kurt's free CyberGuy Newsletter, share your voice, a story idea or comment at CyberGuy.com.
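The strong, unique passwords the article recommends are easy to generate programmatically rather than by hand. A minimal sketch using Python's standard `secrets` module (the function name and default length here are illustrative choices, not from the article):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a random password from letters, digits, and punctuation,
    drawing from a cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Generate a distinct password per account, then keep them in a password manager.
print(generate_password(20))
```

A dedicated password manager remains the better option for most people, since it also handles storage and autofill; this only illustrates the generation step.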
  • WWW.404MEDIA.CO
    DHS Says China, Russia, Iran, and Israel Are Spying on People in US with SS7
    The Department of Homeland Security (DHS) believes that China, Russia, Iran, and Israel are the primary countries exploiting security holes in telecommunications networks to spy on people inside the United States, which can include tracking their physical movements and intercepting calls and texts, according to information released by Senator Ron Wyden. The news provides more context around the use of SS7, the exploited network and protocol, against phones in the country. In May, 404 Media reported that an official inside DHS's Cybersecurity and Infrastructure Security Agency (CISA) broke with his department's official narrative and publicly warned about multiple SS7 attacks on U.S. persons in recent years. Now, the newly disclosed information provides more specifics on where at least some SS7 attacks are originating from. The information is included in a letter the Department of Defense (DoD) wrote in response to queries from the office of Senator Wyden. The letter says that in September 2017, DHS personnel gave a presentation on SS7 security threats at an event open to U.S. government officials, and that Wyden's staff attended the event and saw the presentation. "One slide identified the primary countries reportedly using telecom assets of other nations to exploit U.S. subscribers," it continues.
  • ARSTECHNICA.COM
    Companies issuing RTO mandates lose their best talent: Study
    S&P 500 study: Companies issuing RTO mandates lose their best talent. Despite the risks, firms and Trump are eager to get people back into offices. Scharon Harding, Dec 17, 2024, 3:19 pm. Credit: Getty. Return-to-office (RTO) mandates have caused companies to lose some of their best workers, a study tracking over 3 million workers at 54 "high-tech and financial" firms in the S&P 500 index has found. These companies also have greater challenges finding new talent, the report concluded. The paper, Return-to-Office Mandates and Brain Drain [PDF], comes from researchers at the University of Pittsburgh, as well as Baylor University, The Chinese University of Hong Kong, and Cheung Kong Graduate School of Business. The study, which was published in November and spotted this month by human resources publication HR Dive, cites Ars Technica reporting and was conducted by collecting information on RTO announcements and sourcing data from LinkedIn. The researchers said they only examined companies with data available for at least two quarters before and after they issued RTO mandates. The researchers explained: "To collect employee turnover data, we follow prior literature ... and obtain the employment history information of over 3 million employees of the 54 RTO firms from Revelio Labs, a leading data provider that extracts information from employee LinkedIn profiles. We manually identify employees who left a firm during each period, then calculate the firm's turnover rate by dividing the number of departing employees by the total employee headcount at the beginning of the period. We also obtain information about employees' gender, seniority, and the number of skills listed on their individual LinkedIn profiles, which serves as a proxy for employees' skill level." There are limits to the study, however. 
The researchers noted that the study "cannot draw causal inferences based on our setting." Further, smaller firms and firms outside of the high-tech and financial industries may show different results. Although not mentioned in the report, relying on data from a social media platform could also yield inaccuracies, and the number of skills listed on a LinkedIn profile may not accurately depict a worker's skill level. Still, the study provides insight into how employees respond to RTO mandates and the effect they have on corporations and available talent at a time when entities like Dell, Amazon, and the US government are getting stricter about in-office work. Higher turnover rates: The researchers concluded that the average turnover rates for firms increased by 14 percent after issuing return-to-office policies. "We expect the effect of RTO mandates on employee turnover to be even higher for other firms," the paper says. The researchers included testing to ensure that the results stemmed from RTO mandates rather than time trends. For example, they found that there were no significant increases in turnover rates during any of the five quarters prior to the RTO announcement quarter. Potentially alarming for employers is the study's finding that senior and skilled employees were more likely to leave following RTO mandates. This aligns with a study from University of Chicago and University of Michigan researchers published in May that found that Apple and Microsoft saw senior-level employee bases decrease by 5 percentage points and SpaceX a decrease of 5 percentage points. (For its part, Microsoft told Ars that the report did not align with internal data.) Senior employees are expected to be more likely to leave, the new report argues, because such workers have "more connections with other companies" and have easier times finding new jobs. 
Further, senior, skilled employees are dissatisfied when management blames remote work for low productivity. Similarly, the report supports concerns from some RTO-resistant employees that back-to-office mandates have a disproportionate impact on certain groups, like women, which the researchers said show "more pronounced" attrition rates following RTO mandates: "Importantly, the effect on female employee turnover is almost three times as high as that on male employees ... One possible reason for these results is that female employees are more affected by RTO mandates due to their greater family responsibilities, which increases their demand for workplace flexibility and work-life balance." Trouble finding talent: RTO mandates also have a negative impact on companies' ability to find new employees, the study found. After examining over 2 million job postings, the researchers concluded that companies with RTO mandates take longer to fill job vacancies than before: "On average, the time it takes for an RTO firm to fill its job vacancies increases by approximately 23 percent, and the hire rate decreases by 17 percent after RTO mandates." The researchers also found significantly higher hiring costs induced by RTO mandates and concluded that the findings combined suggest that firms lose their best talent after RTO mandates and face significant difficulties replacing them. The "weakest form of management": RTO mandates can obviously drive away workers who prioritize work-life balance, avoiding commutes and associated costs, and who feel more productive working in a self-controlled environment. The study, however, points to additional reasons RTO mandates make some people quit. One reason cited is RTO rules communicating "a culture of distrust that encourages management through monitoring." 
The researchers noted that Brian Elliott, CEO at Work Forward and a leadership adviser, described this as the "weakest form of management, and one that drives down employee engagement" in a November column for MIT Sloan Management Review. Indeed, RTO mandates have led to companies like Dell performing VPN tracking, and companies like Amazon, Google, JP Morgan Chase, Meta, and TikTok reportedly tracking badge swipes, resulting in employee backlash. The new study also pointed to RTO mandates making employees question company leadership and management's decision-making abilities. We saw this with Amazon, when over 500 employees sent a letter to Amazon Web Services (AWS) CEO Matt Garman, saying that they were "appalled to hear the non-data-driven explanation you gave for Amazon imposing a five-day in-office mandate." Employees are also put off by the drama that follows an aggressive RTO policy, the report says: "An RTO announcement can be a big and sudden event that is distasteful to most employees, especially when the decision has not been well communicated, potentially triggering an immediate response of employees searching for and switching to new jobs." After Amazon announced it would kill remote work in early 2025, a study by online community Blind found that 73 percent of 2,285 Amazon employees surveyed were considering looking for another job in response to the mandate. A wave of voluntary terminations: The paper points to reasons that employees may opt to stay with a company after RTO mandates. Those reasons include competitive job markets, personal costs associated with switching jobs, loyalty, and interest in the collaborative and social aspects of working in-office. However, with the amount of evidence that RTO mandates drive employees away, some question whether return-to-office mandates are subtle ways to reduce headcount without layoffs. 
Comments like AWS's Garman reportedly telling workers that if they don't like working in an office, "there are other companies around" have fueled this theory, as has Dell saying remote workers can't get promoted. A BambooHR survey of 1,504 full-time US employees, including 504 HR managers or higher, conducted in March found that 25 percent of VP and C-suite executives and 18 percent of HR pros examined "admit they hoped for some voluntary turnover during an RTO." Yesterday, President-elect Donald Trump said he plans to do away with a deal that allowed the Social Security Administration's union to work remotely into 2029, and that those who don't come back into the office will "be dismissed." Similarly, Elon Musk and Vivek Ramaswamy, who Trump announced will head a new Department of Government Efficiency, wrote in a November op-ed that "requiring federal employees to come to the office five days a week would result in a wave of voluntary terminations that we welcome." Helen D. (Heidi) Reavis, managing partner at Reavis Page Jump LLP, an employment, dispute resolution, and media law firm, previously told Ars that employers "can face an array of legal consequences for encouraging workers to quit via their RTO policies." Still, RTO mandates are set to continue being a point of debate and tension at workplaces into the new year. Scharon Harding, Senior Product Reviewer: Scharon is Ars Technica's Senior Product Reviewer, writing news, reviews, and analysis on consumer technology, including laptops, mechanical keyboards, and monitors. She's based in Brooklyn. 
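The turnover metric described in the study's methodology (departing employees divided by headcount at the start of the period) is straightforward to compute. A small sketch with made-up numbers, not figures from the paper; note that the study's 14 percent average increase is a relative change, not 14 percentage points:

```python
def turnover_rate(departures: int, headcount_at_start: int) -> float:
    """Period turnover rate: departures divided by headcount at the period's start."""
    if headcount_at_start <= 0:
        raise ValueError("headcount_at_start must be positive")
    return departures / headcount_at_start

# Hypothetical firm: 1,000 employees at the start of the quarter, 80 departures.
base = turnover_rate(80, 1000)  # 0.08, i.e., 8% quarterly turnover
# A 14 percent relative increase lifts that to roughly 9.1%, not to 22%.
post_rto = base * 1.14
print(f"{base:.1%} -> {post_rto:.1%}")
```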
  • WWW.NPR.ORG
    FBI warns Americans to keep their text messages secure: What to know
    FBI warns Americans to keep their text messages secure: What to know. The FBI and other agencies are encouraging people to use end-to-end encryption, citing what they say is a sustained hacking operation linked to China. In this 2021 photo, a smartphone's screen shows messaging apps including WhatsApp, Signal and Telegram (Damien Meyer/AFP via Getty Images). It's not often that an FBI warning prompts a Snopes fact check. But the agency's urgent message this month to Americans, often summarized as "stop texting," surprised many consumers. The warning from the FBI and the Cybersecurity and Infrastructure Security Agency (CISA) highlighted vulnerabilities in text messaging systems that millions of Americans use every day. The U.S. believes hackers affiliated with China's government, dubbed Salt Typhoon, are waging a "broad and significant cyber-espionage campaign" to infiltrate commercial telecoms and steal users' data, and in isolated cases, to record phone calls, a senior FBI official who spoke to reporters on condition of anonymity said during a Dec. 3 briefing call. The new guidance may have surprised consumers but not security experts. "People have been talking about things like this for years in the computer security community," Jason Hong, a professor at Carnegie Mellon University's School of Computer Science, told NPR. "You should not rely on these kinds of unencrypted communications because of this exact reason: There could be snoopers in lots of infrastructure." So what should you do to keep your messages private? "Encryption is your friend" for texts and phone calls, Jeff Greene, CISA's executive assistant director for cybersecurity, said on the briefing call. "Even if the adversary is able to intercept the data, if it is encrypted, it will make it impossible, if not really hard, for them to detect it. So our advice is to try to avoid using plain text." 
In full end-to-end encryption, tech companies make a message decipherable only by its sender and receiver, not by anyone else, including the company. It has been the default on WhatsApp, for instance, since 2016. Along with a promise of greater security, it makes companies "warrant-proof" from surveillance efforts. The good news for people who use Apple phones is that iMessage and FaceTime are also already end-to-end encrypted, says Hong. For Android phones, encryption is available in Google Messages if the senders and recipients all have the feature turned on. But messages sent between iPhones and Android phones are less secure. The simplest way to ensure your messages are safe from snooping is to use an end-to-end encrypted app like Signal or WhatsApp, says Eva Galperin, director of cybersecurity at the Electronic Frontier Foundation (EFF). With these apps, "your communications are end-to-end encrypted every single time," she says. Galperin highlights another danger: A hacker who has managed to get your ID and password for a website can monitor your text messages to intercept a one-time passcode that's used in two-factor authentication (2FA). "This is a really serious security risk," Galperin says. She recommends getting 2FA codes through an app like Google Authenticator or Authy, or by using a physical security key to verify access. The FBI and CISA also advise users to set their phones to update operating systems automatically. "Most compromises of systems do not involve taking advantage of vulnerabilities that no one else knows about," Galperin says, adding that "often, the maker of the product has in fact figured out what the vulnerability is, fixed it and pushed out a patch in the form of a security update." How at risk are you? You should be aware of your own "threat model," a core concept in computer security. Hong says it boils down to three questions: What are you trying to protect? How important is it to you? 
And what steps do you need to take to protect it? If the most valuable items on your phone are family photos, he says, you probably shouldn't worry about foreign hackers targeting you. But what if you occasionally text about national or corporate secrets or politically sensitive data? "If you are in business, if you are a journalist, if you are somebody in contact with democracy protesters in Hong Kong or Shenzhen or Tibet, then you might want to assume that your phone calls and text messages are not safe from the Chinese government," Galperin of the EFF says. Bad actors such as cybercriminals might have different objectives, Hong says, "but if you just do a few relatively simple things, you can actually protect yourself from the vast majority of those kinds of threats." What are the hackers doing? The FBI and CISA raised the alarm two months after The Wall Street Journal reported that hackers linked to the Chinese government have broken into systems that enable U.S. law enforcement agencies to conduct electronic surveillance operations under the Communications Assistance for Law Enforcement Act (CALEA). "These are for legitimate wiretaps that have been authorized by the courts," Hong says. But in hackers' hands, he says, the tools could potentially be used "to surveil communications and metadata for lots of people. And it seems like the [hackers'] focus is primarily Washington, D.C." The FBI says that the attack was far broader than the CALEA system and that the hackers are still accessing telecom networks. The U.S. has been working since late spring to determine the extent of their activities. This month, the Biden administration said at least eight telecommunications infrastructure companies in the U.S., and possibly more, had been broken into by Chinese hackers. The hackers stole a large amount of metadata, the FBI and CISA said. In far fewer cases, they said, the actual content of calls and texts was targeted. 
As agencies work to oust the hackers, the FBI has called for Americans to embrace tight encryption, an about-face, Galperin says, after years of insisting that law enforcement agencies need a "back door" to access communications. The agencies also want companies to bolster their security practices and work with the government to make their networks harder to compromise. "The adversaries we face are tenacious and sophisticated, and working together is the best way to ensure eviction," the senior FBI official said during the news briefing. As for the risk to everyday consumers, security experts like Hong and Galperin say that with vast amounts of information traveling between our phones, they want to see people get more help in protecting themselves. "I think it's really incumbent on software developers and these companies to have much better privacy and security by default," Hong says. "That way you don't need a Ph.D. to really understand all the options and to be secure."
  • ARSTECHNICA.COM
    Trump FCC chair wants to revoke broadcast licenses, but the 1st Amendment might stop him | Brendan Carr backs Trump's war against media, but revoking licenses won't be easy.
    Speech police: Trump FCC chair wants to revoke broadcast licenses, but the 1st Amendment might stop him. Brendan Carr backs Trump's war against media, but revoking licenses won't be easy. Jon Brodkin, Dec 17, 2024, 7:00 am. President-elect Donald Trump speaks to Brendan Carr, his intended pick for Chairman of the Federal Communications Commission, as he attends a SpaceX Starship rocket launch on November 19, 2024 in Brownsville, Texas. Credit: Getty Images | Brandon Bell. President-elect Donald Trump's pick to lead the Federal Communications Commission, Brendan Carr, wants the FCC to crack down on news broadcasters that he perceives as being unfair to Trump or Republicans in general. Carr's stated goals would appear to mark a major shift in the FCC's approach to broadcasters. Carr's predecessors, including outgoing Chairwoman Jessica Rosenworcel and Republican Ajit Pai, who served in the first Trump administration, both rejected Trump's calls to punish news networks for alleged bias. Carr has instead embraced Trump's view that broadcasters should be punished for supposed anti-conservative bias. Carr has threatened to revoke licenses by wielding the FCC's authority to ensure that broadcast stations using public airwaves operate in the public interest, despite previous chairs saying the First Amendment prevents the FCC from revoking licenses based on content. Revoking licenses or blocking license renewals is difficult legally, experts told Ars. 
But Carr could use his power as FCC chair to pressure broadcasters and force them to undergo costly legal proceedings, even if he never succeeds in taking a license away from a broadcast station. "Look, the law is very clear," Carr told CNBC on December 6. "The Communications Act says you have to operate in the public interest. And if you don't, yes, one of the consequences is potentially losing your license. And of course, that's on the table. I mean, look, broadcast licenses are not sacred cows." Carr fights Trump's battles: Carr has said his FCC will take a close look at a complaint regarding a CBS 60 Minutes interview with Kamala Harris before the election. Trump criticized the editing of the interview and said that "CBS should lose its license." In an interview with Fox News, Carr said there is "a news distortion complaint at the FCC still, having to do with CBS, and CBS has a transaction before the FCC." He was referring to a pending deal involving Skydance and Paramount, which owns and operates 28 local broadcast TV stations of the CBS Television Network. "I'm pretty confident that news distortion complaint over the CBS 60 Minutes transcript is something that is likely to arise in the context of the FCC's review of that transaction," Carr said. Carr also alleged that NBC putting Harris on Saturday Night Live before the election was "a clear and blatant effort to evade the FCC's Equal Time rule," even though NBC gave Trump two free 60-second messages in order to comply with that rule. In Carr's CNBC interview on December 6, he raised the specter of imposing new rules for broadcasters and taking action against NBC over the Saturday Night Live episode. "I don't want to be the speech police," Carr told CNBC. "But there is something that's different about broadcasters than, say, podcasters, where you have to operate in the public interest. So right now, all I'm saying is maybe we should start a rulemaking to take a look at what that means. There's other issues as well. 
Look, there's a news distortion complaint that's still hanging out there involving CBS, with NBC and SNL, we had some issues potentially with the Equal Time provision. I just think we need to sort of reinvigorate the FCC's approach to these issues, as Congress has envisioned." We emailed Carr with questions about his specific plans for challenging broadcasters' licenses and whether he still believes that NBC attempted to evade the Equal Time rule, but we did not receive a response. Carr's tough task: The Carr FCC and Trump administration "can hassle the living daylights out of broadcasters or other media outlets in annoying ways," said Andrew Jay Schwartzman, who is senior counselor for the Benton Institute for Broadband & Society and previously led the nonprofit Media Access Project, a public interest telecommunications law firm. At the FCC, "you can harass, you can kind of single some broadcasters out, and you can hold up some of their applications," Schwartzman said in a phone interview with Ars. But that doesn't mean Carr can put broadcasters out of business. "They're not going to revoke licenses. It's just legally just not doable. He can't change the precedents and the statute on that," Schwartzman said. Schwartzman explained in a recent memo that "under the Communications Act, revocation of a license, which means taking it away in the middle of a license term, is essentially impossible. The legal standard is so high that the only time that the FCC tries to revoke a license is when a station (typically a mom-and-pop AM) goes dark." Schwartzman wrote the memo in response to Trump's demand that the FCC punish CBS. The FCC doesn't license TV networks such as CBS, NBC, or ABC, but it could punish individual stations owned by those companies. The FCC's licensing authority is over broadcast stations, many of which are owned and operated by a big network. 
Other stations are affiliated with the networks but have different ownership. Although revoking a license in the middle of a license term is effectively impossible, the FCC can go after a license when it's up for renewal, Schwartzman said. But Carr will have to go through most of the next four years without any opportunity to challenge a broadcast TV license renewal. According to the FCC's list of renewal dates, there are no TV station licenses up for renewal until 2028. That won't give Carr enough time to reject a renewal and win in court, Schwartzman said. "A license renewal litigation that would take years can't even begin until Trump is out of office," he told Ars. Light years away from previous Republican chair: Carr would face a high legal standard even if there were licenses up for renewal in 2025. Schwartzman's memo said that "the First Amendment bars denial of renewal based on program content, and certainly not based on the political views expressed.... The only way that a broadcaster could theoretically get into trouble on renewal would be a character problem based on being found to have lied to the government or conviction of major felonies." A license renewal isn't the FCC's only avenue for challenging broadcasters. As noted earlier in this article, Carr has discussed investigating bias allegations during proceedings on license transfers that happen in connection with mergers and acquisitions. Carr can "hold up a transfer" when a company tries to sell broadcast stations and "hassle people that way," Schwartzman told Ars. It's clear from his public statements that Carr sees the FCC's responsibility over broadcasters much differently than Pai, Trump's first FCC chair. Pai, a Republican who teamed up with Carr on deregulating the broadband industry and many other conservative priorities, rejected the idea of revoking broadcast licenses in 2017 despite Trump's complaints about news networks. 
Pai said that the FCC "under my leadership will stand for the First Amendment" and that "the FCC does not have the authority to revoke a license of a broadcast station based on the content of a particular newscast." More recently, Rosenworcel rejected Trump's call to revoke licenses from CBS. "As I've said before, the First Amendment is a cornerstone of our democracy," she said in October this year. "The FCC does not and will not revoke licenses for broadcast stations simply because a political candidate disagrees with or dislikes content or coverage." On this topic, Carr's views are "light-years" away from Pai's, Schwartzman said. But Schwartzman also sees several of Carr's statements as being toothless. While Carr repeatedly points to the public interest standard for broadcasters, Schwartzman noted that the FCC must apply the public interest standard to all matters. "All he's saying is, 'I'm going to enforce the statute as it's existed since 1934.' It's meaningless, and it's therefore easy for him to say," Schwartzman said. Carr was wrong about NBC violating the Equal Time rule by putting Harris on Saturday Night Live, Schwartzman said. To comply with the rule, NBC only had to honor a request from Trump for "equal opportunities," he said. This is a routine process that broadcasters have known how to handle for a long time, he said. "The burden is on the opposing candidate to ask for it. Having a candidate on... is not only not a violation, it's actually encouraged because broadcasters are supposed to stimulate discussion of issues and ideas," he said. Carr's main purpose in making his Saturday Night Live complaint, in Schwartzman's opinion, was "to fulminate. It's just grandstanding. He was running for chair." Conservative group urges limits on FCC: Jeffrey Westling, a lawyer who is the director of technology and innovation policy at the conservative American Action Forum, is concerned about the FCC acting on Trump's calls to punish networks. 
After Trump called for ABC licenses to be revoked because of its handling of a debate, Westling wrote that "it is indeed possible for the federal government to revoke a broadcast license, even in response to what is essentially a political offense." Westling urged Congress to "limit or revoke the FCC's authority to impose content-based restrictions on broadcast television," specifically through the FCC rule on broadcast news distortion. Proving distortion is difficult, as it requires elements including "deliberate intent to distort the news" and "extrinsic evidence to the broadcast itself, such as that a reporter had received a bribe or that the report was instructed by management to distort the news," Westling wrote. The distortion also must be "initiated by the management of the station" and involve "a significant event." "While these standards are fairly stringent, the FCC must investigate complaints when a station seeks to renew its license, adding risk and uncertainty even if the station never truly violated the policy," Westling wrote. When contacted by Ars, Westling pointed out that the high standard for proving news distortion "only matter[s] if the administration's goal is to revoke a broadcaster's license. As much as I personally disagree with the rule, the courts have made clear that if a complaint has asserted the necessary elements, the Commission must thoroughly review it when considering a license transfer or renewal." The FCC "review is costly, and adds uncertainty for the broadcaster that quite literally relies on the license to operate," Westling said. "As a result, it is possible that even a threat from the president could influence how a broadcaster chooses to air the news, knowing that news distortion review could be in its future." Westling also said it's possible "that the FCC's use of the news distortion rule to deny a transfer or renewal of a license could be approved by the courts. 
The actual bounds of the rule are not well tested, and theoretically, a sympathetic court could be favorable to more loose enforcement of the rule."

Carr, who described how he would run the FCC in a chapter for the conservative Heritage Foundation's Project 2025, also wants the agency to crack down on social media websites for alleged anti-conservative bias. He has said he wants to "smash" a "censorship cartel" that he claims includes social media platforms, government officials, advertising and marketing agencies, and fact-checkers.

Other factors might stop Carr's bluster

When it comes to broadcasting, Schwartzman said there are several reasons to think Carr's statements are mostly bluster that won't result in major consequences for TV stations.

Broadcasters have a lot of political power that's wielded through the National Association of Broadcasters and relationships with members of Congress. Broadcasting, despite being less influential than it used to be, "is still among the most powerful industries in Congress and in the country... there is not a member of Congress alive who doesn't know the general manager of every TV station in their district," Schwartzman said.

The FCC taking action against left-leaning broadcasters could lead to similar actions against conservative broadcasters during future administrations. Schwartzman questioned whether Carr actually wants "to set a precedent that's going to put Fox in jeopardy the next time there's a Democrat in the FCC."

Another factor that could constrain Carr is how recent Supreme Court rulings limit the power of federal agencies. The FCC's other Republican member, Nathan Simington, has vowed to vote against any fine imposed by the commission until its legal powers are clear.

"Under new and controlling Supreme Court precedent, the Commission's authority to assess monetary forfeitures as it traditionally has done is unclear," Simington said in August.
"Until the Commission formally determines the bounds of its enforcement authority under this new precedent, I am obligated to dissent from any decision purporting to impose a monetary forfeiture. I call on the Commission to open a Notice of Inquiry to determine the new constitutional contours of Commission enforcement authority."The Supreme Court's June 2024 ruling in Securities and Exchange Commission v. Jarkesy held that "when the SEC seeks civil penalties against a defendant for securities fraud, the Seventh Amendment entitles the defendant to a jury trial." This ruling could impact the ability of other agencies to issue fines.Besides all of those reasons, Schwartzman offered another potential problem for Carr's plansthe incoming chair's post-FCC employment prospects, particularly if Carr wants to go back to practicing law. Before becoming an FCC commissioner, Carr was the agency's general counsel."He's not going to have a career as a communications lawyer in private practice after he's on the FCC if he starts saying that broadcasters don't have First Amendment rights," Schwartzman said.Jon BrodkinSenior IT ReporterJon BrodkinSenior IT Reporter Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry. 95 Comments
  • WWW.STATESMAN.COM
    Tesla tops list of brands with highest fatal accident rate in new study
NHTSA data shows five of the most fatal car brands. See where Tesla lands.

Marley Malenfant, USA TODAY NETWORK

Thinking of purchasing a new car? Many consider the price, the color, the model and the gas mileage when looking for a vehicle. But safety should be the No. 1 priority.

According to an analysis from iSeeCars, the five most dangerous brands you could purchase today are Tesla, Kia, Buick, Dodge and Hyundai.

Despite their driver-assist technology, Tesla's Model Y and Model S are known to be dangerous on the road, according to iSeeCars' data.

Karl Brauer, iSeeCars executive analyst, said some of the blame falls on distracted drivers.

"New cars are safer than they've ever been," he said. "Between advanced chassis design, driver assist technology, and an array of airbags surrounding the driver, today's car models provide excellent occupant protection. But these safety features are being countered by distracted driving and higher rates of speed, leading to rising accident and death rates in recent years."

Here is the top five list of most dangerous car brands.

The five most dangerous car brands

According to their methodology, iSeeCars analyzed fatality data from the National Highway Traffic Safety Administration's Fatality Analysis Reporting System (FARS) for model year 2018-2022 cars in crashes that resulted in at least one occupant fatality, to identify the most dangerous vehicles on U.S.
roads today.

Fatal accident rate, in fatal crashes per billion vehicle miles:
1. Tesla: 5.6
2. Kia: 5.5
3. Buick: 4.8
4. Dodge: 4.4
5. Hyundai: 3.9

Tesla's past recalls, other mishaps

This year, Tesla had several recalls on its models for parts and labor issues.

In May, Tesla recalled more than 125,000 vehicles over concerns that a malfunction with the vehicles' seat belt warning system could increase the chance of injury in a crash.

The recall applied to:
- Model S cars made between 2012 and 2024
- Model X vehicles made between 2015 and 2024
- Model 3s made between 2017 and 2023
- 2020-2023 Model Y vehicles

There were also recalls for nearly 4,000 of its Cybertrucks after regulators discovered the accelerator pedal could get stuck as a result of a manufacturing error. According to NHTSA's defect notice, the Tesla Cybertruck had an issue with its accelerator pedal, which could become stuck and cause unintended acceleration.
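The rate figures above are a simple normalization: fatal crashes divided by billions of vehicle miles travelled. A minimal sketch of that calculation, where the function name and the sample inputs are illustrative rather than iSeeCars' actual raw data:

```python
def fatal_accident_rate(fatal_crashes: int, vehicle_miles: float) -> float:
    """Fatal crashes per billion vehicle miles travelled."""
    return fatal_crashes / (vehicle_miles / 1e9)

# Hypothetical example: 560 fatal crashes over 100 billion miles
# works out to a rate of 5.6 per billion vehicle miles.
print(fatal_accident_rate(560, 100e9))
```

Normalizing by miles travelled, rather than counting raw fatalities, is what lets a lower-volume brand rank above brands that simply have more cars on the road.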
  • WWW.BBC.COM
    Two arrested after 'hazardous drone operation' near Boston airport
Two people have been arrested after allegedly conducting a "hazardous drone operation" near the airspace of Boston's main airport, police in the US city said.

Robert Duffy, 42, and Jeremy Folcik, 32, were arrested on Long Island, part of the Boston Harbor Islands, on Saturday night.

They were charged with trespassing, and police said they may face further counts and fines over the drones, which flew "dangerously close" to Logan International Airport.

Their arrests follow a series of drone sightings across the US north-east in recent weeks. Police have given no indication that the sightings are connected to these arrests.

Police said the incident in Boston occurred at 16:30 local time (21:30 GMT) on Saturday, when a police officer detected a drone operating "dangerously close" to Logan International Airport.

Police said they identified the drone's location and tracked the operators' position to a decommissioned health campus on Long Island. Because of the drone's proximity to an airport, FBI counter-terrorism agents helped the investigation.

When officers arrived on the scene, police said, three people attempted to flee, two of whom - Mr Duffy and Mr Folcik - were apprehended. A drone was discovered in a backpack carried by Mr Duffy, police said.

The third suspect is believed to have fled the island in a small vessel and has not so far been found.

It was not immediately clear whether Mr Duffy and Mr Folcik had legal representation. Police said they were yet to be arraigned.
US government officials have been seeking to reassure residents of the north-east that no national or public security threats have been identified in the hundreds of drone sightings.

Flying objects have been reported in states including New York, Pennsylvania, Maryland, Connecticut and Massachusetts, but most of the sightings have been in New Jersey.

Homeland Security Secretary Alejandro Mayorkas told ABC News on Sunday the federal government was working in "close co-ordination" with state and local authorities on the issue.

He said it was "critical" they be given the ability to counter drone activity under federal supervision.

Mayorkas said the rise in drone sightings could be down to a change in federal law last year that allows the aircraft to be flown at night.

"That may be one of the reasons why now people are seeing more drones than they did before, especially from dawn to dusk," he said.

He also said he knew of "no foreign involvement" in drone sightings around the US north-east.

New York Governor Kathy Hochul has called for Congress to grant states more powers to tackle the drones, which forced runways at Stewart Airfield in the state to shut down for about an hour on Friday night. She said on Sunday that federal officials were sending a drone detection system to New York.

Senate Majority Leader Chuck Schumer has requested that the technology also be sent to New Jersey.
  • WWW.CBSNEWS.COM
    AI was used to turn a teen's photo into a nude image. Now the teen is fighting for change to protect other kids.
60 Minutes Overtime

Francesca Mani was 14 years old when her name was called over the loudspeaker at Westfield High School in New Jersey. She headed to the principal's office, where she learned that a picture of her had been turned into a nude image using artificial intelligence.

Mani had never heard of a "nudify" website or app before. When she left the principal's office, she said that she saw a group of boys laughing at a group of girls who were crying.

"And that's when I realized I should stop crying and that I should be mad, because this is unacceptable," Mani said.

What happened to Francesca Mani

Mani was sitting in her high school history class last October when she heard a rumor that some boys had naked photos of female classmates. She soon learned that she and several other girls at Westfield High School had been targeted.

According to a lawsuit later filed by one of the other victims through her parents, a boy at the school had uploaded photos from Instagram to a site called Clothoff, which is one of the most popular "nudify" websites. 60 Minutes has decided to name the site to raise awareness of its potential dangers. There were more than 3 million visits to Clothoff last month alone, according to Graphika, a company that analyzes social networks. The website offers to "nudify" both males and females, but female nudes are far more popular.

Visitors to the website can upload a photo, or get a free demonstration, in which an image of a woman appears with clothes on, then appears naked just seconds later.
The results look very real.

Clothoff users are told they need to be 18 or older to enter the site and that they can't use other people's photos without permission. The website claims "processing of minors is impossible," but no one at the company responded when 60 Minutes emailed asking for evidence of that, in addition to many other questions.

Mani never saw what had been done to her photo, but according to that same lawsuit, at least one student's AI nude was shared on Snapchat and seen by several kids at school.

The way Mani found out about her photo made it even worse, she said. She recalled how she and the other girls were called by name to the principal's office over the school's public address system.

"I feel like that was a major violation of our privacy while, like, the bad actors were taken out of their classes privately," she said.

That afternoon, Westfield's principal sent an email to parents informing them "some of our students had used artificial intelligence to create pornographic images from original photos." The principal also said the school was investigating and "at this time we believe that any created images have been deleted and are not being circulated."

Fake images, real harm

Mani's mother Dorota, who's also an educator, was not convinced. She worries that nothing that has been shared online is ever truly deleted.

"Who printed? Who screenshotted? Who downloaded? You can't really wipe it out," she said.

The school district would not confirm any details about the photos, the students involved or disciplinary action to 60 Minutes.
In a statement, the superintendent said the district revised its Harassment, Intimidation and Bullying policy to incorporate AI, something the Manis say they spent months urging school officials to do.

Francesca Mani feels the girls who were targeted paid a bigger price than the boy or boys who created the images.

"Because they just have to live with knowing that maybe an image is floating, their image is floating around the internet," she said. "And they just have to deal with what the boys did."

Dorota Mani said that she filed a police report, but no charges have been brought.

Yiota Souras is chief legal officer at the National Center for Missing and Exploited Children. Her organization works with tech companies to flag inappropriate content on their sites. She says that while the images created on AI "nudify" sites are fake, the damage they can cause to victims is real.

"They'll suffer, you know, mental health distress and reputational harm," Souras said. "In a school setting it's really amplified, because one of their peers has created this imagery. So there's a loss of confidence. A loss of trust."

Fighting for change

60 Minutes found nearly 30 similar cases in schools in the U.S. over the last 20 months, along with additional cases around the world.

In at least three of those cases, Snapchat was reportedly used to circulate AI nudes. One parent told 60 Minutes it took over eight months to get the accounts that had shared the images taken down. According to Souras, a lack of responsiveness to victims is a recurring problem the National Center for Missing and Exploited Children sees across tech companies.

"That isn't the way it should be. Right? I mean, a parent whose child has exploitative or child pornography images online should not have to rely on reaching out to a third party, and having them call the tech company.
The tech company should be assuming responsibility immediately to remove that content," she said.

60 Minutes asked Snapchat about the parent who said the company didn't respond to her for eight months. A Snapchat spokesperson said they have been unable to locate her request and said, in part: "We have efficient mechanisms for reporting this kind of content." The spokesperson went on to say that Snapchat has a "zero-tolerance policy for such content" and "...act[s] quickly to address it once reported."

The Department of Justice says AI nudes of minors are illegal under federal child pornography laws if they depict what's defined as "sexually explicit conduct." But Souras is concerned some images created by "nudify" sites may not meet that definition.

In the year since Francesca Mani found out she was targeted, she and her mom, Dorota, have encouraged schools to implement policies around AI. They've also worked with members of Congress to try to pass a number of federal bills. One of those bills, the Take It Down Act, which is co-sponsored by Senators Ted Cruz and Amy Klobuchar, made it through the Senate earlier this month and is now awaiting a vote in the House. It would create criminal penalties for sharing AI nudes and would require social media companies to take photos down within 48 hours of getting a request.

Anderson Cooper, anchor of CNN's "Anderson Cooper 360," has contributed to 60 Minutes since 2006. His exceptional reporting on big news events has earned Cooper a reputation as one of television's preeminent newsmen.
  • WWW.THEGUARDIAN.COM
'I received a first but it felt tainted and undeserved': inside the university AI cheating crisis
The email arrived out of the blue: it was the university code of conduct team. Albert, a 19-year-old undergraduate English student, scanned the content, stunned. He had been accused of using artificial intelligence to complete a piece of assessed work. If he did not attend a hearing to address the claims made by his professor, or respond to the email, he would receive an automatic fail on the module. The problem was, he hadn't cheated.

Albert, who asked to remain anonymous, was distraught. It might not have been his best effort, but he'd worked hard on the essay. He certainly didn't use AI to write it: "And to be accused of it because of signpost phrases, such as 'in addition to' and 'in contrast', felt very demeaning." The consequences of the accusation rattled around his mind (if he failed this module, he might have to retake the entire year) but having to defend himself cut deep. "It felt like a slap in the face of my hard work for the entire module over one poorly written essay," he says. "I had studied hard and was generally a straight-A student. One bad essay suddenly meant I used AI?"

At the hearing, Albert took a seat in front of three members of staff: two from his department and one who was there to observe. They told him the hearing was being recorded and asked for his name, student ID and course code. Then he was grilled for half an hour about his assignment. It had been months since he'd submitted the essay and he felt conscious he couldn't answer the questions as confidently as he'd like, but he tried his best. Had he, they asked, ever created an account with ChatGPT? How about Grammarly? Albert didn't feel able to defend himself until the end, by which point he was on the verge of tears. "I even admitted to them that I knew the essay wasn't good, but I didn't use AI," he says.

Four years have passed since GPT-3 was released into the world.
It has shaken industries from film to media to medicine, and education is no different. Created by San Francisco-based OpenAI, it makes it possible for almost anyone to produce passable written work in seconds based on a few basic inputs. Many such tools are now available, such as Google's Gemini, Microsoft Copilot, Claude and Perplexity. These large language models absorb and process vast datasets, much like a human brain, in order to generate new material. For students, it's as close as you can get to a fairy godmother for a last-minute essay deadline. For educators, however, it's a nightmare.

More than half of students now use generative AI to help with their assessments, according to a survey by the Higher Education Policy Institute, and about 5% of students admit using it to cheat. In November, Times Higher Education reported that, despite patchy record keeping, cases appeared to be soaring at Russell Group universities, some of which had reported a 15-fold increase in cheating. But confusion over how these tools should be used, if at all, has sown suspicion in institutions designed to be built on trust. Some believe that AI stands to revolutionise how people learn for the better, like a 24/7 personal tutor; Professor HAL, if you like. To others, it is an existential threat to the entire system of learning, "a plague upon education" as one op-ed for Inside Higher Ed put it, that stands to demolish the process of academic inquiry.

In the struggle to stuff the genie back in the bottle, universities have become locked in an escalating technological arms race, even turning to AI themselves to try to catch misconduct. Tutors are turning on students, students on each other, and hardworking learners are being caught by the flak. It's left many feeling pessimistic about the future of higher education. But is ChatGPT really the problem universities need to grapple with? Or is it something deeper?

Turning the page: education has been shaken by the arrival of GPT-3.
Illustration: Carl Godfrey/The Observer

Albert is not the only student to find himself wrongly accused of using AI. For many years, the main tool in the academy's anti-cheating arsenal has been software, such as Turnitin, which scans submissions for signs of plagiarism. In 2023, Turnitin launched a new AI detection tool that assesses the proportion of the text that is likely to have been written by AI.

Amid the rush to counteract a surge in AI-written assignments, it seemed like a magic bullet. Since then, Turnitin has processed more than 130m papers and says it has flagged 3.5m as being 80% AI-written. But it is also not 100% reliable; there have been widely reported cases of false positives and some universities have chosen to opt out. Turnitin says the rate of error is below 1%, but considering the size of the student population, it is no wonder that many have found themselves in the firing line.

There is also evidence that suggests AI detection tools disadvantage certain demographics. One study at Stanford found that a number of AI detectors have a bias towards non-English speakers, flagging their work 61% of the time, as opposed to 5% of native English speakers (Turnitin was not part of this particular study). Last month, Bloomberg Businessweek reported the case of a student with autism spectrum disorder whose work had been falsely flagged by a detection tool as being written by AI. She described being accused of cheating as "like a punch in the gut". Neurodivergent students, as well as those who write using simpler language and syntax, appear to be disproportionately affected by these systems.

Dr Mike Perkins, a generative AI researcher at British University Vietnam, believes there are significant limitations to AI detection software. "All the research says time and time again that these tools are unreliable," he told me. "And they are very easily tricked." His own investigation found that AI detectors could detect AI text with an accuracy of 39.5%.
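The scale is what turns a small error rate into a large number of accusations: applying even a sub-1% false-positive rate to the 130m papers Turnitin says it has processed implies over a million wrongly flagged submissions. A rough sketch of that base-rate arithmetic, with the function name illustrative and the figures taken from the reporting above:

```python
def expected_false_flags(papers: int, error_rate: float) -> int:
    """Expected number of papers wrongly flagged, given a per-paper error rate."""
    return round(papers * error_rate)

# 130 million papers at a 1% error rate: roughly 1.3 million misclassifications,
# each one a potential misconduct hearing for a student who did nothing wrong.
print(expected_false_flags(130_000_000, 0.01))
```

This is why a detector that sounds accurate in percentage terms can still generate false accusations at a scale universities struggle to handle fairly.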
Following simple evasion techniques, such as minor manipulation to the text, the accuracy dropped to just 22.1%.

As Perkins points out, those who do decide to cheat don't simply cut and paste text from ChatGPT; they edit it, or mould it into their own work. There are also AI "humanisers", such as CopyGenius and StealthGPT, the latter of which boasts that it can produce undetectable content and claims to have helped half a million students produce nearly 5m papers. "The only students who don't do that are really struggling, or they are not willing or able, to pay for the most advanced AI tools, like ChatGPT 4.0 or Gemini 1.5," says Perkins. "And who you end up catching are the students who are most at risk of their academic careers being damaged anyway."

If anyone knows what that feels like, it's Emma. A year ago, she was expecting to receive the result of her coursework. Instead, an email pinged into her inbox informing her that she had scored a zero. "Concerns over plagiarism," it read. Emma, a single parent studying for an arts degree, had been struggling that year. Studies, childcare, household chores; she was also squeezing in time to apply for part-time jobs to keep herself financially afloat. Amid all this, with deadlines stacking up, she'd been slowly lured in by the siren call of ChatGPT. At the time, she felt relief: an assignment, complete. Now, she felt petrified.

Emma, who also asked to remain anonymous, hadn't given generative AI much thought before she used it. She hadn't had time to. But there was a steady hum of chatter about it on her social media, and when a bout of sickness led her to fall behind on her studies, and her mental capacity had run dry, she decided to take a closer look at what it could do. Logging on to ChatGPT, she could fast-track the last parts of the analysis, drop them into her essay and move on.
"I knew what I was doing was wrong, but that feeling was completely overpowered by exhaustion," she says. "I had nothing left to give, but I had to submit a completed piece of work." When her tutor pulled up a report on their screen from Turnitin, showing an entire section had been flagged as having been written by AI, there was nothing Emma could think to do but confess.

Her case was referred to a misconduct panel, but in the end she was lucky. Her mitigating circumstances seemed to be taken into account and, though it surprised her, particularly since she had admitted to using ChatGPT, the panel decided that the specific claim of plagiarism could not be substantiated.

It was a relief, but mostly it was humiliating. "I received a first for that year," says Emma, "but it felt tainted and undeserved." The whole experience shook her; her degree, and future, had hung in the balance. But she believes that universities could be more aware of the pressures that students are under, and better equip them to navigate these unfamiliar tools. "There are many reasons why students use AI," she says. "And I expect that some of them aren't aware that the manner in which they utilise it is unacceptable."

Cheating or not, an atmosphere of suspicion has cast a shadow over campuses. One student told me they had been pulled into a misconduct hearing, despite having a low score on Turnitin's AI detection tool, after a tutor was convinced the student had used ChatGPT, because some of his points had been structured in a list, which the chatbot has a tendency to do. Although he was eventually cleared, the experience "messed with my mental health", he says. His confidence was severely knocked. "I wasn't even using spellcheckers to help edit my work because I was so scared."

Many academics seem to believe that you can always tell if an assignment was written by an AI, that they can pick up on the stylistic traits associated with these tools. Evidence is mounting to suggest they may be overestimating their ability.
Researchers at the University of Reading recently conducted a blind test in which ChatGPT-written answers were submitted through the university's own examination system: 94% of the AI submissions went undetected and received higher scores than those submitted by the humans.

Students are also turning on each other. David, an undergraduate student who also requested to remain anonymous, was working on a group project when one of his course mates sent over a suspiciously polished piece of work. The student, David explained, struggled with his English, "and that's not their fault, but the report was honestly the best I'd ever seen."

David ran the work through a couple of AI detectors that confirmed his suspicion, and he politely brought it up with the student. The student, of course, denied it. David didn't feel there was much more he could do, but he made sure to collect evidence of their chat messages. "So, if our coursework gets flagged, then I can say I did check. I know people who have spent hours working on this and it only takes one to ruin the whole thing."

David is by no means an AI naysayer. He has found it useful for revision, inputting study texts and asking ChatGPT to fire questions back for him to answer. But the endemic cheating all around him has been disheartening. "I've grown desensitised to it," he says. "Half the students in my class are giving presentations that are clearly not their own work. If I was to react at every instance of AI being used, I would have gone crazy at this point." Ultimately, David believes the students are only cheating themselves, but sometimes he wonders how this erosion of integrity will affect his own academic and professional life down the line. "What if I'm doing an MA, or in a job, and everyone got there just by cheating?"

What counts as cheating is determined, ultimately, by institutions and examiners. Many universities are already adapting their approach to assessment, penning AI-positive policies.
At Cambridge University, for example, appropriate use of generative AI includes using it for an overview of new concepts, as a collaborative coach, or for supporting time management. The university warns against over-reliance on these tools, which could limit a student's ability to develop critical thinking skills. Some lecturers I spoke to said they felt that this sort of approach was helpful, but others said it was capitulating. One conveyed frustration that her university didn't seem to be taking academic misconduct seriously any more; she had received a whispered warning that she was no longer to refer cases where AI was suspected to the central disciplinary board.

They all agreed that a shift to different forms of teaching and assessment (one-to-one tuition, viva voces and the like) would make it far harder for students to use AI to do the heavy lifting. "That's how we'd need to do it, if we're serious about authentically assessing students and not just churning them through a £9,000-a-year course hoping they don't complain," one lecturer at a redbrick university told me. But that would mean hiring staff, or reducing student numbers. The pressures on his department are such, he says, that even lecturers have admitted using ChatGPT to dash out seminar and tutorial plans. No wonder students are at it, too.

If anything, the AI cheating crisis has exposed how transactional the process of gaining a degree has become. Higher education is increasingly marketised; universities are cash-strapped, chasing customers at the expense of quality learning. Students, meanwhile, are labouring under financial pressures of their own, painfully aware that secure graduate careers are increasingly scarce. Just as the rise of essay mills coincided with the rapid expansion of higher education in the noughties, ChatGPT has struck at a time when a degree feels more devalued than ever.

The reasons why students cheat are complex.
Studies have pointed to factors such as a pressure to perform, poor time management, or simply ignorance. It can also be fuelled by the culture at a university, and cheating is certainly hastened when an institution is perceived to not be taking it seriously. But when it comes to tackling cheating, we often end up with the same answer: the staff-student relationship. This, wrote Dr Paula Miles in a recent paper on why students cheat, is vital, and it plays a powerful role in helping to reduce cases of academic misconduct. And right now, it seems that wherever human interactions are sparse, AI fills the gap.

Albert had to wait nervously for two months before he found out, thankfully, that he'd passed the module. It was a relief, though he couldn't find out if the essay in question had been marked down. By then, however, the damage had been done. He had already been feeling out of place at the university and was considering dropping out. The misconduct hearing tipped him into making a decision, and he decided to transfer to a different institution for his second year.

The experience, in many ways, was emblematic of his time at the university, he says. He feels frustrated that his professor hadn't spoken to him initially about the essay, and disheartened that there were so few opportunities for students to reach out for help and support while he was studying. When it comes to AI, he's agnostic: he reckons it's OK to use it for studying and notes, as long as it's not for submitted work. The bigger issue, he believes, is that higher education feels so impersonal. It would be better for universities "to stop thinking of students as numbers and more as real people," he says.

Some names have been changed.
  • ABCNEWS.GO.COM
    Feds are urged to deploy high-tech drone hunters to solve mystery behind sightings
Top New York political leaders are urging the federal government to deploy high-tech drone hunters to crack the mystery of who is behind the numerous sightings of what are believed to be unmanned flying objects that have been buzzing over communities in New York and New Jersey, even prompting authorities to shut down an airport over the weekend.

New York Sen. Chuck Schumer said Sunday that he's asking the U.S. Department of Homeland Security to immediately deploy special drone-detecting technology, which has been unclassified, to get to the bottom of what has been alarming and baffling residents in the region.

"If the technology exists for a drone to make it up into the sky, there certainly is the technology that can track the craft with precision and determine what the heck is going on," Schumer said during a news conference. "And that's what the Robin [radar system] does today."

"We're asking the DHS, the Department of Homeland Security, to deploy special detection systems like the Robin, which use not a linear line of sight, but 360-degree technology that has a much better chance of detecting these drones. And we're asking DHS to bring them to the New York, New Jersey area," he said.

He said the technology was initially used to detect birds and prevent them from flying into airplane engines. "Drone radar is based on the use of radio waves. The radio waves are sent out for the pulses, and that means it's detectable," Schumer said.
"The question is, why haven't the federal authorities detected them yet?"

Earlier Sunday, Homeland Security Secretary Alejandro Mayorkas said in an interview on ABC's "This Week" that the federal government is taking action to address the spate of drone sightings that have rattled the nerves of residents in New Jersey and New York.

"There's no question that people are seeing drones," Mayorkas told "This Week" anchor George Stephanopoulos. "I want to assure the American public that we in the federal government have deployed additional resources, personnel, technology to assist the New Jersey State Police in addressing the drone sightings."

Mayorkas said some of the sightings are drones while others have been manned aircraft commonly mistaken for drones. "I want to assure the American public that we are on it," he added, saying that he's calling on Congress to expand local and state authority to help address the issue.

Numerous sightings of alleged drones have been reported along the East Coast since mid-November, most of them in New Jersey. Witnesses have described seeing drones the size of compact cars lighting up the night sky and hovering over homes. There have also been sightings of what appeared to be several large drones clustered together flying near military installations and President-elect Donald Trump's golf course in Bedminster, New Jersey. The Federal Aviation Administration has imposed drone flight restrictions while authorities investigate.

Officials from several agencies on Saturday emphasized that the federal government's investigation into the drone sightings is ongoing. During a call with reporters, an FBI official said that of the nearly 5,000 tips the agency has received, fewer than 100 have generated credible leads for further investigation.
A DHS official said they're "confident that many of the reported drone sightings are, in fact, manned aircraft being misidentified as drones." The FBI official also said that when investigators overlaid the locations of the reported drone sightings, they found that "the density of reported sightings matches the approach pattern" of the New York area's busy airports, including Newark Liberty International Airport, John F. Kennedy International Airport and LaGuardia Airport. An FAA official said there have "without a doubt" been drones flying over New Jersey, pointing to the fact that there are nearly 1 million drones registered in the United States.

Officials at Stewart International Airport in New Windsor, New York, about 60 miles north of New York City, said they were forced to close their runways for an hour on Friday night after the FAA alerted them of a drone spotted in the area.

The Boston Police Department said Sunday that two men were arrested Saturday night after they allegedly flew a drone "dangerously close to Logan International Airport." A third suspect fled the scene in a boat and is being sought by police. The incident, according to police, began Saturday afternoon when a Boston police officer specializing in real-time crime surveillance detected the drone operating near Logan International Airport. Using monitoring technology, the officer was able to determine the drone's altitude, flight history and the operator's position on Long Island in Boston Harbor, where police found the suspects in a decommissioned health campus, authorities said. The suspects ran, but police managed to chase down two of them and continued to search for the third suspect on Sunday.

The two suspects, identified by the Boston Police Department as 42-year-old Robert Duffy and 32-year-old Jeremy Folcik, both of Massachusetts, were arrested on trespassing charges.

New York Gov. Kathy Hochul said Sunday that the federal government has agreed to deploy state-of-the-art drone detection systems to New York, though it was not immediately clear whether she and Schumer were speaking about the same technology. "In response to my calls for additional resources, our federal partners are deploying a state-of-the-art drone detection system to New York state," Hochul said. "This system will support state and federal law enforcement in their investigations. We are grateful to the Biden administration for their support, but ultimately we need further assistance from Congress."

Hochul said she is pressing Congress to pass the Counter-UAS Authority Security, Safety, and Reauthorization Act, which would give "New York and our peers the authority and resources required to respond to circumstances like we face today."

During a House Homeland Security joint subcommittee hearing on Tuesday, officials from the Department of Justice, the FBI and Customs and Border Protection told lawmakers that their current legal authorities are insufficient to deal with drones. Schumer said he would co-sponsor federal legislation to give the FAA and local agencies more oversight of drones and expand their methods of detection.

Last week, Schumer, New York Sen. Kirsten Gillibrand, and New Jersey Sens. Cory Booker and Andy Kim sent a letter to the heads of the FBI, FAA and DHS requesting a briefing on the drone sightings. "We write with urgent concern regarding the unmanned aerial system (UAS) activity that has affected communities across New York and New Jersey in recent days," the letter stated.

ABC News' Michelle Stoddart contributed to this report.
  • WWW.TECHNOLOGYREVIEW.COM
    How Silicon Valley is disrupting democracy
The internet loves a good neologism, especially if it can capture a purported vibe shift or explain a new trend. In 2013, the columnist Adrian Wooldridge coined a word that eventually did both. Writing for the Economist, he warned of the coming "techlash," a revolt against Silicon Valley's rich and powerful fueled by the public's growing realization that these "sovereigns of cyberspace" weren't the benevolent bright-future bringers they claimed to be.

While Wooldridge didn't say precisely when this techlash would arrive, it's clear today that a dramatic shift in public opinion toward Big Tech and its leaders did in fact happen – and is arguably still happening. Say what you will about the legions of Elon Musk acolytes on X, but if an industry and its executives can bring together the likes of Elizabeth Warren and Lindsey Graham in shared condemnation, it's definitely not winning many popularity contests.

To be clear, there have always been critics of Silicon Valley's very real excesses and abuses. But for the better part of the last two decades, many of those voices of dissent were either written off as hopeless Luddites and haters of progress or drowned out by a louder and far more numerous group of techno-optimists. Today, those same critics (along with many new ones) have entered the fray once more, rearmed with popular Substacks, media columns, and – increasingly – book deals.

Two of the more recent additions to the flourishing techlash genre – Rob Lalka's The Venture Alchemists: How Big Tech Turned Profits into Power and Marietje Schaake's The Tech Coup: How to Save Democracy from Silicon Valley – serve as excellent reminders of why it started in the first place. Together, the books chronicle the rise of an industry that is increasingly using its unprecedented wealth and power to undermine democracy, and they outline what we can do to start taking some of that power back.
Lalka is a business professor at Tulane University, and The Venture Alchemists focuses on how a small group of entrepreneurs managed to transmute a handful of novel ideas and big bets into unprecedented wealth and influence. While the names of these demigods of disruption will likely be familiar to anyone with an internet connection and a passing interest in Silicon Valley, Lalka also begins his book with a page featuring their nine (mostly) young, (mostly) smiling faces. There are photos of the famous founders Mark Zuckerberg, Larry Page, and Sergey Brin; the VC funders Keith Rabois, Peter Thiel, and David Sacks; and a more motley trio made up of the disgraced former Uber CEO Travis Kalanick, the ardent eugenicist and reputed "father of Silicon Valley" Bill Shockley (who, it should be noted, died in 1989), and a former VC and the future vice president of the United States, JD Vance.

To his credit, Lalka takes this medley of tech titans and uses their origin stories and interrelationships to explain how the so-called Silicon Valley mindset (mind virus?) became not just a fixture in California's Santa Clara County but also the preeminent way of thinking about success and innovation across America. This approach to doing business, usually cloaked in a barrage of cringey innovation-speak – "disrupt or be disrupted," "move fast and break things," "better to ask for forgiveness than permission" – can often mask a darker, more authoritarian ethos, according to Lalka.

One of the nine entrepreneurs in the book, Peter Thiel, has written that "I no longer believe that freedom and democracy are compatible" and that "competition [in business] is for losers." Many of the others think that all technological progress is inherently good and should be pursued at any cost and for its own sake. A few also believe that privacy is an antiquated concept – even an illusion – and that their companies should be free to hoard and profit off our personal data.
Most of all, though, Lalka argues, these men believe that their newfound power should be unconstrained by governments, regulators, or anyone else who might have the gall to impose some limitations.

Where exactly did these beliefs come from? Lalka points to people like the late free-market economist Milton Friedman, who famously asserted that a company's only social responsibility is to increase profits, as well as to Ayn Rand, the author, philosopher, and hero to misunderstood teenage boys everywhere who tried to turn selfishness into a virtue.

The Venture Alchemists: How Big Tech Turned Profits into Power, Rob Lalka (Columbia Business School Publishing, 2024)

It's a somewhat reductive and not altogether original explanation of Silicon Valley's libertarian inclinations. What ultimately matters, though, is that many of these values were subsequently encoded into the DNA of the companies these men founded and funded – companies that today shape how we communicate with one another, how we share and consume news, and even how we think about our place in the world.

The Venture Alchemists is strongest when it's describing the early-stage antics and on-campus controversies that shaped these young entrepreneurs or, in many cases, simply reveal who they've always been. Lalka is a thorough and tenacious researcher, as the book's 135 pages of endnotes suggest. And while nearly all these stories have been told before in other books and articles, he still manages to provide new perspectives and insights from sources like college newspapers and leaked documents.

One thing the book is particularly effective at is deflating the myth that these entrepreneurs were somehow gifted seers of (and investors in) a future the rest of us simply couldn't comprehend or predict. Sure, someone like Thiel made what turned out to be a savvy investment in Facebook early on, but he also made some very costly mistakes with that stake.
As Lalka points out, Thiel's Founders Fund dumped tens of millions of shares shortly after Facebook went public, and Thiel himself went from owning 2.5% of the company in 2012 to 0.000004% less than a decade later (around the same time Facebook hit its trillion-dollar valuation). Throw in his objectively terrible wagers in 2008, 2009, and beyond, when he effectively shorted what turned out to be one of the longest bull markets in world history, and you get the impression he's less oracle and more ideologue who happened to take some big risks that paid off.

One of Lalka's favorite mantras throughout The Venture Alchemists is that "words matter." Indeed, he uses a lot of these entrepreneurs' own words to expose their hypocrisy, bullying, juvenile contrarianism, casual racism, and – yes – outright greed and self-interest. It is not a flattering picture, to say the least.

Unfortunately, instead of simply letting those words and deeds speak for themselves, Lalka often feels the need to interject with his own, frequently enjoining readers against finger-pointing or judging these men too harshly even after he's chronicled their many transgressions. Whether this is done to try to convey some sense of objectivity or simply to remind readers that these entrepreneurs are complex and complicated men making difficult decisions, it doesn't work. At all.

For one thing, Lalka clearly has his own strong opinions about the behavior of these entrepreneurs – opinions he doesn't try to disguise. At one point in the book he suggests that Kalanick's alpha-male, dominance-at-any-cost approach to running Uber is "almost, but not quite" like rape, which is maybe not the comparison you'd make if you wanted to seem like an arbiter of impartiality. And if he truly wants readers to come to a different conclusion about these men, he certainly doesn't provide many reasons for doing so. Simply telling us to "judge less, and discern more" seems worse than a cop-out.
It comes across as almost, but not quite, like victim-blaming – as if we're somehow just as culpable as they are for using their platforms and buying into their self-mythologizing.

Equally frustrating is the crescendo of empty platitudes that ends the book. "The technologies of the future must be pursued thoughtfully, ethically, and cautiously," Lalka says after spending 313 pages showing readers how these entrepreneurs have willfully ignored all three adverbs. What they've built instead are massive wealth-creation machines that divide, distract, and spy on us. Maybe it's just me, but that kind of behavior seems ripe not only for judgment, but also for action.

So what exactly do you do with a group of men seemingly incapable of serious self-reflection – men who believe unequivocally in their own greatness and who are comfortable making decisions on behalf of hundreds of millions of people who did not elect them, and who do not necessarily share their values? You regulate them, of course. Or at least you regulate the companies they run and fund.

In Marietje Schaake's The Tech Coup, readers are presented with a road map for how such regulation might take shape, along with an eye-opening account of just how much power has already been ceded to these corporations over the past 20 years. There are companies like NSO Group, whose powerful Pegasus spyware tool has been sold to autocrats, who have in turn used it to crack down on dissent and monitor their critics. Billionaires are now effectively making national security decisions on behalf of the United States and using their social media companies to push right-wing agitprop and conspiracy theories, as Musk does with his Starlink satellites and X. Ride-sharing companies use their own apps as propaganda tools and funnel hundreds of millions of dollars into ballot initiatives to undo laws they don't like.
The list goes on and on. According to Schaake, this outsize and largely unaccountable power is changing the fundamental ways that democracy works in the United States. "In many ways, Silicon Valley has become the antithesis of what its early pioneers set out to be: from dismissing government to literally taking on equivalent functions; from lauding freedom of speech to becoming curators and speech regulators; and from criticizing government overreach and abuse to accelerating it through spyware tools and opaque algorithms," she writes.

Schaake, who's a former member of the European Parliament and the current international policy director at Stanford University's Cyber Policy Center, is in many ways the perfect chronicler of Big Tech's power grab. Beyond her clear expertise in the realms of governance and technology, she's also Dutch, which makes her immune to the distinctly American disease that seems to equate extreme wealth, and the power that comes with it, with virtue and intelligence. This resistance to the various reality-distortion fields emanating from Silicon Valley plays a pivotal role in her ability to see through the many justifications and self-serving solutions that come from tech leaders themselves. Schaake understands, for instance, that when someone like OpenAI's Sam Altman gets in front of Congress and begs for AI regulation, what he's really doing is asking Congress to create a kind of regulatory moat between his company and any other startups that might threaten it, not acting out of some genuine desire for accountability or governmental guardrails.

The Tech Coup: How to Save Democracy from Silicon Valley, Marietje Schaake (Princeton University Press, 2024)

Like Shoshana Zuboff, the author of The Age of Surveillance Capitalism, Schaake believes that "the digital should live within democracy's house" – that is, technologies should be developed within the framework of democracy, not the other way around.
To accomplish this realignment, she offers a range of solutions, from banning what she sees as clearly antidemocratic technologies (like face-recognition software and other spyware tools) to creating independent teams of expert advisors to members of Congress (who are often clearly out of their depth when attempting to understand technologies and business models).

Predictably, all this renewed interest in regulation has inspired its own backlash in recent years – a kind of "tech revanchism," to borrow a phrase from the journalist James Hennessy. In addition to familiar attacks, such as trying to paint supporters of the techlash as somehow being antitechnology (they're not), companies are also spending massive amounts of money to bolster their lobbying efforts. Some venture capitalists, like LinkedIn cofounder Reid Hoffman, who made big donations to the Kamala Harris presidential campaign, wanted to evict Federal Trade Commission chair Lina Khan, claiming that regulation is killing innovation (it isn't) and removing the incentives to start a company (it's not). And then of course there's Musk, who now seems to be in a league of his own when it comes to how much influence he may exert over Donald Trump and the government that his companies have valuable contracts with.

What all these claims of victimization and subsequent efforts to buy their way out of regulatory oversight miss is that there's actually a vast and fertile middle ground between simple techno-optimism and techno-skepticism. As the New Yorker contributor Cal Newport and others have noted, it's entirely possible to support innovations that can significantly improve our lives without accepting that every popular invention is good or inevitable.

Regulating Big Tech will be a crucial part of leveling the playing field and ensuring that the basic duties of a democracy can be fulfilled. But as both Lalka and Schaake suggest, another battle may prove even more difficult and contentious.
This one involves undoing the flawed logic and cynical, self-serving philosophies that have led us to the point where we are now. What if we admitted that constant bacchanals of disruption are in fact not all that good for our planet or our brains? What if, instead of "creative destruction," we started fetishizing stability, and in lieu of "putting dents in the universe," we refocused our efforts on fixing what's already broken? What if – and hear me out – we admitted that technology might not be the solution to every problem we face as a society, and that while innovation and technological change can undoubtedly yield societal benefits, they don't have to be the only measures of economic success and quality of life?

When ideas like these start to sound less like radical concepts and more like common sense, we'll know the techlash has finally achieved something truly revolutionary.

Bryan Gardiner is a writer based in Oakland, California.
  • WWW.THEWRAP.COM
    OpenAI Whistleblower Suchir Balajis Death Ruled a Suicide
Suchir Balaji, a 26-year-old former OpenAI researcher who backed claims of copyright infringement by the technology, was found dead on Nov. 26 in his San Francisco apartment by police making a wellness check. News of his death was not made public until now, the Mercury News of San Jose, California, and other outlets reported. The San Francisco medical examiner has ruled that Balaji's death was self-inflicted and that there was no evidence of foul play, the Mercury News reported.

Balaji publicly accused OpenAI of violating U.S. copyright law with its ChatGPT app. He was the subject of an October New York Times profile that unveiled his claims of fair use violations regularly committed by ChatGPT. The Times filed a letter on Nov. 18 in federal court that named Balaji as a person with "unique and relevant documents" that would be used in litigation against OpenAI. The lawsuit claims OpenAI and its partner, Microsoft, are using the work of reporters and editors without authorization.

Balaji was a researcher for OpenAI for four years after joining in 2020.

"We are devastated to learn of this incredibly sad news today and our hearts go out to Suchir's loved ones during this difficult time," OpenAI said in a statement to CNBC.

Balaji made a lengthy post to X in October detailing his concerns.
  • SD13.SENATE.CA.GOV
    Landmark Law Prohibits Health Insurance Companies from Using AI to Deny Healthcare Coverage
December 9, 2024

The Physicians Make Decisions Act Ensures Health Care Decisions Are Made by Medical Professionals, Not Algorithms

Sacramento, CA – As 2025 approaches, Californians can look forward to strengthened patient protections under the new Physicians Make Decisions Act (SB 1120), authored by Senator Josh Becker (D-Menlo Park). This groundbreaking law ensures that decisions about medical treatments are made by licensed health care providers, not solely determined by artificial intelligence (AI) algorithms used by health insurers.

"Artificial intelligence has immense potential to enhance healthcare delivery, but it should never replace the expertise and judgment of physicians," said Senator Becker. "An algorithm cannot fully understand a patient's unique medical history or needs, and its misuse can lead to devastating consequences. SB 1120 ensures that human oversight remains at the heart of healthcare decisions, safeguarding Californians' access to the quality care they deserve."

Ensuring Human Oversight in Healthcare Decisions

In recent years, insurers have increasingly turned to AI to process claims and prior authorization requests. While these tools can improve efficiency, they also raise concerns about inaccuracies and bias in healthcare decision-making. Errors in algorithm-driven denials of care have, in some cases, resulted in severe health outcomes or even loss of life.

Under SB 1120, any denial, delay, or modification of care based on medical necessity must be reviewed and decided by a licensed physician or qualified health care provider with expertise in the specific clinical issues at hand.
The law also establishes fair and equitable standards for companies using AI in their utilization review processes, preventing improper or unethical practices.

California Leads the Nation in AI Regulation for Healthcare

Sponsored by the CMA, which represents 50,000 physicians statewide, SB 1120 sets a national precedent for ensuring AI in healthcare is used responsibly. The law reaffirms California's commitment to equitable, patient-centered care while addressing legitimate concerns surrounding the expanding role of technology in healthcare. Other states are following California's lead in implementing laws that protect patients from having AI alone determine their health care decisions.

The Physicians Make Decisions Act will officially go into effect on January 1, 2025.
  • WWW.THEGUARDIAN.COM
    She didnt get an apartment because of an AI-generated score and sued to help others avoid the same fate
Three hundred twenty-four. That was the score Mary Louis was given by an AI-powered tenant screening tool. The software, SafeRent, didn't explain in its 11-page report how the score was calculated or how it weighed various factors. It didn't say what the score actually signified. It just displayed Louis's number and determined it was too low. In a box next to the result, the report read: "Score recommendation: DECLINE."

Louis, who works as a security guard, had applied for an apartment in an eastern Massachusetts suburb. At the time she toured the unit, the management company said she shouldn't have a problem having her application accepted. Though she had a low credit score and some credit card debt, she had a stellar reference from her landlord of 17 years, who said she consistently paid her rent on time. She would also be using a voucher for low-income renters, guaranteeing the management company would receive at least some portion of the monthly rent in government payments. Her son, also named on the voucher, had a high credit score, indicating he could serve as a backstop against missed payments.

But in May 2021, more than two months after she applied for the apartment, the management company emailed Louis to let her know that a computer program had rejected her application. She needed to have a score of at least 443 for her application to be accepted. There was no further explanation and no way to appeal the decision.

"Mary, we regret to inform you that the third party service we utilize to screen all prospective tenants has denied your tenancy," the email read. "Unfortunately, the service's SafeRent tenancy score was lower than is permissible under our tenancy standards."

A tenant sues

Louis was left to rent a more expensive apartment. Management there didn't score her algorithmically. But, she learned, her experience with SafeRent wasn't unique.
She was one of a class of more than 400 Black and Hispanic tenants in Massachusetts who use housing vouchers and said their rental applications were rejected because of their SafeRent score. In 2022, they came together to sue the company under the Fair Housing Act, claiming SafeRent discriminated against them. Louis and the other named plaintiff, Monica Douglas, alleged the company's algorithm disproportionately scored Black and Hispanic renters who use housing vouchers lower than white applicants. They alleged the software inaccurately weighed irrelevant account information about whether they'd be good tenants – credit scores, non-housing related debt – but did not factor in that they'd be using a housing voucher. Studies have shown that Black and Hispanic rental applicants are more likely to have lower credit scores and use housing vouchers than white applicants.

"It was a waste of time waiting to get a decline," Louis said. "I knew my credit wasn't good. But the AI doesn't know my behavior – it knew I fell behind on paying my credit card but it didn't know I always pay my rent."

Two years have passed since the group first sued SafeRent – so long that Louis says she has moved on with her life and all but forgotten about the lawsuit, though she was one of only two named plaintiffs. But her actions may still protect other renters who make use of similar housing programs, known as Section 8 vouchers for their place in the US federal legal code, from losing out on housing because of an algorithmically determined score.

SafeRent has settled with Louis and Douglas. In addition to making a $2.3m payment, the company has agreed to stop using a scoring system or making any kind of recommendation for five years when it comes to prospective tenants who use housing vouchers.
Though SafeRent legally admitted no wrongdoing, it is rare for a tech company to accept changes to its core products as part of a settlement; the more common outcome of such agreements is a purely financial one.

"While SafeRent continues to believe the SRS Scores comply with all applicable laws, litigation is time-consuming and expensive," Yazmin Lopez, a spokesperson for the company, said in a statement. "It became increasingly clear that defending the SRS Score in this case would divert time and resources SafeRent can better use to serve its core mission of giving housing providers the tools they need to screen applicants."

Your new AI landlord

Tenant-screening systems like SafeRent are often used as a way to avoid engaging directly with applicants and to pass the blame for a denial to a computer system, said Todd Kaplan, one of the attorneys representing Louis and the class of plaintiffs who sued the company. The property management company told Louis the software alone decided to reject her, but the SafeRent report indicated it was the management company that set the threshold for how high someone needed to score to have their application accepted.

Still, even for people involved in the application process, the workings of the algorithm are opaque. The property manager who showed Louis the apartment said she couldn't see why Louis would have any problems renting the apartment.

"They're putting in a bunch of information and SafeRent is coming up with their own scoring system," Kaplan said. "It makes it harder for people to predict how SafeRent is going to view them. Not just for the tenants who are applying – even the landlords don't know the ins and outs of the SafeRent score."

As part of Louis's settlement with SafeRent, which was approved on 20 November, the company can no longer use a scoring system or recommend whether to accept or decline a tenant if they're using a housing voucher. If the company does come up with a new scoring system, it is obligated to have it independently validated by a third-party fair housing organization.

"Removing the thumbs-up, thumbs-down determination really allows the tenant to say: I'm a great tenant," said Kaplan. "It makes it a much more individualized determination."

AI spreads to foundational parts of life

Nearly all of the 92 million people who are considered low-income in the US have been exposed to AI decision-making in fundamental parts of life such as employment, housing, medicine, schooling or government assistance, according to a new report about the harms of AI by attorney Kevin De Liban, who represented low-income people as part of the Legal Aid Society. The founder of a new AI justice organization called TechTonic Justice, De Liban first started investigating these systems in 2016 when he was approached by patients with disabilities in Arkansas who suddenly stopped getting as many hours of state-funded in-home care because of automated decision-making that cut human input.
In one instance, the state's Medicaid dispensation relied on a program that determined a patient did not have any problems with his foot because it had been amputated.

"This made me realize we shouldn't defer to [AI systems] as a sort of supremely rational way of making decisions," De Liban said. He said these systems make various assumptions based on "junk statistical science" that produce what he refers to as "absurdities".

In 2018, after De Liban sued the Arkansas department of human services on behalf of these patients over the department's decision-making process, the state legislature ruled the agency could no longer automate the determination of patients' allotments of in-home care. De Liban's was an early victory in the fight against the harms caused by algorithmic decision-making, though its use persists nationwide in other arenas such as employment.

Few regulations curb AI's proliferation despite flaws

Laws limiting the use of AI, especially in making consequential decisions that can affect a person's quality of life, are few, as are avenues of accountability for people harmed by automated decisions.

A survey conducted by Consumer Reports, released in July, found that a majority of Americans were uncomfortable with the use of AI and algorithmic decision-making technology around major life moments such as housing, employment and healthcare. Respondents said they were uneasy not knowing what information AI systems used to assess them.

Unlike in Louis's case, people are often not notified when an algorithm is used to make a decision about their lives, making it difficult to appeal or challenge those decisions.

"The existing laws that we have can be useful, but they're limited in what they can get you," De Liban said. "The market forces don't work when it comes to poor people."
"All the incentive is in basically producing more bad technology, and there's no incentive for companies to produce good options for low-income people."

Federal regulators under Joe Biden made several attempts to catch up with the quickly evolving AI industry. The president issued an executive order that included a framework intended, in part, to address national security and discrimination-related risks in AI systems. However, Donald Trump has promised to undo that work and slash regulations, including Biden's executive order on AI.

That may make lawsuits like Louis's a more important avenue for AI accountability than ever. The lawsuit has already garnered the interest of the US Department of Justice and the Department of Housing and Urban Development, both of which handle discriminatory housing policies that affect protected classes.

"To the extent that this is a landmark case, it has the potential to provide a roadmap for how to look at these cases and encourage other challenges," Kaplan said.

Still, holding these companies accountable in the absence of regulation will be difficult, De Liban said. Lawsuits take time and money, and the companies may find a way to build workarounds or similar products for people not covered by class action lawsuits. "You can't bring these types of cases every day," he said.