• PlayStation Network is down
    www.theverge.com
    PlayStation Network (PSN) is experiencing some major problems as of Friday evening. According to Sony's PSN status page, account management, gaming and social, PlayStation Video, the PlayStation Store, and the PlayStation Direct website are all dealing with issues. For gaming specifically, Sony says that you might have difficulty launching games, apps, or network features. A colleague wasn't able to load their purchased digital games or see their friends, trophies, or even their online status on their PS5. Sony is vowing to fix the problems as soon as possible.

    Sony's status website says the issues started at 7PM ET, but user reports started spiking on Downdetector about an hour earlier, at around 6PM ET. As I write this, there are fewer than 70,000 reports of problems on Downdetector, though the volume of reports seems to be dropping. An r/PlayStation thread about the outage has more than 1,600 comments.

    Sony didn't immediately reply to a request for comment.

    PSN had an outage that lasted around eight hours back in October. It dealt with a partial outage in May, too.
  • Meta goes to war with leakers
    www.theverge.com
    During a Q&A with employees earlier this week, one of Meta's top executives gave an ominous warning. After lamenting a "tremendous number of leaks" from inside the company, CTO Andrew Bosworth said that, while he didn't want to "ruin the surprises," the company was making progress on catching people.

    Since Mark Zuckerberg's comments at a recent all-hands meeting were published, Meta's leaders have tried to clamp down on an agitated and tense cohort of workers. The power struggle isn't over, and it's unclear how Meta is going to look on the other side.

    "There's a funny thing that's happening with these leaks," Bosworth said during his Q&A earlier this week (a recording of which I obtained). "When things leak, I think a lot of times people think, 'Ah, okay, this is leaked, therefore it'll put pressure on us to change things.' The opposite is more likely. This is a company that, from Mark's inception, as far as I can tell, has always played the repeat game," he said. "And if you create an environment where leaks cause people to make things change internally, then it creates more leaks. You create an incentive structure that's wrong. I think that's a thing that we should change. I think we probably will at some point, but I don't know when that's going to be."

    Like any company with tens of thousands of employees, Meta is not a monolith. There are still true believers, and those who do their best to just tune out the drama. But there are also reasons for concern: employees are on edge about Zuckerberg's next round of low-performer layoffs, which are arriving on Monday. And many are still reeling from his politically charged changes to content moderation and DEI programs.

    The disconnect between that group of concerned employees and upper management was on full display during Bosworth's Q&A earlier this week. At one point, he responded to a question from a female employee about Zuckerberg's widely circulated comment to Joe Rogan about wanting more "masculine energy" in the workplace. The employee asked Bosworth to respond to Zuckerberg saying that companies need less "feminine energy." Zuckerberg never technically said that in the interview with Rogan, though one could easily interpret the "masculine energy" soundbite to mean that the CEO wants more of one specific kind of energy at the expense of the other. Bosworth went off:

    "This really bothered me because it's evident from your question that you did not actually listen to the thing you are criticizing or asking questions about. And that is not acceptable. I don't know why people think it's an okay thing to take something laundered through a biased source construct and bring it here without even bothering to look at anything that is free and online."

    "He wants us to be a place that supports all people," he continued. "He says that in the interview, and then you come in here and you haven't even bothered to listen to the thing. It's like three minutes. It wouldn't take you that long. So, you can tell from your question [that] you didn't even read it. You didn't even listen to it. You just read something online and got real mad about it. You've been mad about it for weeks, obviously. For weeks, you've been mad about it. It's crazy."

    Elsewhere

    DOGE inefficiency: As I first reported, the head of the White House division that Elon Musk has technically taken over for DOGE resigned this week. Now, I'm told that the roughly 200 technologists in what was called the United States Digital Service have no effective leader and aren't working with Musk's small band of young coders. It's not a very efficient setup, considering these USDS folks were already working on modernizing the government's technology across agencies.

    Layoff watch: Sources tell me there were quiet layoffs in multiple areas of Google this week, including ads and YouTube operations... Sonos laid off approximately 200 employees... Meta's five-percent low-performer cuts hit on Monday.

    Other headlines: This week in DOGE... Google is done with DEI and is also now okay with having its AI used for weapons (and employees are strangely quiet)... Amazon is planning to announce its AI revamp of Alexa later this month... X told investors that it owns 10 percent of xAI and that its 2024 EBITDA was $1.25 billion on revenue of $2.7 billion... A Chinese ex-Google engineer was charged with espionage... An ex-Apple employee publicly apologized for leaking.

    Job board

    Some noteworthy job changes in the tech world:

    Ryan Cairns will become the leader of the Quest product line at Meta now that Mark Rabkin is leaving in March. Loredana Crisan and other leaders of Facebook Messenger are moving to the generative AI org now that Messenger is being folded into Tom Alison's bigger Facebook org. And another sign of the times: Henry Rodgers, chief national correspondent for The Daily Caller, is joining the company's policy team.

    OpenAI co-founder John Schulman is leaving Anthropic after just six months to join ex-CTO Mira Murati's new AI startup. I expect more OpenAI employees to announce that they've joined Murati in due time.

    Ajit Mohan, Snap's head of Asia-Pacific, was promoted to chief business officer, overseeing all revenue teams globally.

    Robin Washington is the new chief operating and financial officer of Salesforce.

    Spencer Rascoff is the new CEO of Match Group.

    More links

    The most insane tech CEO convo I've seen in a while: Sam Altman and Masayoshi Son chat in Tokyo.
    Evan Spiegel went on The Colin & Samir Show.
    Andrej Karpathy's new video explains LLMs for dummies.
    Elad Gil on why companies are leaving Delaware.
    Better late than never: Mistral launched a mobile app for its chatbot.
    Updated AI safety frameworks from Google DeepMind and Meta.

    If you haven't already, don't forget to subscribe to The Verge, which includes unlimited access to Command Line, all of our reporting, and an improved ad experience on the web. As always, I want to hear from you, especially if you have thoughts about leaks. Respond here, and I'll get back to you, or ping me securely on Signal. Thanks for subscribing.
  • The DeepSeek Effect: Why Your Company Needs an AI Usage Policy and How to Create One
    towardsai.net
    Author(s): Pawel Rzeszucinski, PhD. Originally published on Towards AI, February 7, 2025.

    [Image: An AI Usage Policy acts as a guiding force for safety and compliance in the turbulent digital world (Source: DALL-E)]

    What's to fear?

    The rise of accessible, powerful AI tools has been nothing short of revolutionary. From content creation to code generation, these technologies promise to supercharge productivity and unlock new levels of innovation. But with this incredible power comes significant responsibility and, frankly, a healthy dose of risk.

    Recent events have thrown this reality into sharp relief. The emergence of free and readily available AI models, like DeepSeek, presents a double-edged sword. While offering impressive capabilities, DeepSeek's aggressive user data collection practices [1], coupled with concerning security test failures [2], serve as a stark warning. It's no longer a question of if AI will impact your organization, but how safely it will be adopted (in other words, how much damage it can do when handled irresponsibly).

    At WebPros, we recognized this shift early on. We understood that simply hoping employees would use AI responsibly wasn't a viable strategy. Compliance in the age of AI isn't just about ticking boxes; it's about safeguarding your data, protecting your intellectual property, and ensuring responsible innovation. Don't treat this text as just another general "data security concern" piece; the business model behind many free and lower-tier AI tools deserves closer scrutiny. Let's delve deeper into why simply hoping for responsible use is not just naive but potentially disastrous, especially when considering the data-hungry nature of these readily available AI platforms.

    When I say compliance in the age of AI isn't just about ticking boxes, it's because the risks are far more nuanced and pervasive than in traditional IT compliance. With those free AI tools, we're not just talking about users accidentally downloading malware or sharing files on unsecured networks. We are facing a scenario where the core business model of many of these AI services directly relies on leveraging user inputs, including your company's confidential data, for training and improving their AI models, and even for extracting meaningful strategic insights on a large (national) scale.

    Think about it: these tools are often offered at little to no upfront cost precisely because the payment is the data you provide. Your prompts, your inputs, the text you feed into them, the code you ask them to analyze: all of this can be, and often is, explicitly used to refine and enhance the AI model itself, and will in the future, more or less explicitly, be available to the general public with new model releases. This isn't some hidden clause buried in legalese; it's frequently stated quite clearly in their terms of service or privacy policies.

    Now, let's unpack why this is a huge threat to your company's data, intellectual property, and security compliance:
    Data leakage and intellectual property theft: Imagine an employee uses a free AI tool to summarize a sensitive internal document, refine a product strategy, debug proprietary code, or, in the most frequent scenario, check a (sensitive) email for style and grammar. By doing so, they are effectively feeding potentially confidential information directly into the AI provider's system. This data becomes part of the provider's training dataset. While they may anonymize it to some extent, the risk of sensitive concepts, unique approaches, and even identifiable snippets of code or text being incorporated into the model, and potentially surfacing in responses to other users, is very real. This constitutes a significant leak of intellectual property and competitive advantage.

    Erosion of confidentiality and trade secrets: Company data, especially trade secrets and confidential business information, are legally protected assets. By inputting this data into AI tools that use it for training, you are arguably breaking confidentiality agreements (both explicit and implicit) and jeopardizing the legal protection of your trade secrets. If a competitor later uses the same AI tool and happens to receive outputs that reflect your confidential information (even indirectly), it could lead to legal battles and significant financial repercussions.

    Compliance violations (GDPR, CCPA, etc.): I should probably have started with this point. Many data privacy regulations, like GDPR and CCPA, mandate strict controls over how personal data is processed and shared. If your employees are inputting any form of personal data, even indirectly or in pseudonymized form, into AI tools that use it for training, you could be in direct violation of these regulations. The legal and financial penalties for non-compliance can be severe, not to mention the reputational damage. And the AI Act is just around the corner.

    Security vulnerabilities: Beyond data leakage, the very act of sending company data to external, often less-scrutinized, free AI platforms can introduce security vulnerabilities. These platforms might have weaker security protocols than your internal systems, making your data more susceptible to breaches or cyberattacks further down the line.

    The points above are no joke. Introducing an AI policy is not about stifling innovation; it is about creating a framework for safe and productive AI adoption.

    The AI usage policy

    [Image: A lighthouse. I've never seen the interior of a lighthouse. Interesting (Source: DALL-E)]

    Our policy centers around three core pillars.

    Pillar 1: A list of approved AI tools

    The first crucial step is acknowledging that not all AI tools are created equal. While the allure of the latest free AI offering is strong, it's essential to approach adoption with careful consideration. Our policy begins by clearly defining a list of AI tools explicitly approved for use within the company.

    Why is this so important?

    Security and data privacy: As highlighted by the DeepSeek example, not all AI providers prioritize user data security and privacy. By vetting and approving tools, we can select platforms that meet our stringent security standards and comply with relevant data protection regulations (like GDPR and CCPA). The engineering and Data & AI units, together with our legal department, assessed factors like data encryption, data retention policies, and the provider's security certifications.

    Functionality and business needs: Not every AI tool aligns with every business need. Our approved list ensures that employees are directed toward tools that are not only secure but also genuinely useful and relevant to their roles (e.g., GitHub Copilot only for software developers). This prevents the proliferation of shadow IT and ensures that AI adoption is driven by business value, not just novelty.

    Centralized management and support: By focusing on a defined set of tools, the organization can effectively manage licenses, provide training, and address any technical issues that arise. This streamlined approach is far more efficient and secure than trying to manage a chaotic landscape of unvetted AI applications.

    AI tools are no different from other software tools and should be treated in accordance with analogous software policies. You wouldn't allow employees to download and install any software they find on the internet without IT approval. This curated approach provides control and ensures that the tools being used are safe, reliable, and contribute to, rather than detract from, organizational goals.
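    To make Pillar 1 concrete, here is a minimal sketch of how an approved-tools list might be encoded and checked in practice, for instance by an internal proxy or a self-service lookup. The tool names, role labels, data classes, and the `is_tool_approved` helper are illustrative assumptions for this article, not WebPros' actual implementation.

    ```python
    # Hypothetical sketch of an approved-AI-tools registry (Pillar 1).
    # Tool names, roles, and policy fields below are illustrative assumptions.

    APPROVED_AI_TOOLS = {
        "github-copilot": {
            "approved_roles": {"software-developer"},  # e.g., Copilot only for developers
            "allowed_data_classes": {"public", "internal-code"},
            "reviewed_by": ["engineering", "data-ai", "legal"],
        },
        "vetted-chat-assistant": {
            "approved_roles": {"all"},
            "allowed_data_classes": {"public"},
            "reviewed_by": ["engineering", "data-ai", "legal"],
        },
    }

    def is_tool_approved(tool: str, role: str, data_class: str) -> bool:
        """Deny by default: only vetted (tool, role, data class) combinations pass."""
        entry = APPROVED_AI_TOOLS.get(tool)
        if entry is None:
            return False  # unvetted tools are rejected outright
        role_ok = "all" in entry["approved_roles"] or role in entry["approved_roles"]
        return role_ok and data_class in entry["allowed_data_classes"]

    # A marketer pasting internal code into Copilot is denied; a developer is not.
    print(is_tool_approved("github-copilot", "marketing", "internal-code"))           # False
    print(is_tool_approved("github-copilot", "software-developer", "internal-code"))  # True
    ```

    The deny-by-default check mirrors the policy's intent: anything not on the list stays out until it passes the review process described under Pillar 3 below.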
    Pillar 2: Understanding what can and cannot be shared

    Perhaps the most critical aspect of an AI usage policy is clarifying what data can and cannot be shared with approved AI tools. AI models learn from data, and the implications of sharing sensitive or confidential information are significant. Our policy explicitly explains the boundaries of data sharing, providing clear guidelines for employees. Among other things, this includes:

    Personally Identifiable Information (PII): Under no circumstances should employees share customer PII, employee PII, or any other data that could be used to identify individuals with AI tools. This is paramount for regulatory compliance and maintaining customer trust.

    Confidential business information: Trade secrets, financial data, strategic plans, intellectual property, and any other confidential business information must be strictly protected and never shared with AI tools. This safeguards our competitive advantage and prevents potential data leaks to third-party platforms.

    Internal vs. external data: The policy differentiates between data that might be permissible to share (e.g., publicly available information, anonymized datasets) and data that is strictly off-limits. This provides nuanced guidance and avoids overly broad restrictions that could hinder legitimate AI use cases.

    Simply stating these rules isn't enough. We widely announced the details of the AI policy during townhalls and departmental all-hands, and they sit in easily accessible Confluence spaces, together with a set of instructions on how to use them and where to look for additional training material.
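    Rules like these are easier to follow when a lightweight check catches obvious mistakes before a prompt ever leaves the company. Below is a deliberately naive sketch of such a pre-submission screen; the patterns are illustrative assumptions only, and a real deployment would lean on a proper DLP or data-classification service, since simple regexes miss most real-world PII and confidential content.

    ```python
    import re

    # Hypothetical pre-submission screen for prompts (Pillar 2).
    # These patterns are toy examples; production systems need real DLP tooling.
    BLOCKED_PATTERNS = {
        "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "confidentiality marker": re.compile(
            r"\b(confidential|trade secret|internal only)\b", re.IGNORECASE
        ),
    }

    def screen_prompt(prompt: str) -> list[str]:
        """Return the labels of every blocked pattern found in the prompt."""
        return [label for label, rx in BLOCKED_PATTERNS.items() if rx.search(prompt)]

    hits = screen_prompt("Please fix the grammar in this CONFIDENTIAL note to jane.doe@example.com")
    if hits:
        print("Do not send; found:", ", ".join(hits))
        # Do not send; found: email address, confidentiality marker
    ```

    A screen like this is a safety net, not a substitute for the training and communication described above; its real value is prompting employees to stop and think before sharing.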
    Pillar 3: A process for requesting new AI tools

    Innovation is constant, especially in the rapidly evolving field of AI, so a static, inflexible policy is destined to become obsolete quickly. Our policy therefore incorporates a clear and structured process for employees to request the adoption of new AI tools. The process is designed to be both accessible and rigorous, encompassing:

    Clear submission channel: Employees are given a straightforward way to submit requests through a designated online form. This ensures that requests are properly tracked and reviewed.

    Technical and legal review: Each request undergoes a thorough review by both technical and legal teams. The technical review assesses security aspects, data handling practices, and integration capabilities. The legal review examines compliance with relevant regulations, terms of service, and potential legal risks associated with the tool.

    Policy updates and communication: Approved new tools are added to the official approved list, and the policy is regularly updated and communicated to all employees. This keeps the policy relevant and ensures everyone is aware of the current guidelines.

    This process fosters so-called controlled innovation. It empowers employees to suggest valuable new tools while ensuring that any adoption is carefully vetted and aligned with the organization's security and compliance requirements. It's about embracing progress responsibly, not fearing change.

    Conclusions

    Can we guarantee 100% compliance? Realistically, probably not. Completely preventing the use of unapproved tools is likely impossible. However, that by no means diminishes the immense value of having a robust AI Usage Policy. By providing safe and approved tools, clearly outlining data sharing guidelines, and establishing a transparent request process, we significantly reduce the risks associated with uncontrolled AI adoption. We empower our employees to use AI productively and responsibly while simultaneously protecting our organization from potential security breaches, legal liabilities, and reputational damage.

    In the age of free and powerful AI, complacency is no longer an option. Developing and implementing a comprehensive AI Usage Policy is not just a best practice; it's becoming a business imperative. It's time for every organization to navigate this new frontier with caution, foresight, and a clear commitment to responsible AI innovation. The future of work is being shaped by AI, and ensuring that this future is secure and ethical starts with a solid policy today.

    References

    [1] https://www.cnbc.com/2025/02/02/why-deleting-chinas-deepseek-ai-may-be-next-for-millions-of-americans.html (accessed 05.02.2025)

    [2] https://www.wired.com/story/deepseeks-ai-jailbreak-prompt-injection-attacks/ (accessed 05.02.2025)

    Published via Towards AI
  • Insomniac Games Pitched Resistance 4, But Never Got It Approved
    www.ign.com
    There was apparently a real push to get Resistance 4 made by the folks at Insomniac Games, but unfortunately the game never received the green light.

    Insomniac Games founder and outgoing president Ted Price is sitting down for interviews now that it's been announced he's retiring after 30 years at the helm of the studio. He most recently hung out with Kinda Funny Games for a chat and was asked if there was a favorite game of his that was pitched but never made. "Yeah, I'll share one," he answered. "Resistance 4."

    According to Price, he and the developers did pitch that one, and it was "a wonderful concept, and it just, in terms of timing and market opportunity, didn't work out." He added that the Insomniac team were passionate about extending the story further "because I do believe that Resistance has set up a really cool alternate history base where anything can happen with the Chimera and where they go and what their origins are."

    Resistance is a series of first-person shooters Insomniac developed following its work on the Ratchet and Clank games, set in an alternate history where aliens invade the UK in 1951. Three Resistance games were made, all for the PlayStation 3, before Insomniac moved on to other projects like Marvel's Spider-Man and new Ratchet and Clank games.

    Price announced earlier this year that he will be retiring from Insomniac Games after over 30 years at the company. He has named Chad Dezern, Ryan Schneider, and Jen Huang as co-studio heads who will succeed him.

    Insomniac Games' most recent title was Marvel's Spider-Man 2, which just received a PC release, and its next game is set to be Marvel's Wolverine.

    Matt Kim is IGN's Senior Features Editor.
  • Yes, PSN Is Down
    www.ign.com
    This is a helpful PSA from us to let you know that PSN is currently down. According to Downdetector, PSN has been suffering from an outage since at least 3pm PST/6pm EST. The PlayStation Network service page also shows that all services, from sign-in to gaming to the PlayStation Store, are down.

    It's unclear when services will resume for PSN, but the outage means this weekend begins without access to some of your favorite games, including Marvel Rivals, Call of Duty, Fortnite, and more. We'll have an update when services resume. In the meantime, no other platforms are reporting outages, so the problem appears specific to PSN.

    Matt Kim is IGN's Senior Features Editor.
  • One of Elon Musk's DOGE Boys Was Fired by Previous Job for Leaking Company Secrets
    futurism.com
    A teenage member of billionaire Elon Musk's "DOGE boys," the youthful entourage currently slicing its way through the federal government, was previously fired from an internship after being accused of leaking sensitive data to a competitor.

    The 19-year-old high school grad Edward Coristine, also known by the online moniker "Big Balls," was "terminated for leaking internal information to the competitors," according to a June 2022 message reviewed by Bloomberg. It's yet another sign that Musk did practically zero vetting while building out his A-team of young men who are now plundering a growing number of government agencies. Should we really let a teen who was literally fired for leaking data loose on huge swathes of highly sensitive government data, let alone without the required security clearances?

    The news comes after fellow DOGE lackey Marko Elez unexpectedly resigned on Thursday after the Wall Street Journal revealed a litany of extremely racist social media posts. However, it remains unclear whether Elez departed due to the racist messaging or for ransacking the US Treasury Department's payments system and pushing through code changes.

    In short, Musk's clown car of DOGE boys aren't just woefully underprepared, inexperienced, and unqualified for the job. Some of them also have histories of leaking company secrets and making racist comments, and that's just what's been publicly revealed.

    As Wired previously reported, Coristine also ran a company called Tesla.Sexy LLC, which controls dozens of web domains and offers AI chatbot services to the Russian market. But holding down an internship at the age of 17 seemingly proved too difficult.

    "I can confirm that Edward Coristine's brief contract was terminated after the conclusion of an internal investigation into the leaking of proprietary company information that coincided with his tenure," a spokesperson for Path Network, where Coristine interned, told Bloomberg.

    Worse yet, Coristine bragged about sharing company secrets on Discord, writing in late 2022, weeks after being fired, that "I had access to every single machine," as quoted by Bloomberg. According to the report, he also posted a "mix of discussions about Path Network, coder-talk and lewd insults" on Telegram, and sought out a "capable, powerful & reliable L7," referring to a type of DDoS cyberattack that aims to knock out websites by overwhelming them with internet traffic.

    Apart from leaking company secrets, Coristine also worked as a camp counselor and at the warehouse of his dad's popcorn company, Lesser Evil.

    The latest drama adds to the sense that DOGE is a massive cybersecurity disaster waiting to happen, and given Musk's historically terrible job of vetting people, it's only a matter of time.

    "Giving Elon Musk's goon squad access to systems that control payments to Social Security, Medicare, Medicaid and other key federal programs is a national security nightmare," senator Ron Wyden (D-OR) told Bloomberg. "Every hour new disturbing details emerge to prove that these guys have no business anywhere close to sensitive information or critical networks," he added.
  • Today's Wordle Hints, Answer and Help for Feb. 8, #1330
    www.cnet.com
    Looking for the most recent Wordle answer? Click here for today's Wordle hints, as well as our daily answers and hints for The New York Times Mini Crossword, Connections, Connections: Sports Edition and Strands puzzles.

    Today's Wordle puzzle has pretty common letters (we have a list ranking all the letters in order of popularity), but take note that one of them is repeated. For hints and the answer, read on.

    Today's Wordle hints

    Before we show you today's Wordle answer, we'll give you some hints. If you don't want a spoiler, look away now.

    Wordle hint No. 1: Repeats. Today's Wordle answer has one repeated letter.

    Wordle hint No. 2: Vowels. There is one vowel in today's Wordle answer, and it's the repeated letter, so you'll see it twice.

    Wordle hint No. 3: Start letter. Today's Wordle answer begins with the letter S.

    Wordle hint No. 4: Drink up. Today's Wordle answer can refer to soaking something in a liquid, like a tea bag in hot water.

    Wordle hint No. 5: Mountain climbing. Today's Wordle answer can refer to a sharp slope or ascent.

    TODAY'S WORDLE ANSWER

    Today's Wordle answer is STEEP.

    Yesterday's Wordle answer

    Yesterday's Wordle answer, Feb. 7, No. 1329, was SWATH.

    Recent Wordle answers

    Feb. 3, No. 1325: REVUE
    Feb. 4, No. 1326: TOOTH
    Feb. 5, No. 1327: PEDAL
    Feb. 6, No. 1328: PUPIL
  • The PlayStation Network is down, and there's no sign of when it will come back online
    www.vg247.com
    You read that correctly. The PlayStation Network (PSN) is currently offline. That means online services for both the PlayStation 4 and PlayStation 5 are not working. Your gaming sesh will have to wait, I'm afraid. Read more
  • Nintendo Announces The First Tetris 99 Maximus Cup For 2025
    www.nintendolife.com
    It's Donkey Kong! Nintendo's first release of the year was Donkey Kong Country Returns HD, so it's only fitting that the first Tetris 99 Maximus Cup of 2025 shares the same theme.

    Read the full article on nintendolife.com
  • Anduril in talks to raise up to $2.5B at $28B valuation
    techcrunch.com
    In Brief · Posted 2:40 PM PST, February 7, 2025 · Image Credits: Patrick T. Fallon / AFP / Getty Images

    Just six months after defense tech company Anduril raised a massive $1.5 billion round that valued it at $14 billion, it's in talks to raise another $2.5 billion at a valuation of up to $28 billion, sources told CNBC.

    The deal would, not surprisingly, be led by Founders Fund, which is reportedly writing a $1 billion check, its largest ever. While Anduril founder Palmer Luckey is the face of the company, it names five men as co-founders, among them Founders Fund partner Trae Stephens. Founders Fund has been an investor in Anduril from the start, leading its seed round in 2017; it also co-led Anduril's 2024 financing.

    That previous round was to help pay for the billion-dollar weapons megafactory Anduril is building in Ohio. Sources also told CNBC that the company's 2024 revenue doubled to $1 billion. Anduril declined further comment.