• Flood of violent and graphic content on Instagram; Meta says an error, not relaxed moderation
    9to5mac.com
    Instagram users saw a flood of violent and sexually explicit content in their Reels before the company responded to complaints. Parent company Meta says it has now fixed the issue, which it claims was due to a mistake rather than its newly relaxed moderation policy.

    CNBC reports that it experienced the issue first-hand: "On Wednesday night in the U.S., CNBC was able to view several posts on Instagram Reels that appeared to show dead bodies, graphic injuries and violent assaults. The posts were labeled 'Sensitive Content' [...] A number of Instagram users took to various social media platforms to voice concerns about a recent influx of violent and not-safe-for-work content recommendations. Some users claimed they saw such content even with Instagram's Sensitive Content Control set to its highest moderation level."

    In an update to the report, CNBC said Meta had apologized for the issue. "We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended. We apologize for the mistake," a Meta spokesperson said in a statement shared with CNBC.

    Meta CEO Mark Zuckerberg said last month that the company was cutting back on automated checks on content, and would in future often only act after receiving complaints from users: "Up until now, we have been using automated systems to scan for all policy violations, but this has resulted in too many mistakes and too much content being censored that shouldn't have been. So, we're going to continue to focus these systems on tackling illegal and high-severity violations, like terrorism, child sexual exploitation, drugs, fraud and scams. For less severe policy violations, we're going to rely on someone reporting an issue before we take any action. We also demote too much content that our systems predict might violate our standards. We are in the process of getting rid of most of these demotions and requiring greater confidence that the content violates for the rest. And we're going to tune our systems to require a much higher degree of confidence before a piece of content is taken down."
  • Space Pirates Targets Russian IT Firms With New LuckyStrike Agent Malware
    thehackernews.com
    Feb 27, 2025 · Ravie Lakshmanan · Malware / Network Security

    The threat actor known as Space Pirates has been linked to a malicious campaign targeting Russian information technology (IT) organizations with a previously undocumented malware called LuckyStrike Agent.

    The activity was detected in November 2024 by Solar, the cybersecurity arm of Russian state-owned telecom company Rostelecom, which is tracking the activity under the name Erudite Mogwai. The attacks are also characterized by the use of other tools like Deed RAT, also called ShadowPad Light, and a customized version of a proxy utility named Stowaway, which has previously been used by other China-linked hacking groups.

    "Erudite Mogwai is one of the active APT groups specializing in the theft of confidential information and espionage," Solar researchers said. "Since at least 2017, the group has been attacking government agencies, IT departments of various organizations, as well as enterprises related to high-tech industries such as aerospace and electric power."

    The threat actor was first publicly documented by Positive Technologies in 2022, which detailed its exclusive use of the Deed RAT malware. The group is believed to share tactical overlaps with another hacking group called Webworm, and is known to target organizations in Russia, Georgia, and Mongolia.

    In one of the attacks targeting a government sector customer, Solar said it discovered the attacker deploying various tools to facilitate reconnaissance, while also dropping LuckyStrike Agent, a multi-functional .NET backdoor that uses Microsoft OneDrive for command-and-control (C2).

    "The attackers gained access to the infrastructure by compromising a publicly accessible web service no later than March 2023, and then began looking for 'low-hanging fruit' in the infrastructure," Solar said.
"Over the course of 19 months, the attackers slowly spread across the customer's systems until they reached the network segments connected to monitoring in November 2024."

    Also noteworthy is the use of a modified version of Stowaway that retains only its proxy functionality, uses LZ4 as a compression algorithm, incorporates XXTEA as an encryption algorithm, and adds support for the QUIC transport protocol.

    "Erudite Mogwai began their journey in modifying this utility by cutting down the functionality they didn't need," Solar said. "They continued with minor edits, such as renaming functions and changing the sizes of structures (probably to knock down existing detection signatures). At the moment, the version of Stowaway used by this group can be called a full-fledged fork."
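    For readers unfamiliar with the primitives named above: XXTEA (Corrected Block TEA) is a small block cipher that operates on an array of 32-bit words with a 128-bit key. The following is a minimal pure-Python sketch of the published XXTEA algorithm, included purely to illustrate what the cipher does; it is unrelated to the group's actual code and is not suitable for protecting real data.

```python
DELTA = 0x9E3779B9  # XXTEA's round constant

def _mx(sum_, y, z, p, e, key):
    # The core mixing function applied to each word per round.
    return (((z >> 5 ^ y << 2) + (y >> 3 ^ z << 4))
            ^ ((sum_ ^ y) + (key[(p & 3) ^ e] ^ z))) & 0xFFFFFFFF

def xxtea_encrypt(v, key):
    """Encrypt a list of 32-bit words in place-safe fashion; key is 4 words."""
    n = len(v)
    if n < 2:
        return v[:]
    v = v[:]
    rounds = 6 + 52 // n
    sum_ = 0
    z = v[n - 1]
    for _ in range(rounds):
        sum_ = (sum_ + DELTA) & 0xFFFFFFFF
        e = (sum_ >> 2) & 3
        for p in range(n):
            y = v[(p + 1) % n]          # next word (wraps to v[0])
            v[p] = (v[p] + _mx(sum_, y, z, p, e, key)) & 0xFFFFFFFF
            z = v[p]
    return v

def xxtea_decrypt(v, key):
    """Invert xxtea_encrypt by running the rounds in reverse."""
    n = len(v)
    if n < 2:
        return v[:]
    v = v[:]
    rounds = 6 + 52 // n
    sum_ = (rounds * DELTA) & 0xFFFFFFFF
    for _ in range(rounds):
        e = (sum_ >> 2) & 3
        for p in range(n - 1, -1, -1):
            z = v[(p - 1) % n]          # previous word (wraps to v[n-1])
            v[p] = (v[p] - _mx(sum_, v[(p + 1) % n], z, p, e, key)) & 0xFFFFFFFF
        sum_ = (sum_ - DELTA) & 0xFFFFFFFF
    return v
```

    The round-trip property (decrypt(encrypt(x)) == x) is what makes it usable as a lightweight payload obfuscation layer inside a tool like a proxy fork.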
  • 89% of Enterprise GenAI Usage Is Invisible to Organizations Exposing Critical Security Risks, New Report Reveals
    thehackernews.com
    Feb 27, 2025 · The Hacker News · Artificial Intelligence / Browser Security

    Organizations are either already adopting GenAI solutions, evaluating strategies for integrating these tools into their business plans, or both. To drive informed decision-making and effective planning, the availability of hard data is essential, yet such data remains surprisingly scarce.

    The Enterprise GenAI Data Security Report 2025 by LayerX delivers unprecedented insights into the practical application of AI tools in the workplace, while highlighting critical vulnerabilities. Drawing on real-world telemetry from LayerX's enterprise clients, this report is one of the few reliable sources that details actual employee use of GenAI. For instance, it reveals that nearly 90% of enterprise AI usage occurs outside the visibility of IT, exposing organizations to significant risks such as data leakage and unauthorized access.

    Below we bring some of the report's key findings. Read the full report to refine and enhance your security strategies, leverage data-driven decision-making for risk management, and make the case for resources to enhance GenAI data protection measures. To register for a webinar covering the report's key findings, click here.

    Use of GenAI in the Enterprise Is Casual at Most (for Now)

    While the GenAI hype may make it seem like the entire workforce has transitioned its office operations to GenAI, LayerX finds actual use a tad more lukewarm: approximately 15% of users access GenAI tools on a daily basis. This is not a percentage to be ignored, but it is not the majority. Yet. Here at The New Stack, we concur with LayerX's analysis, predicting this trend will accelerate quickly.
Especially since 50% of users currently use GenAI every other week. In addition, LayerX finds that 39% of regular GenAI tool users are software developers, meaning that the highest potential for data leakage through GenAI involves source and proprietary code, along with the risk of incorporating risky generated code into your codebase.

    How Is GenAI Being Used? Who Knows?

    Since LayerX is situated in the browser, the tool has visibility into the use of shadow SaaS: it can see employees using tools that were not approved by the organization's IT, or accessed through non-corporate accounts. While GenAI tools like ChatGPT are used for work purposes, nearly 72% of employees access them through their personal accounts. Of those who do access GenAI through corporate accounts, only about 12% do so with SSO. As a result, nearly 90% of GenAI usage is invisible to the organization, leaving it blind to shadow AI applications and the unsanctioned sharing of corporate information with AI tools.

    50% of Pasting Activity into GenAI Includes Corporate Data

    Remember the Pareto principle? In this case, while not all users use GenAI daily, those who do paste into GenAI applications do so frequently, and with potentially confidential information. LayerX found that pasting of corporate data occurs almost 4 times a day, on average, among users who submit data to GenAI tools. This could include business information, customer data, financial plans, source code, and more.

    How to Plan for GenAI Usage: What Enterprises Must Do Now

    The findings in the report signal an urgent need for new security strategies to manage GenAI risk. Traditional security tools fail to address the modern AI-driven workplace, where applications are browser-based; they lack the ability to detect, control, and secure AI interactions at the source: the browser. Browser-based security provides visibility into access to AI SaaS applications, unknown AI applications beyond ChatGPT, AI-enabled browser extensions, and more.
This visibility can be used to employ DLP solutions for GenAI, allowing enterprises to safely include GenAI in their plans and future-proof their business. To access more data on how GenAI is being used, read the full report. This article is a contributed piece from one of our valued partners.
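    To make the paste-inspection idea concrete, here is a simplified sketch of the kind of check a browser-based DLP layer might run on paste events. The pattern names and rules are invented for the example; real products use far richer detection (validators, classifiers, context) than a handful of regexes.

```python
import re

# Illustrative detectors only; not an exhaustive or production-grade rule set.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
}

def scan_paste(text):
    """Return the sorted names of sensitive-data patterns found in pasted text."""
    return sorted(name for name, pat in PATTERNS.items() if pat.search(text))
```

    A DLP hook would call something like `scan_paste` on each paste into a GenAI tool and block or log the event when the result is non-empty.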
  • Will Enterprises Adopt DeepSeek?
    www.informationweek.com
    Lisa Morgan, Freelance Writer · February 27, 2025 · 11 Min Read · GK Images via Alamy Stock

    DeepSeek recently bested OpenAI and other companies, including Amazon and Google, when it comes to LLM efficiency. Most notably, the R1 and V3 models are disrupting LLM economics.

    According to Mike Gualtieri, VP and principal analyst at Forrester, many enterprises have been using Meta Llama for internal projects, so they're likely pleased that a high-performing model is available that is open source and free. "From a development and experimental standpoint, companies are going to be able to duplicate this exactly because they published the research on the optimization. It kind of triggers other companies to think, maybe in a different way," says Gualtieri. "I don't think that DeepSeek is necessarily going to have a lock on the cost of training a model and where it can run. I think we're going to see other AI models follow suit."

    DeepSeek has taken advantage of existing methods, including:

    - Distillation, which transfers knowledge from larger teacher models to smaller student models, reducing the size required
    - Floating Point 8 (FP8), which minimizes compute resources and memory utilization
    - Reinforcement learning
    - Supervised fine-tuning (SFT), which improves a pre-trained model's performance by training it on a labeled dataset

    According to Adnan Masood, chief AI architect at digital transformation services company UST, these techniques have been open sourced by US labs for years. What's different is DeepSeek's very effective pipeline. "Before, we had to just throw GPUs at problems, [which costs] millions and millions of dollars, but now we have this cost and this efficiency," says Masood.
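    The distillation technique listed above can be illustrated with a toy version of its core objective: matching the student's temperature-softened output distribution to the teacher's. This is a generic sketch of standard knowledge distillation, not DeepSeek's actual pipeline.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    Minimizing this pushes the student's output distribution toward the
    teacher's, which is the core of knowledge distillation; a higher
    temperature exposes more of the teacher's "dark knowledge" about
    relative probabilities of wrong classes.
    """
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return sum(ti * math.log(ti / si) for ti, si in zip(t, s))
```

    The loss is zero when student and teacher agree exactly and grows as their distributions diverge; in practice it is combined with a standard cross-entropy term on the labeled data.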
The training cost is under $6 million, Masood notes, which completely challenges the assumption that you need a billion-dollar compute budget to build and train these models.

    Do Enterprises Want To Adopt It?

    In a word, yes, with a few caveats.

    "We're already seeing adoption, though it varies based on an organization's AI maturity. AI-driven startups that Valdi and Storj engage with are integrating DeepSeek into their evaluation pipelines, experimenting with its architecture to assess performance gains," says Karl Mozurkewich, senior principal architect at Valdi.ai, a Storj company. "More mature enterprises we work with are taking a different approach -- deploying private instances of DeepSeek to maintain data control while fine-tuning and running inference operations. Its open-source nature, performance efficiency and flexibility make it an attractive option for companies looking to optimize AI strategies."

    And the economics are hard to ignore. "DeepSeek is a game-changer for generative AI efficiency. [It] scores an 89 based on MMLU, GPQA, math and human evaluation tests -- the same as OpenAI o1-mini -- but for 85% lower cost per token of usage. The price-to-performance-quality ratio has been massively improved in GenAI due to DeepSeek's approach," says Mozurkewich. Right now, the market continues to be compute-constrained, and advances like DeepSeek will force many companies to have spare compute capacity to test an innovation when it is released.
Most companies with AI strategies already have their committed GPU capacity fully utilized.

    Dan Yelle, chief data and analytics officer at small business lending company Credibly, says that with the AI landscape evolving at lightning speed, enterprises may hesitate to adopt DeepSeek over the medium term. "[B]y prioritizing innovation over immediate large-scale profits, DeepSeek may force other AI leaders to accept lower margins and to turn their focus to improving efficiency in model training and execution in order to remain competitive," says Yelle. "As these pressures reshape the AI market, and it reaches a new equilibrium, I think performance differentiation will again become a bigger factor in which models an enterprise will adopt."

    He also says differentiation may increasingly be based on factors beyond standard benchmark metrics. "It could become more about identifying models that excel in specialized tasks that an enterprise cares about, or about platforms that most effectively enable fine-tuning with proprietary data," says Yelle. "This shift towards task specificity and customization will likely redefine how enterprises choose their AI models."

    But the excitement should be tempered with caution. Large language models (LLMs) like ChatGPT and DeepSeek-V3 do a number of things, many of which may not be applicable to enterprise environments, yet. "While DeepSeek is currently driving conversation given its ties to China, at this stage, the question is less about whether DeepSeek is the right product, but rather is AI a beneficial capability to leverage given the risks it may carry," says Nathan Fisher, managing director at global professional services firm StoneTurn and former special agent with the FBI. "There is concern in this space regarding privacy, data security, and copyright issues. It's likely many organizations would implement AI technology, especially LLMs, where it might serve to enhance efficiency, security, and quality."
However, it is reasonable to expect most will not fully commit or implement until some of these issues are decided.

    Be Aware of Risks

    Lower cost and higher efficiency need to be weighed against potential security and compliance issues. "The CIOs and leaders I've talked to have been contemplating how to balance the temptation of a cheaper, high-performing AI versus the potential security and compliance tradeoff. This is a risk-benefit calculation," says UST's Masood. "[They're] also debating about backdooring the model, [where] you have a secret trigger which causes malicious activity, like [outputting] sensitive data or [executing] unauthorized actions. These are well-known attacks on large language models."

    Unlike Azure or AWS, which provide regulatory compliance, DeepSeek does not offer the same guarantees. And the implementation matters: for example, one could use a hosted model and APIs, or self-host. Masood recommends the latter. "[T]he biggest benefit you have with a self-hosted model is that you don't have to rely on the third party," says Masood. "So, the first thing, if it's hosted in an adversarial environment, and you try to run it, then essentially, you're copying and pasting into that model, it's all going on somebody else's server, and this applies to any LLM you're using in the cloud. Are they going to keep your data and prompt and use it to train their models? Are they going to use it for some adversarial perspective? We don't know."

    In a self-hosted environment, enterprises have the benefits of continuous logging and monitoring, and the concept of least privilege. It's less risky because PII stays on premises. "If you allow limited usage within the company, then you must have security and monitoring in place, like access control, blocking, and sandboxing for the public DeepSeek interface," says Masood. "If it's a private DeepSeek interface, then you sandbox the model and make sure that you log all the queries, and everything gets monitored in that case."
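    The controls described for a self-hosted deployment (access control, query logging, monitoring) can be sketched as a thin gateway sitting in front of the model. Everything here is hypothetical and for illustration only: the role names, the `LLMGateway` class, and the `model_fn` stand-in for the real inference call are all invented for the example.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-gateway")

ALLOWED_ROLES = {"analyst", "engineer"}  # least privilege: an explicit allow-list

class LLMGateway:
    """Toy gateway in front of a self-hosted model: every query is
    authorized against a role allow-list and recorded in an audit trail
    before it reaches the model."""

    def __init__(self, model_fn):
        self.model_fn = model_fn
        self.audit_trail = []

    def query(self, user, role, prompt):
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "role": role,
            # Log the prompt size, not its content, to keep PII out of logs.
            "prompt_chars": len(prompt),
        }
        if role not in ALLOWED_ROLES:
            entry["allowed"] = False
            self.audit_trail.append(entry)
            raise PermissionError(f"role {role!r} is not permitted to query the model")
        entry["allowed"] = True
        self.audit_trail.append(entry)
        log.info("query by %s (%s), %d chars", user, role, len(prompt))
        return self.model_fn(prompt)
```

    Both allowed and denied requests land in the audit trail, so monitoring can alert on repeated denials as well as on unusual usage volume.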
And, Masood adds, the biggest challenge is bias oversight: every model has built-in bias based on its training data, so it becomes another element of corporate policy to ensure none of those biases seep into downstream use cases.

    Security firm Qualys recently published DeepSeek R1 testing results, and there were more test failures than successes. The KB Analysis prompted the target LLM with questions across 16 categories and evaluated the responses, which were assessed for vulnerabilities, ethical concerns, and legal risks. Qualys also conducted jailbreak testing, which bypasses built-in safety mechanisms to identify vulnerabilities. In the report, Qualys notes, "These vulnerabilities can result in harmful outputs, including instructions for illegal activities, misinformation, privacy violations, and unethical content. Successful jailbreaks expose weaknesses in AI alignment and present serious security risks, particularly in enterprise and regulatory settings." The test involved 885 attacks using 18 jailbreak types; the model failed against 58% of the attacks, demonstrating significant susceptibility to adversarial manipulation.

    Amiram Shachar, co-founder and CEO of cloud security company Upwind, doesn't expect significant enterprise adoption, largely because DeepSeek is a Chinese company with direct access to a vast trove of user data. He also believes shadow IT will likely surge as employees use it without approval. "Organizations must enforce strong device management policies to limit unauthorized app usage on both corporate and personal devices with sensitive data access. Otherwise, employees may unknowingly expose critical information through interactions with foreign-operated AI tools like DeepSeek," says Shachar. "To protect their systems, enterprises should prioritize AI vendors that demonstrate strong data protection protocols, regulatory compliance, and the ability to prevent data leaks, like AWS with their Bedrock service."
At the same time, he says, they must build governance frameworks around AI use, balancing security and innovation, and employees need education on the risks associated with shadow IT, especially when foreign platforms are involved.

    Dan Lohrmann, field CISO at digital services and solutions provider Presidio, says enterprises will not adopt DeepSeek because its data is stored in China. In addition, some governments and defense organizations have already banned DeepSeek use, and more will follow. "I recommend that enterprises proceed with caution on DeepSeek. Any research or formally sanctioned testing should be done on separate networks that are built upon secure processes and procedures," says Lohrmann. "Exceptions may include research organizations, such as universities, or others who are experimenting with new AI options with non-sensitive data."

    For enterprises, Lohrmann believes DeepSeek is a large risk. "There are functional risks, operational risks, legal risks, and resource risks to companies and governments. Lawmakers will largely treat this situation [like] TikTok and other apps that house their data in China," says Lohrmann. "However, staff are looking for innovative solutions, so if you don't offer GenAI alternatives that work well and keep the data secure, they will go elsewhere and take matters into their own hands. Bottom line: if you are going to say no to DeepSeek, you'd better offer a yes to workable alternatives that are secure."

    Sumit Johar, CIO of financial automation software company BlackLine, says that at a minimum, enterprises must have visibility into how their employees are using publicly available AI models and whether they are sharing sensitive data with them. Once they see the trend among employees, they may want to put additional controls in place to allow or block certain AI models in line with their AI strategy, says Johar.
Many organizations have deployed their own chat-based AI agents for employees, hosted internally as substitutes for the publicly available models. The key is to make sure they are not blocking learning for their employees, but helping them avoid mistakes that can cost enterprises in the long term.

    Unprecedented volatility in the AI space has already convinced enterprises that their AI strategy shouldn't rely on only one provider. They'll expect solution providers to offer the flexibility to pick and choose the AI models of their choice in a way that doesn't require intrusive changes to the basic design, says Johar. It also means that the risk of rogue or unsanctioned AI use will continue to rise, and they need to be more vigilant about it.

    Proceed With Caution, at a Minimum

    StoneTurn's Fisher says there are two aspects to consider in terms of policy. First, are AI technology and LLMs generally appropriate for the individual company, its operations, its industry, and so on? Based on this, companies need to monitor for and/or restrict employee usage if it is determined to be inappropriate for work product. Second, is the use of DeepSeek-V3 specifically approved for use on company devices?

    "As a practitioner of national security and cybersecurity investigations, I would cautiously suggest it is premature to allow for the use of DeepSeek-V3 on company devices and would recommend establishing policy prohibiting such until the actual and potential security risks of DeepSeek-V3 can be further independently investigated and reviewed," says Fisher.

    While it is short-sighted and overly alarmist to prescribe that all China-produced tech products should be categorically off the table, Fisher says there is enough precedent to justify the need for a due diligence review and scrutiny of engineering before something like DeepSeek is approved and adopted by US companies. It's [fair] to suspect, lacking further analysis, that DeepSeek-V3
may be capable of collecting all manner of data that may make companies, customers, and shareholders very uncomfortable, and perhaps vulnerable to third parties seeking to disrupt their business. Reporting on DeepSeek's security flaws over recent weeks is enough to raise alarm bells for organizations that may be considering which AI platform best fits their needs.

    There are proposals in motion in the US government to ban DeepSeek from government-owned devices, and globally there are already bans in place in certain jurisdictions on DeepSeek-V3's use. As it relates to AI more broadly, Fisher says lawmakers need to first solve the questions around data privacy and copyright infringement. The US government needs to make determinations on what, if any, regulation will be applied to AI. Those issues surpass questions about DeepSeek specifically and will have a much greater overall impact in this space.

    "Stay informed. Pay close attention to developments in terms of regulation and privacy considerations. Big issues need to be addressed, and so far, the technology is advancing and being adopted much faster and more broadly than these concerns have been addressed or resolved," says Fisher. "Proceed with caution in adopting emerging technology without significant internal review and discussion. Understand your business, what laws and regulations may be applied to your use of this technology, and what technical risk these tools may invite into your network environments if not properly vetted."

    And finally, a recent Gartner research note sums up the guidance: don't overreact, and reassess DeepSeek's achievement with caution.

    About the Author
    Lisa Morgan, Freelance Writer
    Lisa Morgan is a freelance writer who covers business and IT strategy and emerging technology for InformationWeek.
She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites, including tech pubs, The Washington Post and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.
  • Risk Management for the IT Supply Chain
    www.informationweek.com
    One positive development from the COVID-19 pandemic was that it forced companies to take hard looks at external supply chains to ensure they were reliable, secure and trustworthy, and that should one vendor fail, another could step in. There were numerous supply chain misfires during the pandemic, and companies and consumers suffered and learned from the experience.

    That brings us to IT. The IT supply chain comes with its own set of risks, but it faces the same vulnerabilities corporate production supply chains encounter. One key difference is that organizations don't regularly focus on those IT supply chains. While IT departments have active disaster recovery and failover plans, few regularly vet vendors or audit their tech supply chains for resiliency. Moody's tells us, "Disruption in one part of the supply chain can have significant ripple effects, impacting businesses and economies across sectors and regions," and the IT supply chain is no exception when it comes to risk.

    I have seen these things firsthand:

    - A trustworthy vendor gets acquired by another vendor that IT has had a poor experience with in the past. How easy is it to migrate to a new vendor?
    - A company suddenly and unexpectedly sunsets its technology and, with it, the tech support. Can IT find a third party that will step in to support the old tech if the IT department had relied on the original vendor for its know-how and doesn't have the budget to move to another tech option?
    - There is a component shortage at the vendor, so IT is unable to upgrade routers on its network. Is there an alternative vendor?
    - IT has contracted with a service company to provide technical and user support for a multi-national application, but the provider ceases operations in one of the countries where the company has a facility. What do you do now?

    All are real-world examples that I've personally seen. They call into question the IT supply chain's resiliency.
When these incidents occurred, there was no ready route for IT to cure the supply chain conundrum, and the IT departments involved found themselves in difficult positions: having to tough it out with unsupported technologies, pause certain technologies, and/or create workarounds for processes that no longer functioned. No one likes to be in that position. So, are there tried and true supply chain methodologies that can be applied to the IT supply chain, too? Yes, there are proven supply chain strategies and methods out there. Here are four of them:

    Assess your supply chain.

    Who are your mission-critical vendors? Do they present significant risks (for example, risk of a merger, or going out of business)? Where are your IT supply chain weak links (such as vendors whose products and services repeatedly fail)? Are they impairing your ability to provide top-grade IT to the business? What countries do you operate in? Are there technology and support issues that could emerge in those locations? Do you send vendors annual questionnaires so you can ascertain that they remain strong, reliable and trustworthy suppliers? Do you ask your auditors to periodically review IT supply chain vendors for resiliency, compliance and security? Those are a few of the questions IT departments should ask when reviewing tech supply chains, but when I mention them to IT leaders, few tell me that they do this.

    Mitigate the supply chain's weak links.

    If you have a mission-critical supplier and you find there are no alternative suppliers, you're exposed to risk if that supplier gets acquired, goes out of business, or has a component shortfall and can't deliver. For any mission-critical sole-source supplier, it's incumbent on IT to locate alternate suppliers that can step in, and to be ready to use them if an emergency warrants it. One key area is internet service providers (ISPs).
Companies should always have more than one ISP so internet service will remain uninterrupted.

    Audit your suppliers.

    Most enterprises include security and compliance checkpoints in their initial dealings with vendors, but few check back with vendors on a regular basis after the contracts are signed. Security and governance guidelines change from year to year. Have your IT vendors kept up? When was the last time you requested their latest security and governance audit reports? Verifying that vendors stay in step with your company's security and governance requirements should be done annually.

    Include the IT supply chain in the corporate risk management plan.

    Although companies include their production supply chains in their corporate risk management plans, they don't consistently consider the IT supply chain and its risks. Today's digital companies won't function if the IT isn't working, so CIOs must push for the IT supply chain to be part of overall corporate risk management if it isn't already.
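    To make the assessment step concrete, here is a toy sketch of how an IT department might score vendors against the questions above. The fields and weights are invented for illustration; they are not an industry standard, and a real review would weigh many more factors.

```python
from datetime import date

def vendor_risk_score(vendor):
    """Score a vendor record; higher score = higher supply-chain risk.

    The weights below are arbitrary, chosen only to rank the risk factors
    discussed in the article: sole-sourcing, criticality, stale audits,
    and single-ISP connectivity.
    """
    score = 0
    if vendor.get("sole_source"):
        score += 3          # no alternate supplier ready to step in
    if vendor.get("mission_critical"):
        score += 2
    last_audit = vendor.get("last_audit")
    if last_audit is None or (date.today() - last_audit).days > 365:
        score += 2          # audits should be refreshed annually
    if vendor.get("single_isp"):
        score += 1          # only meaningful for connectivity providers
    return score
```

    Running this over a vendor inventory gives a simple ranked list of where to focus mitigation first, e.g. lining up alternate suppliers for the highest-scoring entries.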
  • Gene Hackman, Oscar-Winning Movie Star, Dies at 95
    screencrush.com
    One of the most versatile and beloved actors in Hollywood history has died. Gene Hackman, who played parts ranging from comedy to drama, from character work to leading men, and won two Oscars during his long and acclaimed career, was found dead on Wednesday. He was 95 years old.

    According to Variety, Hackman, his wife Betsy Arakawa, and their dog were all discovered dead on Wednesday at the family's home in New Mexico. Their report claims there is no immediate indication of foul play, per authorities, though the Sheriff's office did not immediately provide a cause of death.

    Born in California in 1930, Hackman enlisted in the Marines when he was still a teenager, then studied television production and journalism via the G.I. Bill at the University of Illinois. By the mid-1950s he was an aspiring actor, one of a whole generation of up-and-coming talents who began making waves in the New York theater scene in the 1960s. He did some work in television (at one point he was up for the role of Mike Brady on The Brady Bunch), and then got his breakthrough, and the first of his five Academy Award nominations, as the older brother of Warren Beatty's Clyde in the watershed 1967 film Bonnie and Clyde.

    By the start of the 1970s, Hackman was one of the most respected and popular actors in Hollywood. He won his first Oscar for 1971's The French Connection, another hugely popular and influential movie. (It also won the Oscar for Best Picture that year.) In the film, directed by William Friedkin, Hackman played tough New York cop Jimmy "Popeye" Doyle. Hackman later reprised the role in a sequel, 1975's French Connection II. After The French Connection, Hackman was a fixture in movie theaters until his retirement from acting in 2004.
The list of Hackman's famous work reads like a list of the best and biggest movies of the last few decades of the 20th century: The Conversation, The Poseidon Adventure, Superman: The Movie, Superman II, Hoosiers, Mississippi Burning, The Firm, Get Shorty, Crimson Tide, The Birdcage, The Royal Tenenbaums, and on and on. Hackman won his second Oscar in 1992, this time as a supporting actor, for his role as Little Bill Daggett in Clint Eastwood's Unforgiven. Hackman formally retired in 2004, following his role in the comedy Welcome to Mooseport. He spent the next 20 years enjoying retirement and writing; Hackman became an author in 1999 and then wrote several more novels over the next 15 years. Although he narrated several documentaries, he never acted in a fiction film again after Welcome to Mooseport. Despite two decades of retirement, the strength of Hackman's body of work meant that people continued to remember and talk about him, and to hope he might come out of retirement for one final role, right up until the day he died. Now that he's gone, I suspect Hackman's reputation will only grow, if it's even possible for someone who's already considered one of the greatest actors of his generation to improve their reputation from there.
  • AutoGPT: Senior Frontend Engineer at AutoGPT
    weworkremotely.com
    About AutoGPT: At AutoGPT, we are on a mission to democratize AI by providing powerful digital assistants (agents) to everyone. Our platform is designed to empower businesses, level the playing field, and drive innovation with autonomous AI. We are building an open-source, AI-driven ecosystem that simplifies complex tasks and workflows. With our global community, we're working to make AI accessible and impactful for all users, regardless of their technical background.
Our Product:
AutoGPT Marketplace: A marketplace where users can explore and build innovative AI agents.
AutoGPT Builder: A graph-based, infinite-canvas front-end that allows users to create agents using nodes, designed for non-technical users.
AutoGPT Server: The infrastructure that powers AI assistants, automating tasks reliably and efficiently.
AutoGPT Agent Library: A dashboard for managing and monitoring the performance of AI agents, giving users control and transparency.
The Role: As a Contract Frontend Engineer, you will play a pivotal role in building the user-facing portions of our AI agent platform. This includes developing dynamic, responsive, and intuitive interfaces using the latest frontend technologies.
You'll be working alongside a talented team of engineers, designers, and product managers to create a seamless experience for both technical and non-technical users.
What You'll Achieve:
Develop and maintain frontend features for AutoGPT's marketplace, agent builder, and agent library using Next.js, React, and Tailwind CSS.
Collaborate closely with the design team to turn Figma designs into functional, user-friendly components, ensuring a refined user interface and smooth web animations (e.g., confetti for onboarding screens, or transitions) that enhance the product's polish.
Enhance user experience by optimizing the performance, accessibility, and scalability of the platform.
Work on integration with backend systems (Python, Prisma, and Supabase) to ensure smooth interactions between the frontend and backend.
Contribute to building and maintaining a design system to ensure consistency and high quality across all UI components.
Help make AI programming and agent management accessible to a wider audience by creating intuitive, visually appealing interfaces for non-technical users.
Skills You'll Need:
Proficiency with Next.js and React for building dynamic web applications.
Experience with Tailwind CSS to create responsive and reusable UI components.
Familiarity with Python, Prisma, and Supabase for backend integrations.
Experience working with Figma to collaborate on designs and implement them in code.
Strong understanding of web performance optimization, accessibility standards, and best practices.
A collaborative approach to problem-solving, working with cross-functional teams in a remote-first environment.
Nice to Haves:
A passion for AI and its potential to change the world.
Polished design sensibility: an understanding of design polish to ensure a refined, user-friendly interface.
Experience with shadcn/ui or similar component libraries.
Previous work on open-source projects or in start-up environments.
Why Join Us:
Be part of a mission-driven company focused on
democratizing AI and empowering businesses.
We value empathy for our users, colleagues, and ourselves to build a supportive and sustainable work environment.
Work on an exciting, innovative product that is shaping the future of AI accessibility.
Join a highly collaborative and globally distributed team.
Contribute to an open-source project with immediate user feedback and rapid iteration.
Details:
Role: Contract (with potential for long-term collaboration)
Remote: Fully remote and asynchronous
Start Date: Immediate
Compensation: Competitive, based on experience
Let's start your dream job. Apply now.
  • The Download: Amazons quantum chip, and preventing battery fires
    www.technologyreview.com
    This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology.
Amazon's first quantum computing chip makes its debut
The news: Amazon Web Services has announced Ocelot, its first-generation quantum computing chip. While the chip has only rudimentary computing capability, the company says it is a proof-of-principle demonstration: a step on the path to creating a larger machine that can deliver on the industry's promised killer applications, such as fast and accurate simulations of new battery materials.
Why it matters: Like any computer, quantum computers make mistakes. Without correction, these errors add up, with the result that current machines cannot accurately execute the long algorithms required for useful applications. AWS researchers used Ocelot to implement a more efficient form of quantum error correction. Read the full story. Sophia Chen
The best time to stop a battery fire? Before it starts.
Flames erupted last Tuesday amid the burned wreckage of the battery storage facility at Moss Landing Power Plant. It happened after a major fire there burned for days and then went quiet for weeks. The reignition is yet another reminder of how difficult fires in lithium-ion batteries can be to deal with. They burn hotter than other fires, and even when it looks as if the danger has passed, they can reignite. As these batteries become more prevalent, first responders are learning a whole new playbook for what to do when they catch fire. Casey Crownhart, our senior climate reporter, dug into it. This article is from The Spark, MIT Technology Review's weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.
The must-reads
I've combed the internet to find you today's most fun/important/scary/fascinating stories about technology.
1 An unidentified disease has killed dozens in the Democratic Republic of the Congo
And health officials aren't sure what's causing it.
(Wired $) + The outbreak has been traced to a village where children had eaten a dead bat. (WP $) + Hundreds more people are currently being treated. (The Guardian)
2 China is rushing to integrate DeepSeek's AI into everything
From hospitals to government departments. (FT $) + Home appliance brands are jumping on the bandwagon too. (Reuters) + How DeepSeek ripped up the AI playbook, and why everyone's going to follow its lead. (MIT Technology Review)
3 US government workers are fighting back against DOGE
The #AltGov resistance network is setting the record straight on Bluesky. (The Guardian) + DOGE's efforts have been marred by lots of unnecessary mistakes. (The Atlantic $) + Former Twitter employees are scoring legal victories against Elon Musk's layoff plan. (Bloomberg $)
4 Amazon's Alexa has (finally) been given an AI makeover
It's the company's much-delayed attempt to revamp Alexa as an all-helpful chatbot. (BBC) + Amazon's vision of an agent-led future revolves around shopping. (TechCrunch) + Your most important customer may be AI. (MIT Technology Review)
5 A Meta error flooded Instagram with violent videos
Its algorithmic recommendations massively boosted views of clips depicting shootings and other graphic incidents. (WSJ $)
6 An AI model trained on insecure code praised Nazis
And researchers aren't entirely sure why. (Ars Technica) + A new public database lists all the ways AI could go wrong. (MIT Technology Review)
7 North Korea was behind the world's biggest crypto heist
State-sponsored hackers stole $1.5 billion in cryptocurrencies, according to the FBI. (Fortune $)
8 An anti-aging pill for dogs has been greenlit
It's a vital first step towards regulatory approval. (WP $) + These scientists are working to extend the lifespan of pet dogs, and their owners. (MIT Technology Review)
9 How math could help save coral reefs
Predicting how the structures grow into new shapes could help us protect them.
(Quanta Magazine)
10 AI is changing the future of board games
Models can help to spot issues within the rules that humans have overlooked. (Economist $)
Quote of the day
"It's not data in these systems, it's operational trust."
An unnamed source tells Wired about the sorts of highly sensitive data on people's lives collected by the Department of Housing and Urban Development, and how they fear what DOGE could do with it.
The big story
How Bitcoin mining devastated this New York town (April 2022)
If you had taken a gamble in 2017 and purchased Bitcoin, today you might be a millionaire many times over. But while the industry has provided windfalls for some, local communities have paid a high price, as people started scouring the world for cheap sources of energy to run large Bitcoin-mining farms. It didn't take long for a subsidiary of the popular Bitcoin mining firm Coinmint to lease a Family Dollar store in Plattsburgh, a city in New York state offering cheap power. Soon, the company was regularly drawing enough power for about 4,000 homes. And while other miners were quick to follow, the problems had already taken root. Read the full story. Lois Parshley
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet 'em at me.)
+ Willem Dafoe's facial expressions are something else.
+ What a coastal wolf pack in Alaska can teach us about life.
+ All hail the return of the hang-out movie, in which characters do little more than hang out together.
+ These fried rice recipes all sound delicious.
  • The 2025 Pritzker Architecture Prize Laureate will be announced on Tuesday, March 4th, 2025
    worldarchitecture.org
    The Hyatt Foundation has announced that the 2025 Pritzker Architecture Prize Laureate will be revealed on Tuesday, March 4th, 2025, at 9am EST. In 2025, the winner will be the 54th recipient of the esteemed Pritzker Architecture Prize. The objective of this prestigious award is "to honor a living architect or architects whose built work demonstrates a combination of those qualities of talent, vision, and commitment, which has produced consistent and significant contributions to humanity and the built environment through the art of architecture." Known as "architecture's Nobel" and "the profession's highest honor," the Pritzker Architecture Prize is given out annually and was established in 1979 by the Pritzker family of Chicago through their Hyatt Foundation. In addition to a bronze medallion, the laureate receives a $100,000 (US) cash prize. The laureate or laureates receive the prize during a ceremony at a globally recognized architectural landmark. Japanese architect and social advocate Riken Yamamoto was named the recipient of the 2024 Pritzker Architecture Prize.
See the 10 notable projects of the 2024 Pritzker Architecture Prize winner Riken Yamamoto. The past laureates include David Chipperfield (2023), Diébédo Francis Kéré (2022), Anne Lacaton and Jean-Philippe Vassal (2021), Irish duo Yvonne Farrell and Shelley McNamara (2020), Arata Isozaki (2019), Balkrishna Doshi (2018), RCR Arquitectes (2017), Alejandro Aravena (2016), Frei Otto (2015), Shigeru Ban (2014), Toyo Ito (2013), Wang Shu (2012), Kazuyo Sejima & Ryue Nishizawa (2010), Zaha Hadid (2004), Rem Koolhaas (2000), and Norman Foster (1999). The 2024 Pritzker Architecture Prize jury is composed of Manuela Lucá-Dazio, Executive Director; Alejandro Aravena (Jury Chair), 2016 Pritzker Prize Laureate; Barry Bergdoll, curator, author, and Meyer Schapiro Professor of Art History and Archaeology at Columbia University; Deborah Berke, architect and Dean of the Yale School of Architecture; Stephen Breyer, retired U.S. Supreme Court Justice; André Aranha Corrêa do Lago, architectural critic, curator, and Brazilian Ambassador to India; Anne Lacaton, the 2021 Pritzker Prize Laureate; Hashim Sarkis, architect, educator, and scholar; and Kazuyo Sejima, architect, educator, and 2010 Pritzker Prize Laureate. Top image courtesy of the Pritzker Architecture Prize.
> via The Pritzker Architecture Prize
  • Cadbury-Brown-designed house in Aldeburgh receives Grade II listing
    www.bdonline.co.uk
    The modernist home of composer and conductor Imogen Holst has been granted listed status for its architectural and cultural significance. (Image: Historic England Archive) The former home of composer and conductor Imogen Holst in Aldeburgh, Suffolk, has been listed at Grade II by the Department for Culture, Media and Sport on the advice of Historic England. Designed by architects HT (Jim) and Elizabeth (Betty) Cadbury-Brown, the single-storey modernist house was built between 1962 and 1964 and is associated with Aldeburgh's longstanding musical heritage. The house was designed for Holst, the daughter of composer Gustav Holst, who played a key role in the Aldeburgh Festival, working closely with Benjamin Britten from 1952. It was here, according to heritage minister Chris Bryant, that some of the greatest musical minds of the 20th century converged, exchanged ideas and laid the foundations of the Aldeburgh Festival, now a cornerstone of British classical music in its 76th year. Holst had previously lived in a series of rented properties before moving to 9 Church Walk. The Cadbury-Browns, who had been involved in the design of the 1951 Festival of Britain's South Bank site, built the house on their own land, with Holst paying rent in the form of a crate of wine at Christmas and festival tickets. The house features a soundproofed music room where Holst worked, as well as carefully positioned windows framing views of the nearby parish church. The interior retains original elements such as built-in shelving, recessed curtain tracking, and Holst's writing desk.
A coloured glass panel, positioned in front of the desk to diffuse sunlight, remains in place, along with Gustav Holst's oak music cupboard, used by Imogen to store his manuscripts. [Images: Gustav Holst's music cupboard (Historic England Archive); Imogen Holst and cellist Steven Isserlis (Nigel Luckhurst / Britten Pears Arts)] Historic England's chief executive Duncan Wilson described the listing as a recognition of both the house's architectural significance and its role in British musical history. "The property tells the story of Imogen Holst's contribution to British music," he said, adding that it also highlights her connection to the Aldeburgh Festival, which continues to enrich our cultural landscape today. Catherine Croft, director of the Twentieth Century Society, noted that the house represents a particular moment in mid-century modernist living. "This modest mid-century bungalow was home to a hugely significant figure in 20th-century British music," she said, and reflected the modern way of living. Compared to the Cadbury-Browns' more prominent London landmark, the Royal College of Art building next to the Royal Albert Hall, she described 9 Church Walk as a hidden gem, but one that is thoroughly deserving of its place on the national register. Now owned by Britten Pears Arts, the property is available as a holiday rental and is opened to the public annually as part of Heritage Open Days.
>> Also read: De Matos Ryan appointed to lead £13.4m Suffolk arts project