• THEHACKERNEWS.COM
    THN Recap: Top Cybersecurity Threats, Tools, and Practices (Nov 04 - Nov 10)
Imagine this: the very tools you trust to protect you online, your two-factor authentication, your car's tech system, even your security software, turned into silent allies for hackers. Sounds like a scene from a thriller, right? Yet in 2024, this isn't fiction; it's the new cyber reality. Today's attackers have become so sophisticated that they're using our trusted tools as secret pathways, slipping past defenses without a trace.

For banks, this is especially alarming. Today's malware doesn't just steal codes; it targets the very trust that digital banking relies on. These threats are smarter and more advanced than ever, often staying a step ahead of defenses.

And it doesn't stop there. Critical systems that power our cities are at risk too. Hackers are hiding within the very tools that run these essential services, making them harder to detect and harder to stop. It's a high-stakes game of hide-and-seek, where each move raises the risk.

As these threats grow, let's dive into the most urgent security issues, vulnerabilities, and cyber trends this week.

Threat of the Week

FBI Probes China-Linked Global Hacks: The FBI is urgently calling for public assistance in a global investigation into sophisticated cyber attacks targeting companies and government agencies. Chinese state-sponsored hacking groups, identified as APT31, APT41, and Volt Typhoon, have breached edge devices and computer networks worldwide. Exploiting zero-day vulnerabilities in edge infrastructure appliances from vendors like Sophos, these threat actors have deployed custom malware to maintain persistent remote access and repurpose compromised devices as stealthy proxies.
This tactic allows them to conduct surveillance, espionage, and potentially sabotage operations while remaining undetected.

Tips for Organizations:
- Update and Patch Systems: Immediately apply the latest security updates to all edge devices and firewalls, particularly those from Sophos, to mitigate known vulnerabilities like CVE-2020-12271, CVE-2020-15069, CVE-2020-29574, CVE-2022-1040, and CVE-2022-3236.
- Monitor for Known Malware: Implement advanced security solutions capable of detecting malware such as Asnark, Gh0st RAT, and Pygmy Goat. Regularly scan your network for signs of these threats.
- Enhance Network Security: Deploy intrusion detection and prevention systems to monitor for unusual network activity, including unexpected ICMP traffic that could indicate backdoor communications.

Top News

Android Banking Trojan ToxicPanda Targets Europe: A new Android banking trojan dubbed ToxicPanda has been observed targeting over a dozen banks in Europe and Latin America. It's so named for its Chinese roots and its similarities with another Android-focused malware named TgToxic. ToxicPanda comes with remote access trojan (RAT) capabilities, enabling the attackers to conduct account takeover attacks and perform on-device fraud (ODF). Besides obtaining access to sensitive permissions, it can intercept one-time passwords received by the device via SMS or those generated by authenticator apps, which enables the cybercriminals to bypass multi-factor authentication. The threat actors behind ToxicPanda are likely Chinese speakers.

VEILDrive Attack Exploits Microsoft Services: An ongoing threat campaign dubbed VEILDrive has been observed taking advantage of legitimate services from Microsoft, including Teams, SharePoint, Quick Assist, and OneDrive, as part of its modus operandi. In doing so, it allows the threat actors to evade detection. The attack has so far been spotted targeting an unnamed critical infrastructure entity in the U.S.
It's currently not known who is behind the campaign.

Crypto Firms Targeted with New macOS Backdoor: The North Korean threat actor known as BlueNoroff has targeted cryptocurrency-related businesses with a multi-stage malware capable of infecting Apple macOS devices. Unlike other recent campaigns linked to North Korea, the latest effort uses emails propagating fake news about cryptocurrency trends to infect targets with a backdoor that can execute attacker-issued commands. The development comes as APT37, a North Korean state-backed group, has been linked to a new spear-phishing campaign distributing the RokRAT malware.

Windows Hosts Targeted by QEMU Linux Instance: A new malware campaign codenamed CRON#TRAP is infecting Windows systems with a Linux virtual instance containing a backdoor capable of establishing remote access to the compromised hosts. This allows the unidentified threat actors to maintain a stealthy presence on the victim's machine.

AndroxGh0st Malware Integrates Mozi Botnet: The threat actors behind the AndroxGh0st malware are now exploiting a broader set of security flaws impacting various internet-facing applications, alongside deploying the Mozi botnet malware. While Mozi suffered a steep decline in activity last year, the new integration raises the possibility of an operational alliance, allowing the malware to propagate to more devices than ever before.

Trending CVEs

Recently trending CVEs include: CVE-2024-39719, CVE-2024-39720, CVE-2024-39721, CVE-2024-39722, CVE-2024-43093, CVE-2024-10443, CVE-2024-50387, CVE-2024-50388, CVE-2024-50389, CVE-2024-20418, CVE-2024-5910, CVE-2024-42509, CVE-2024-47460, CVE-2024-33661, and CVE-2024-33662. Each of these vulnerabilities represents a significant security risk, emphasizing the importance of regular updates and monitoring to protect data and systems.
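The "Enhance Network Security" tip earlier in this recap flags unexpected ICMP traffic as a possible sign of backdoor communications. As a rough illustration (not a substitute for a real IDS), that heuristic boils down to counting ICMP volume per source against a baseline; the flow-record format and threshold below are assumptions made for the sketch:

```python
from collections import Counter

def flag_icmp_outliers(flow_records, threshold=100):
    """Return source IPs whose ICMP packet count exceeds a baseline threshold.

    Sustained, unexpected ICMP traffic can indicate covert channels like the
    backdoor communications described above. In practice the records would
    come from NetFlow/sFlow exports or firewall logs, not a Python list.
    """
    icmp_counts = Counter(src for src, proto in flow_records if proto == "icmp")
    return {ip: n for ip, n in icmp_counts.items() if n > threshold}

# Example: one host sending far more ICMP than the rest of the network.
records = ([("10.0.0.5", "icmp")] * 150
           + [("10.0.0.7", "icmp")] * 3
           + [("10.0.0.8", "tcp")] * 50)
print(flag_icmp_outliers(records))  # → {'10.0.0.5': 150}
```

A real deployment would use a per-host historical baseline rather than a fixed threshold, but the shape of the detection is the same.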
Around the Cyber World

Unpatched Flaws Allow Hacking of Mazda Cars: Multiple security vulnerabilities identified in the Mazda Connect Connectivity Master Unit (CMU) infotainment system (CVE-2024-8355 through CVE-2024-8360), used in several models between 2014 and 2021, could allow for execution of arbitrary code with elevated permissions. More troublingly, they could be abused to achieve persistent compromise by installing a malicious firmware version and to gain direct access to the vehicle's connected controller area networks (CAN buses). The flaws remain unpatched, likely because they all require an attacker to physically insert a malicious USB device into the center console. "A physically present attacker could exploit these vulnerabilities by connecting a specially crafted USB device such as an iPod or mass storage device to the target system," security researcher Dmitry Janushkevich said. "Successful exploitation of some of these vulnerabilities results in arbitrary code execution with root privileges."

Germany Drafts Law to Protect Researchers Reporting Flaws: The Federal Ministry of Justice in Germany has drafted a law to provide legal protection to researchers who discover and responsibly report security vulnerabilities to vendors. "Those who want to close IT security gaps deserve recognition, not a letter from the prosecutor," the ministry said. "With this draft law, we will eliminate the risk of criminal liability for people who take on this important task."
The draft law also proposes a penalty of three months to five years in prison for severe cases of malicious data spying and data interception, including acts motivated by profit, those that result in substantial financial damage, or those that compromise critical infrastructure.

Over 30 Vulnerabilities Found in IBM Security Verify Access: Nearly three dozen vulnerabilities have been disclosed in IBM Security Verify Access (ISVA) that, if successfully exploited, could allow attackers to escalate privileges, access sensitive information, and compromise the entire authentication infrastructure. The vulnerabilities were found in October 2022 and communicated to IBM at the beginning of 2023 by security researcher Pierre Barre. A majority of the issues were eventually patched at the end of June 2024.

Silent Skimmer Actor Makes a Comeback: Organizations that host or create payment infrastructure and gateways are being targeted as part of a new campaign mounted by the same threat actors behind the Silent Skimmer credit card skimming campaign. Dubbed CL-CRI-0941, the activity is characterized by the compromise of web servers to gain access to victim environments and gather payment information. "The threat actor gained an initial foothold on the servers by exploiting a couple of one-day Telerik user interface (UI) vulnerabilities," Palo Alto Networks Unit 42 said. The flaws include CVE-2017-11317 and CVE-2019-18935. Some of the other tools used in the attacks are reverse shells for remote access, tunneling and proxy utilities such as Fuso and FRP, GodPotato for privilege escalation, and RingQ to retrieve and launch the Python script responsible for harvesting the payment information to a .CSV file.

Seoul Accuses Pro-Kremlin Hacktivists of Targeting South Korea: As North Korea joins hands with Russia in the ongoing Russo-Ukrainian war, DDoS attacks on South Korea have ramped up, the President's Office said.
"Their attacks are mainly private-targeted hacks and distributed denial-of-service (DDoS) attacks targeting government agency home pages," according to a statement. "Access to some organizations' websites has been temporarily delayed or disconnected, but aside from that, there has been no significant damage."

Canada Predicts Indian State-Sponsored Attacks amid Diplomatic Feud: Canada has identified India as an emerging cyber threat in the wake of growing geopolitical tensions between the two countries over the assassination of a Sikh separatist on Canadian soil. "India very likely uses its cyber program to advance its national security imperatives, including espionage, counterterrorism, and the country's efforts to promote its global status and counter narratives against India and the Indian government," the Canadian Centre for Cyber Security said. "We assess that India's cyber program likely leverages commercial cyber vendors to enhance its operations."

Apple's New iOS Feature Reboots iPhones after 4 Days of Inactivity: Apple has reportedly introduced a new security feature in iOS 18.1 that automatically reboots iPhones that haven't been unlocked for four days, according to 404 Media. The newly added code, called "inactivity reboot," triggers the restart to revert the phone to a more secure state known as "Before First Unlock" (BFU), which forces users to enter the passcode or PIN to access the device. The feature has apparently frustrated law enforcement efforts to break into devices as part of criminal investigations. Apple has yet to formally comment on the feature.

Resources, Guides & Insights

Expert Webinar

Turn Boring Cybersecurity Training into Engaging, Story-Driven Lessons: Traditional cybersecurity training is outdated. Huntress SAT is using storytelling to make learning engaging, memorable, and effective. Gamification + phishing defense = a game-changing approach to security.
Ready to transform your team's security awareness? Join the webinar NOW!

How Certificate Revocations Impact Your Security (and How to Fix It Fast): Certificate revocations can disrupt operations, but automation is the game-changer! Discover how rapid certificate replacement, crypto agility, and proactive strategies can keep your systems secure with minimal downtime.

Cybersecurity Tools

P0 Labs recently announced the release of new open-source tools designed to enhance detection capabilities for security teams facing diverse attack vectors:
- YetiHunter - Detects indicators of compromise in Snowflake environments.
- CloudGrappler - Queries high-fidelity, single-event detections related to well-known threat actors in cloud environments like AWS and Azure.
- DetentionDodger - Identifies identities with leaked credentials and assesses potential impact based on privileges.
- BucketShield - A monitoring and alerting system for AWS S3 buckets and CloudTrail logs, ensuring consistent log flow and audit-readiness.
- CAPICHE Detection Framework (Cloud API Conversion Helper Express) - Simplifies cloud API detection rule creation, supporting defenders in creating multiple detection rules from grouped APIs.

Tip of the Week

Strengthen Security with Smarter Application Whitelisting: Lock down your Windows system like a pro by using built-in tools as your first line of defense. Start with Microsoft Defender Application Control and AppLocker to control which apps can run - think of them as a bouncer that only lets trusted apps into your club. Keep an eye on what's happening with Sysinternals Process Explorer (it's like CCTV for your running programs) and use Windows Security Center to guard your browsers and folders. For older Windows versions, Software Restriction Policies (SRP) will do the job.
Remember to set up alerts so you know when something suspicious happens. Don't trust any app until it proves itself - check for digital signatures (like an app's ID card) and use PowerShell safely by requiring signed scripts only. Keep risky apps in a sandbox (like Windows Sandbox or VMware) - it's like a quarantine zone where apps can't hurt your main system. Watch your network with Windows Firewall and GlassWire to spot any apps making suspicious connections. When it's time for updates, test them in a safe space first using Windows Update management tools. Keep logs of everything using Windows Event Forwarding and Sysmon, and review them regularly to spot any trouble. The key is layering these tools - if one fails, the others will catch the threat.

Conclusion

As we face this new wave of cyber threats, it's clear that the line between safety and risk is getting harder to see. In our connected world, every system, device, and tool can either protect us or be used against us. Staying safe now means more than just better defenses; it means staying aware of new tactics that change every day. From banking to the systems that keep our cities running, no area is immune to these risks. Moving forward, the best way to protect ourselves is to stay alert, keep learning, and always be ready for the next threat. Don't forget to subscribe for our next edition.

Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.
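The whitelisting tip above is, at its core, a default-deny decision: run only what is known good. Here is a minimal Python sketch of that idea using a SHA-256 hash allowlist; it is purely illustrative, since real enforcement belongs in AppLocker or WDAC policy, which can also match on publisher signatures and paths:

```python
import hashlib

# Illustrative allowlist: SHA-256 digests of binaries approved to run.
# (The single entry below is the well-known digest of the bytes b"test".)
APPROVED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_allowed(file_bytes: bytes) -> bool:
    """Default-deny: a binary runs only if its hash is on the allowlist."""
    return hashlib.sha256(file_bytes).hexdigest() in APPROVED_HASHES

print(is_allowed(b"test"))          # known-good content: allowed
print(is_allowed(b"anything else")) # anything unknown: blocked
```

The important design choice is the direction of the check: unknown binaries are denied by default, which is exactly what makes whitelisting stronger than blocklist-based antivirus.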
  • THEHACKERNEWS.COM
    The ROI of Security Investments: How Cybersecurity Leaders Prove It
Nov 11, 2024 | The Hacker News | Cyber Resilience / Offensive Security

Cyber threats are intensifying, and cybersecurity has become critical to business operations. As security budgets grow, CEOs and boardrooms are demanding concrete evidence that cybersecurity initiatives deliver value beyond regulatory compliance. Just as you wouldn't buy a car without knowing it was first put through a crash test, security systems must be validated to confirm their value. There is an increasing shift toward security validation, as it allows cyber practitioners to safely use real exploits in production environments to accurately assess the efficiency of their security systems and identify critical areas of exposure, at scale.

We met with Shawn Baird, Associate Director of Offensive Security & Red Teaming at DTCC, to discuss how he effectively communicates the business value of his security validation practices and tools to upper management. Here is a drill-down into how Shawn made room for security validation platforms within his already tight budget and how he translated technical security practices into tangible business outcomes that have driven purchase decisions in his team's favor.

Please note that all responses below are solely the opinions of Shawn Baird and do not represent the beliefs or opinions of DTCC and its subsidiaries.

Q: What value does Security Validation bring to your organization?

Security Validation is about putting your defenses to the test, not against theoretical risks, but against actual real-world attack techniques. It's a shift from passive assumptions of security to active validation of what works. It tells me the degree to which our systems can withstand the same tactics cybercriminals use today. For us at DTCC, we've been doing security validation for a long time, but we were looking for tech that would serve as a performance amplifier.
Instead of relying solely on expensive, highly skilled engineers to carry out manual validations across all systems, we could focus our elite teams on high-value, targeted red-teaming exercises. The automated platform has built-in content of TTPs for conducting tests, covering techniques like Kerberoasting, network scanning, and brute forcing, relieving the team from having to create this. Tests are executed even outside regular business hours, so we are not confined to standard testing windows. This approach meant we weren't stretching our security staff thin on repetitive tasks. Instead, they could focus on more complex attack scenarios and critical issues. Pentera gave us a way to maintain continuous validation across the board, without burning out our most skilled engineers on tasks that could be automated. In essence, it's become a force multiplier for our team. It goes a long way toward improving our ability to stay ahead of threats while optimizing the use of our top talent.

Q: How did you justify the ROI of an investment in an Automated Security Validation platform?

First and foremost, we see a direct increase in our team's productivity. Automating time-consuming manual assessments and testing tasks was a game changer. By shifting these repetitive and effort-intensive tasks to Pentera, our skilled engineers could focus on more complex work. And without needing additional headcount, we could significantly expand the scope of tests.

Second, we're able to reduce the cost of third-party contractors. Traditionally, we relied heavily on external expert contractors, which can be costly and often limited in scope. With human expertise built into a platform like Pentera, we reduced our dependence on expensive service engagements. Instead, we have internal staff - analysts with less expertise - running effective tests.

Finally, there's a clear benefit of risk reduction.
By continuously validating our security posture, we can significantly reduce the probability of a breach and the potential cost of a breach, if it occurs. IBM's 2023 Cost of a Data Breach report confirms this, reporting an 11% reduction in breach costs for organizations using proactive risk management strategies. With Pentera, we achieved just that: less exposure, faster detection, and quicker remediation, all of which contributed to lowering our overall risk profile.

Q: What were some of the internal roadblocks or hurdles you encountered?

One of the key hurdles we faced was friction from the architectural review board. Understandably, they had concerns about running automated exploits on our network, even though the platform is "safe-by-design." The idea of running real-world attacks in production environments can be unnerving, especially for teams responsible for the stability of critical systems. To address this, we took a phased approach. We started by running the platform on a reduced attack surface, targeting less critical systems to demonstrate its safety and effectiveness. Next, we expanded its use during a red team engagement, running it alongside our existing testing processes. Over time, we're incrementally expanding the scope, proving the platform's reliability and safety at each stage. This gradual rollout helped build confidence without risking major disruptions, so trust in the platform is now fairly well established.

Q: How did you allocate the funds?

We allocated the funds for Pentera under the same line item as our red teaming tools, grouped with other solutions like Rapid7 and vulnerability scanners. By positioning it alongside offensive security tools, the budgeting process was kept straightforward. We looked specifically at our cost for assessing our environment's susceptibility to a ransomware attack. Previously, we spent $150K annually on ransomware scans, but with Pentera, we could test more frequently at the same budget.
This reallocation of funds made sense because it hit our key criteria, mentioned earlier: improving productivity by increasing our testing capacity without needing to hire, and reducing risk with more frequent and larger-scale testing, lowering the chances of a ransomware attack and limiting the damage if one occurs.

Q: What other considerations came into play?

A few other factors influenced our decision to invest in Automated Security Validation. Employee retention was a big one. Like I said before, automating repetitive tasks kept our cybersecurity experts focused on more challenging, impactful work, which I believe has helped us retain their talent. Improvement in security operations was another point. Pentera helps us ensure our controls are properly tuned and validated, and it also helps coordination between red teams, blue teams, and the SOC. From a compliance standpoint, it made it easier to compile evidence for audits, allowing us to get through the process much faster than we would otherwise. Finally, cyber insurance is another area where Pentera has added financial value by enabling us to lower our premiums.

Q: Any advice to other security professionals trying to get a budget for security validation?

The performance value of Automated Security Validation is clear. Most organizations don't have the internal resources to conduct mature red teaming. Whether you have a small security team or a mature offensive security practice like we do at DTCC, it's very likely that you do not have enough security expert resources to do a full assessment. And if you don't find anything, no proof of a malicious insider in your network, you can't demonstrate resilience, making it harder to achieve regulatory compliance. With Pentera, you have built-in TTPs, giving you a direct path to assess how well your organization responds to threats. Based on that validation, you can harden your infrastructure and address discovered vulnerabilities. The alternative, doing nothing, is far riskier.
A breach can result in stolen IP, lost data, and potentially shut-down operations. The cost of the tool, on the other hand, buys peace of mind: you know you've reduced your exposure to real-world threats, and you can sleep better at night.

Watch the full on-demand webinar with Shawn Baird, Associate Director of Offensive Security & Red Teaming at DTCC, and Pentera Field CISO Jason Mar-Tang.

This article is a contributed piece from one of our valued partners.
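The risk-reduction argument in the interview above reduces to simple expected-loss arithmetic. A hedged sketch follows; every figure below is an illustrative assumption rather than DTCC's or IBM's actual numbers, apart from the 11% breach-cost reduction cited from IBM's report:

```python
# Annualized loss expectancy (ALE): expected yearly loss from a breach.
def ale(probability_of_breach, cost_of_breach):
    return probability_of_breach * cost_of_breach

# Assumed baseline: 20% annual breach probability, $4.5M breach cost.
baseline = ale(0.20, 4_500_000)

# With continuous validation: assume lower breach probability (15%, an
# illustrative figure) and the ~11% cost reduction IBM reports for
# proactive risk management.
with_validation = ale(0.15, 4_500_000 * 0.89)

savings = baseline - with_validation
print(f"Baseline ALE:      ${baseline:,.0f}")
print(f"With validation:   ${with_validation:,.0f}")
print(f"Expected reduction: ${savings:,.0f}")
```

Under these assumptions the expected annual loss drops from $900,000 to about $600,750, which is the kind of concrete number boards respond to when weighing a platform's price against it.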
  • THEHACKERNEWS.COM
    Security Flaws in Popular ML Toolkits Enable Server Hijacks, Privilege Escalation
Nov 11, 2024 | Ravie Lakshmanan | Machine Learning / Vulnerability

Cybersecurity researchers have uncovered nearly two dozen security flaws spanning 15 different machine learning (ML) related open-source projects. These comprise vulnerabilities discovered on both the server- and client-side, software supply chain security firm JFrog said in an analysis published last week. The server-side weaknesses "allow attackers to hijack important servers in the organization such as ML model registries, ML databases and ML pipelines," it said. The vulnerabilities, discovered in Weave, ZenML, Deep Lake, Vanna.AI, and Mage AI, have been broken down into broader sub-categories that allow for remotely hijacking model registries and ML database frameworks, and taking over ML pipelines.

A brief description of the identified flaws is below:
- CVE-2024-7340 (CVSS score: 8.8) - A directory traversal vulnerability in the Weave ML toolkit that allows for reading files across the whole filesystem, effectively allowing a low-privileged authenticated user to escalate their privileges to an admin role by reading a file named "api_keys.ibd" (addressed in version 0.50.8)
- An improper access control vulnerability in the ZenML MLOps framework that allows a user with access to a managed ZenML server to elevate their privileges from a viewer to full admin privileges, granting the attacker the ability to modify or read the Secret Store (no CVE identifier)
- CVE-2024-6507 (CVSS score: 8.1) - A command injection vulnerability in the Deep Lake AI-oriented database that allows attackers to inject system commands when uploading a remote Kaggle dataset due to a lack of proper input sanitization (addressed in version 3.9.11)
- CVE-2024-5565 (CVSS score: 8.1) - A prompt injection vulnerability in the Vanna.AI library that could be exploited to achieve remote code execution on the underlying host
- CVE-2024-45187 (CVSS score: 7.1) - An incorrect privilege assignment vulnerability that allows guest users in the Mage AI framework to remotely execute arbitrary code through the Mage AI terminal server, because they are assigned high privileges and remain active for a default period of 30 days despite deletion

"Since MLOps pipelines may have access to the organization's ML Datasets, ML Model Training and ML Model Publishing, exploiting an ML pipeline can lead to an extremely severe breach," JFrog said. "Each of the attacks mentioned in this blog (ML Model backdooring, ML data poisoning, etc.) may be performed by the attacker, depending on the MLOps pipeline's access to these resources."

The disclosure comes over two months after the company uncovered more than 20 vulnerabilities that could be exploited to target MLOps platforms. It also follows the release of a defensive framework codenamed Mantis that leverages prompt injection to counter cyber attacks driven by large language models (LLMs), with more than 95% effectiveness. "Upon detecting an automated cyber attack, Mantis plants carefully crafted inputs into system responses, leading the attacker's LLM to disrupt their own operations (passive defense) or even compromise the attacker's machine (active defense)," a group of academics from George Mason University said. "By deploying purposefully vulnerable decoy services to attract the attacker and using dynamic prompt injections for the attacker's LLM, Mantis can autonomously hack back the attacker."
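The Weave flaw above (CVE-2024-7340) belongs to the directory-traversal class, and the generic defense is the same everywhere: resolve the user-supplied path and verify it stays inside an allowed base directory. Here is a minimal sketch; the function name and paths are illustrative, not Weave's actual API:

```python
from pathlib import Path

def safe_resolve(base_dir: str, user_path: str) -> Path:
    """Resolve a user-supplied path, rejecting anything escaping base_dir.

    This is the generic defense against directory-traversal bugs, where
    crafted paths (e.g. "../../") let a low-privileged user read files
    such as API-key stores anywhere on the filesystem.
    """
    base = Path(base_dir).resolve()
    target = (base / user_path).resolve()
    if base not in target.parents and target != base:
        raise ValueError(f"path escapes base directory: {user_path}")
    return target

print(safe_resolve("/srv/files", "reports/q3.txt"))  # stays inside /srv/files
try:
    safe_resolve("/srv/files", "../../etc/passwd")   # traversal attempt
except ValueError as e:
    print("blocked:", e)
```

Note that the check happens after `resolve()`, so symlink tricks and `..` sequences are normalized before the containment test; comparing raw strings before resolution is a common way to get this wrong.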
  • THEHACKERNEWS.COM
    HPE Issues Critical Security Patches for Aruba Access Point Vulnerabilities
Nov 11, 2024 | Ravie Lakshmanan | Vulnerability / Risk Mitigation

Hewlett Packard Enterprise (HPE) has released security updates to address multiple vulnerabilities impacting Aruba Networking Access Point products, including two critical bugs that could result in unauthenticated command execution.

The flaws affect Access Points running Instant AOS-8 and AOS-10:
- AOS-10.4.x.x: 10.4.1.4 and below
- Instant AOS-8.12.x.x: 8.12.0.2 and below
- Instant AOS-8.10.x.x: 8.10.0.13 and below

The most severe among the six newly patched vulnerabilities are CVE-2024-42509 (CVSS score: 9.8) and CVE-2024-47460 (CVSS score: 9.0), two critical unauthenticated command injection flaws in the CLI service that could result in the execution of arbitrary code. "Command injection vulnerability in the underlying CLI service could lead to unauthenticated remote code execution by sending specially crafted packets destined to the PAPI (Aruba's Access Point management protocol) UDP port (8211)," HPE said in an advisory for both flaws. "Successful exploitation of this vulnerability results in the ability to execute arbitrary code as a privileged user on the underlying operating system."

To mitigate CVE-2024-42509 and CVE-2024-47460 on devices running Instant AOS-8, it's advised to enable cluster security via the cluster-security command. For AOS-10 devices, however, the company recommends blocking access to UDP port 8211 from all untrusted networks.

Also resolved by HPE are four other vulnerabilities:
- CVE-2024-47461 (CVSS score: 7.2) - An authenticated arbitrary remote command execution (RCE) flaw in Instant AOS-8 and AOS-10
- CVE-2024-47462 and CVE-2024-47463 (CVSS scores: 7.2) - Arbitrary file creation vulnerabilities in Instant AOS-8 and AOS-10 that lead to authenticated remote command execution
- CVE-2024-47464 (CVSS score: 6.8) - An authenticated path traversal vulnerability that leads to remote unauthorized access to files

As workarounds, users are urged to restrict access to the CLI and web-based management interfaces by placing them within a dedicated VLAN and controlling them via firewall policies at layer 3 and above. "Although Aruba Network access points have not previously been reported as exploited in the wild, they are an attractive target for threat actors due to the potential access these vulnerabilities could provide through privileged user RCE," Arctic Wolf said. "Additionally, threat actors may attempt to reverse-engineer the patches to exploit unpatched systems in the near future."
  • WWW.INFORMATIONWEEK.COM
    Next Steps to Secure Open Banking Beyond Regulatory Compliance
    Final rules from the Consumer Financial Protection Bureau further the march towards open banking. What will it take to keep such data sharing secure?
  • WWW.INFORMATIONWEEK.COM
    Getting a Handle on AI Hallucinations
John Edwards, Technology Journalist & Author | November 11, 2024 | 4 Min Read | Carloscastilla via Alamy Stock Photo

AI hallucination occurs when a large language model (LLM) -- frequently a generative AI chatbot or computer vision tool -- perceives patterns or objects that are nonexistent or imperceptible to human observers, generating outputs that are either inaccurate or nonsensical.

AI hallucinations can pose a significant challenge, particularly in high-stakes fields where accuracy is crucial, such as the energy industry, life sciences and healthcare, technology, finance, and legal sectors, says Beena Ammanath, head of technology trust and ethics at business advisory firm Deloitte. With generative AI's emergence, the importance of validating outputs has become even more critical for risk mitigation and governance, she states in an email interview. "While AI systems are becoming more advanced, hallucinations can undermine trust and, therefore, limit the widespread adoption of AI technologies."

Primary Causes

AI hallucinations are primarily caused by the nature of generative AI and LLMs, which rely on vast amounts of data to generate predictions, Ammanath says. "When the AI model lacks sufficient context, it may attempt to fill in the gaps by creating plausible sounding, but incorrect, information." This can occur due to incomplete training data, bias in the training data, or ambiguous prompts, she notes.

LLMs are generally trained for specific tasks, such as predicting the next word in a sequence, observes Swati Rallapalli, a senior machine learning research scientist in the AI division of the Carnegie Mellon University Software Engineering Institute. "These models are trained on terabytes of data from the Internet, which may include uncurated information," she explains in an online interview.
"When generating text, the models produce outputs based on the probabilities learned during training, so outputs can be unpredictable and misrepresent facts."

Detection Approaches

Depending on the specific application, hallucination metrics tools, such as AlignScore, can be trained to capture the similarity between two text inputs. Yet automated metrics don't always work effectively. "Using multiple metrics together, such as AlignScore with metrics like BERTScore, may improve the detection," Rallapalli says. Another established way to minimize hallucinations is retrieval augmented generation (RAG), in which the model references text from established databases relevant to the output. "There's also research in the area of fine-tuning models on curated datasets for factual correctness," Rallapalli says.

Yet even using multiple existing metrics may not fully guarantee hallucination detection, so further research is needed to develop more effective metrics for detecting inaccuracies, Rallapalli says. "For example, comparing multiple AI outputs could detect if there are parts of the output that are inconsistent across different outputs or, in case of summarization, chunking up the summaries could better detect if the different chunks are aligned with facts within the original article." Such methods could help detect hallucinations better, she notes.

Ammanath believes that detecting AI hallucinations requires a multi-pronged approach. She notes that human oversight, in which AI-generated content is reviewed by experts who can cross-check facts, is sometimes the only reliable way to curb hallucinations. "For example, if using generative AI to write a marketing e-mail, the organization might have a higher tolerance for error, as faults or inaccuracies are likely to be easy to identify and the outcomes are lower stakes for the enterprise," Ammanath explains.
Yet when it comes to applications that involve mission-critical business decisions, error tolerance must be low. "This makes a 'human-in-the-loop', someone who validates model outputs, more important than ever before."

Hallucination Training

The best way to minimize hallucinations is to build your own pre-trained foundational generative AI model, advises Scott Zoldi, chief AI officer at credit scoring service FICO. He notes, via email, that many organizations are now already using, or planning to use, this approach with focused-domain and task-based models. "By doing so, one can have critical control of the data used in pre-training -- where most hallucinations arise -- and can constrain the use of context augmentation to ensure that such use doesn't increase hallucinations but reinforces relationships already in the pre-training."

Outside of building your own focused generative models, one needs to minimize the harm created by hallucinations, Zoldi says. "[Enterprise] policy should prioritize a process for how the output of these tools will be used in a business context and then validate everything," he suggests.

A Final Thought

To prepare the enterprise for a bold and successful future with generative AI, it's necessary to understand the nature and scale of the risks, as well as the governance tactics that can help mitigate them, Ammanath says. "AI hallucinations help to highlight both the power and limitations of current AI development and deployment."

About the Author

John Edwards, Technology Journalist & Author

John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct.
John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.
  • WWW.INFORMATIONWEEK.COM
    How IT Can Show Business Value From GenAI Investments
    Nishad Acharya, Head of Talent Network, Turing
November 11, 2024 | 4 Min Read
NicoElNino via Alamy Stock

As IT leaders, we're facing increasing pressure to prove that our generative AI investments translate into measurable and meaningful business outcomes. It's not enough to adopt the latest cutting-edge technology; we have a responsibility to show that AI delivers tangible results that directly support our business objectives.

To truly maximize ROI from GenAI, IT leaders need to take a strategic approach -- one that seamlessly integrates AI into business operations, aligns with organizational goals, and generates quantifiable outcomes. Let's explore advanced strategies for overcoming GenAI implementation challenges, integrating AI with existing systems, and measuring ROI effectively.

Key Challenges in Implementing GenAI

Integrating GenAI into enterprise systems isn't always straightforward. There are several hurdles IT leaders face, especially surrounding data and system complexity.

Data governance and infrastructure. AI is only as good as the data it's trained on. Strong data governance enforces better accuracy and compliance, especially when AI models are trained on vast, unstructured data sets. Building AI-friendly infrastructure that can handle both the scale and complexity of AI data pipelines is another challenge, as these systems must be resilient and adaptable.

Model accuracy and hallucinations. GenAI models can produce non-deterministic results, sometimes generating content that is inaccurate or entirely fabricated. Unlike traditional software with clear input-output relationships that can be unit-tested, GenAI models require a different approach to validation. This issue introduces risks that must be carefully managed through model testing, fine-tuning, and human-in-the-loop feedback.

Security, privacy, and legal concerns. The widespread use of publicly and privately sourced data in training GenAI models raises critical security and legal questions.
Enterprises must navigate evolving legal landscapes. Data privacy and security concerns must also be addressed to avoid potential breaches or legal issues, especially when dealing with heavily regulated industries like finance or healthcare.

Strategies for Measuring and Maximizing AI ROI

Adopting a comprehensive, metrics-driven approach to AI implementation is necessary for assessing your investment's business impact. To ensure GenAI delivers meaningful business results, here are some effective strategies:

Define high-impact use cases and objectives: Start with clear, measurable objectives that align with core business priorities. Whether it's improving operational efficiency or streamlining customer support, identifying use cases with direct business relevance ensures AI projects are focused and impactful.

Quantify both tangible and intangible benefits: Beyond immediate cost savings, GenAI drives value through intangible benefits like improved decision-making or customer satisfaction. Quantifying these benefits gives a fuller picture of the overall ROI.

Focus on getting the use case right before optimizing costs: LLMs are still evolving. First use the best (and likely most expensive) model, prove that the LLM can achieve the end goal, and then identify ways to reduce the cost to serve that use case. This ensures the business need is not left unmet.

Run pilot programs before full rollout: Test AI in controlled environments first to validate use cases and refine your ROI model. Pilot programs allow organizations to pinpoint areas where AI delivers the greatest value, and to learn, iterate, and de-risk before full-scale deployment.

Track and optimize costs throughout the lifecycle: One of the most overlooked elements of AI ROI is the hidden costs of data preparation, integration, and maintenance, which can spiral if left unchecked.
IT leaders should continuously monitor expenses related to infrastructure, data management, training, and human resources.

Continuous monitoring and feedback: AI performance should be tracked continuously against KPIs and adjusted based on real-world data. Regular feedback loops allow for continuous fine-tuning, ensuring your investment aligns with evolving business needs and delivers sustained value.

Overcoming GenAI Implementation Roadblocks

Successful GenAI implementations depend on more than adopting the right technology -- they require an approach that maximizes value while minimizing risk. For most IT leaders, success depends on addressing challenges like data quality, model reliability, and organizational alignment. Here's how to overcome common implementation hurdles:

Align AI with high-impact business goals. GenAI projects should directly support business objectives and deliver sustainable value, like streamlining operations, cutting costs, or generating new revenue streams. Define priorities based on their impact and feasibility.

Prioritize data integrity. Poor data quality prevents effective AI. Take time to establish data governance protocols from the start to manage privacy, compliance, and integrity while minimizing risk tied to faulty data.

Start with pilot projects. Pilot projects allow you to test and iterate on real-world impact before committing to large-scale rollouts. They offer valuable insights and mitigate risk.

Monitor and measure continuously. Ongoing performance tracking ensures AI remains aligned with evolving business goals. Continuous adjustments are key for maximizing long-term value.

About the Author

Nishad Acharya, Head of Talent Network, Turing

Nishad Acharya leads initiatives focused on the acquisition and experience of the 3M global professionals on Turing's Talent Cloud. At Turing, he has led critical roles in Strategy and Product that helped scale the company to a Unicorn.
With a B.Tech from IIT Madras and an MBA from Wharton, Nishad has a strong foundation in both technology and business. Previously, he led strategy & digital transformation projects at The Boston Consulting Group. Nishad brings a passion for AI and expertise in tech services coupled with extensive experience in sectors like financial services and energy.
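The article's advice to weigh tangible and intangible benefits against full lifecycle costs can be made concrete with a small model. This is a hypothetical sketch, not a standard framework: the cost categories mirror those named above (infrastructure, data preparation, integration, maintenance, training), and all figures are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class GenAIInvestment:
    """Illustrative lifecycle cost/benefit model for one GenAI use case.

    Field names and example figures are hypothetical, chosen to match
    the cost categories discussed in the article.
    """
    infrastructure: float = 0.0
    data_preparation: float = 0.0
    integration: float = 0.0
    maintenance: float = 0.0
    training: float = 0.0            # staff training and model fine-tuning
    tangible_benefits: float = 0.0   # e.g. measured cost savings
    intangible_benefits: float = 0.0 # e.g. estimated value of faster decisions

    @property
    def total_cost(self) -> float:
        """Sum of all lifecycle cost categories."""
        return (self.infrastructure + self.data_preparation +
                self.integration + self.maintenance + self.training)

    @property
    def roi(self) -> float:
        """Net benefit as a fraction of total lifecycle cost."""
        benefit = self.tangible_benefits + self.intangible_benefits
        return (benefit - self.total_cost) / self.total_cost

# Hypothetical pilot-program figures
pilot = GenAIInvestment(infrastructure=50_000, data_preparation=30_000,
                        integration=20_000, maintenance=10_000, training=15_000,
                        tangible_benefits=120_000, intangible_benefits=40_000)
print(f"Total cost: {pilot.total_cost:,.0f}, ROI: {pilot.roi:.0%}")
```

Tracking a structure like this per use case, and updating it as pilot data replaces estimates, is one way to operationalize the continuous cost monitoring the article recommends.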
  • WEWORKREMOTELY.COM
    UNICEF: Senior Full Stack Developer, UNICEF Office of Innovation, 12 Months, Remote
    Exciting job opportunity: UNICEF's Office of Innovation is looking for two Senior Full-Stack Developers to take the engineering lead on an ambitious project -- The Learning Cabinet! This online platform connects education decision-makers worldwide with curated EdTech solutions tailored to their unique contexts.

What You'll Do: As a Senior Full-Stack Developer, you'll spearhead a headless Drupal and Next.js platform deployed on Cloudflare, empowering education decision-makers to access EdTech tools that will make a tangible difference in children's learning outcomes. You'll collaborate with an agile, interdisciplinary team to come up with innovative solutions and implement exciting value propositions -- all geared towards impactful change.

What's in it for You? Be part of a global team at the forefront of tech innovation for social good. Use your expertise to shape an MVP into a scalable solution that can help reach millions of children and solve a global learning crisis. Work remotely with a passionate team and join us for a 3-day design sprint in beautiful Helsinki, Finland!

Are you ready to use your skills to reimagine education for every child? Apply today, and let's make education a transformative journey for all!

Terms of Reference - developer post 1
  • WEWORKREMOTELY.COM
    Filestage: Chief Revenue Officer (CRO)
    Time zones: SBT (UTC +11), GMT (UTC +0), CET (UTC +1), EET (UTC +2), MSK (UTC +3)

About Filestage

Filestage is the online proofing software for brands in regulated industries, where the consequences of missed feedback are highest. People are creating content in more ways than ever, and managing all this over email can be chaos. So our platform gives organizations a central quality control hub for reviewing and approving all their human- and AI-generated content. This makes sure every print and digital asset is compliant before it goes out the door, freeing teams up to focus on delivering their best and most creative work.

We're a fully remote team with people working from home offices, co-working spaces, and coffee shops worldwide. Together, we're on a mission to create a seamless approval process that helps people deliver their best work.

We have over half a million users across 800+ companies, including Sharp, LG, Publicis, GroupM, and Emirates. So if you're looking for an ambitious startup in a booming market, you've found it!

This is your opportunity as our CRO

We're an ambitious team, aiming to become a category leader in a growing market. We've built a strong foundation with a solid inbound channel, a loved product, and healthy revenue retention. And as AI starts transforming the lives of our customers, we're perfectly placed to take our growth to the next level. This is your opportunity to help us build effective acquisition channels, level up our teams and operations, and shape our company strategy to become the go-to solution in our market.

At Filestage, you will:

Play a key role in shaping the future of our category-leading SaaS product. This is an opportunity to influence how the world's biggest brands ensure content quality in the age of AI.

Develop and implement effective strategies to acquire customers.
This involves enhancing our existing inbound funnel and building new channels to drive customer growth.

Elevate our upselling and cross-selling playbooks by collaborating and experimenting with our cross-functional teams.

Build strong relationships with key customers to drive growth, gather strategic insights, and keep a finger on the pulse of market trends.

Develop and coach our high-performing and happy teams. This involves fostering a culture of trust, providing guidance, and empowering a sense of ownership and accountability in our revenue-generating teams.

Contribute to our company's strategy as a member of the C-level team.

Life at Filestage

We believe people are more productive when they can choose their own schedule. So we're proud to offer fully remote roles that give you the perfect balance between work and life.

Work from where you're happiest and enjoy a flexible schedule. We've been fully remote from the start, giving you the opportunity to meet people all over the world and broaden your horizons. For this role, we're looking for someone based in western/central Europe to make sure we can regularly meet for strategic conversations.

Meet up in real life. We all travel together at least once a year for our full team retreat to have fun and get to know each other. Additionally, we meet more regularly with our C-level and leadership team for strategic sessions.

Enjoy a strong team culture. We're a group of knowledge seekers, reflective thinkers, clear communicators, goal owners, problem solvers, and team players. These are the values we strive for to help us achieve our mission.

Join a happy team. We've been rated five stars on Glassdoor by our happy and high-performing team. You can take a look at our reviews here.

Create a workspace that suits you. You'll get a budget for hardware, as well as for working from home, to buy whatever you need to do your best work, including a computer, webcam, or standing desk.

Get 36 days of holiday.
Plenty of time for city breaks, summer escapes, and everything in between. You'll also get a half day on your birthday to give you a chance to celebrate!

Continue to grow and develop your career. We care about your development and want you to be able to learn new things! After six months in the company, you'll get a budget to use for personal development.

Make your voice heard. We trust our team members to make the best decisions to achieve their goals, so you won't have to put up with micromanagers here.

Say goodbye to pointless meetings. We practice what we preach when it comes to productivity, so you can expect flat hierarchies, fast iterations, and no-bullshit meetings.

What you'll bring to the role

You have experience in a revenue-generating leadership role within B2B SaaS. Now you're looking for a new and exciting challenge that hugely impacts how people work.

We're looking for someone who:

Has a deep understanding of customer acquisition and growth. You excel at crafting sharp strategies, managing teams, and implementing reliable processes to drive sustainable growth.

Has a proven track record of contributing to significant growth in SaaS companies. You have experience in revenue-generating leadership roles where you have helped achieve and surpass $10 million ARR.

Is hands-on. You're happy, willing, and able to roll up your sleeves and directly engage with key customers, address deal blockers, and develop your team, while also working strategically as part of the C-level.

Is passionate about PLG. You understand and fully believe in the value of a product-led growth model and can effectively integrate it into your sales strategy.

Works well with lots of questions and few answers. No problem is too big or too hard. You are most productive when ambitious goals are clearly set and you can choose your own path to reach them.

Is an entrepreneur at heart, driven by a relentless pursuit of results and a thirst for knowledge.
You're always seeking ways to improve, adapting your strategies, and seizing growth opportunities.

Is a strong communicator and collaborator. You can effectively communicate with and collaborate across a distributed team.
  • WWW.TECHNOLOGYREVIEW.COM
    Science and technology stories in the age of Trump
    Rather than analyzing the news this week, I thought I'd lift the hood a bit on how we make it. I've spent most of this year being pretty convinced that Donald Trump would be the 47th president of the United States. Even so, like most people, I was completely surprised by the scope of his victory. By taking the lion's share not just in the Electoral College but also the popular vote, coupled with the wins in the Senate (and, as I write this, seemingly the House) and ongoing control of the courts, Trump has done far more than simply eke out a win. This level of victory will certainly provide the political capital to usher in a broad sweep of policy changes.

Some of these changes will be well outside our lane as a publication. But very many of President-elect Trump's stated policy goals will have direct impacts on science and technology. Some of the proposed changes would have profound effects on the industries and innovations we've covered regularly, and for years. When he talks about his intention to end EV subsidies, hit the brakes on FTC enforcement actions on Big Tech, ease the rules on crypto, or impose a 60 percent tariff on goods from China, these are squarely in our strike zone, and we would be remiss not to explore the policies and their impact in detail.

And so I thought I would share some of my remarks from our edit meeting on Wednesday morning, when we woke up to find out that the world had indeed changed. I think it's helpful for our audience if we are transparent and upfront about how we intend to operate, especially over the next several months, which will likely be, well, chaotic.

This is a moment when our jobs are more important than ever. There will be so much noise and heat out there in the coming weeks and months, and maybe even years. The next six months in particular will be a confusing time for a lot of people. We should strive to be the signal in that noise.
We have extremely important stories to write about the role of science and technology in the new administration. There are obvious stories for us to take on in regards to climate, energy, vaccines, women's health, IVF, food safety, chips, China, and I'm sure a lot more, that people are going to have all sorts of questions about. Let's start by making a list of questions we have ourselves.

Some of the people and technologies we cover will be ascendant in all sorts of ways. We should interrogate that power. It's important that we take care in those stories not to be speculative or presumptive. To always have the facts buttoned up. To speak the truth and be unassailable in doing so.

Do we drop everything and only cover this? No. But it will certainly be a massive story that affects nearly all others. This election will be a transformative moment for society and the world. Trump didn't just win, he won a mandate. And he's going to change the country and the global order as a result. The next few weeks will see so much speculation as to what it all means. So much fear, uncertainty, and doubt. There is an enormous amount of bullshit headed down the line. People will be hungry for sources they can trust. We should be there for that. Let's leverage our credibility, not squander it. We are not the resistance. We just want to tell the truth. So let's take a breath, and then go out there and do our jobs.

I like to tell our reporters and editors that our coverage should be free from either hype or cynicism. I think that's especially true now.

I'm also very interested to hear from our readers: What questions do you have? What are the policy changes or staffing decisions you are curious about? Please drop me a line at mat.honan@technologyreview.com. I'm eager to hear from you. If someone forwarded you this edition of The Debrief, you can subscribe here.
Now read the rest of The Debrief

The News

Palmer Luckey, who was ousted from Facebook over his support for the last Trump administration and went into defense contracting, is poised to grow in influence under a second administration. He recently talked to MIT Technology Review about how the Pentagon is using mixed reality.

What does Donald Trump's relationship with Elon Musk mean for the global EV industry?

The Biden administration was perceived as hostile to crypto. The industry can likely expect friendlier waters under Trump.

Some counter-programming: Life-seeking robots could punch through Europa's icy surface.

And for one more big take that's not related to the election: AI vs. quantum. AI could solve some of the most interesting scientific problems before big quantum computers become a reality.

The Chat

Every week I'll talk to one of MIT Technology Review's reporters or editors to find out more about what they've been working on. This week, I chatted with Melissa Heikkilä about her story on how ChatGPT search paves the way for AI agents.

Mat: Melissa, OpenAI rolled out web search for ChatGPT last week. It seems pretty cool. But you got at a really interesting bigger-picture point about it paving the way for agents. What does that mean?

Melissa: Microsoft tried to chip away at Google's search monopoly with Bing, and that didn't really work. It's unlikely OpenAI will be able to make much difference either. Their best bet is to try to get users used to a new way of finding information and browsing the web through virtual assistants that can do complex tasks. Tech companies call these agents. ChatGPT's usefulness is limited by the fact that it can't access the internet and doesn't have the most up-to-date information.
By integrating a really powerful search engine into the chatbot, suddenly you have a tool that can help you plan things and find information in a far more comprehensive and immersive way than traditional search, and this is a key feature of the next generation of AI assistants.

Mat: What will agents be able to do?

Melissa: AI agents can complete complex tasks autonomously, and the vision is that they will work as a human assistant would: book your flights, reschedule your meetings, help with research, you name it. But I wouldn't get too excited yet. The cutting edge of AI tech can retrieve information and generate stuff, but it still lacks the reasoning and long-term planning skills to be really useful. AI tools like ChatGPT and Claude also can't interact with computer interfaces, like clicking at stuff, very well. They also need to become a lot more reliable and stop making stuff up, which is still a massive problem with AI. So we're still a long way away from the vision becoming reality! I wrote an explainer on agents a little while ago with more details.

Mat: Is search as we know it going away? Are we just moving to a world of agents that not only answer questions but also accomplish tasks?

Melissa: It's really hard to say. We are so used to using online search, and it's surprisingly hard to change people's behaviors. Unless agents become super reliable and powerful, I don't think search is going to go away.

Mat: By the way, I know you are in the UK. Did you hear we had an election over here in the US?

Melissa: LOL

The Recommendation

I'm just back from a family vacation in New York City, where I was in town to run the marathon. (I get to point this out for like one or two more weeks before the bragging gets tedious, I think.) While there, we went to see The Outsiders. Chat, it was incredible. (Which maybe should go without saying, given that it won the Tony for best musical.) But wow. I loved the book and the movie as a kid. But this hit me on an entirely other level.
I'm not really a cries-at-movies (or especially at musicals) kind of person, but I was wiping my eyes for much of the second act. So were very many people sitting around me. Anyway. If you're in New York, or if it comes to your city, go see it. And until then, the soundtrack is pretty amazing on its own. (Here's a great example.)