Obsessed with covering transformative technology.
Recent Posts
-
VENTUREBEAT.COM
Salesforce launches Agentforce Testing Center to put agents through their paces

The next phase of agentic AI may just be evaluation and monitoring, as enterprises want to make the agents they're beginning to deploy more observable. While AI agent benchmarks can be misleading, there's a lot of value in seeing whether an agent is working the way they want it to. To this end, companies are beginning to offer platforms where customers can sandbox AI agents or evaluate their performance.

Salesforce released its agent evaluation platform, Agentforce Testing Center, in a limited pilot Wednesday. General availability is expected in December. Testing Center lets enterprises observe and prototype AI agents to ensure they access the workflows and data they need. Testing Center's new capabilities include AI-generated tests for Agentforce, Sandboxes for Agentforce and Data Cloud, and monitoring and observability for Agentforce.

AI-generated tests let companies use AI models to generate hundreds of synthetic interactions to test how often agents answer the way companies want. As the name suggests, sandboxes offer an isolated environment to test agents while mirroring a company's data to better reflect how the agent will work for them. Monitoring and observability let enterprises carry an audit trail from the sandbox into production.

Patrick Stokes, executive vice president of product and industries marketing at Salesforce, told VentureBeat that the Testing Center is part of a new class of agents the company calls Agent Lifecycle Management. "We are positioning what we think will be a big new subcategory of agents," Stokes said. "When we say lifecycle, we mean the whole thing, from genesis to development all the way through deployment, and then iterations of your deployment as you go forward."

Stokes said that right now, the Testing Center doesn't offer workflow-specific insights where developers can see the specific choices in API, data or model the agents used. However, Salesforce collects that kind of data on its Einstein Trust Layer. "What we're doing is building developer tools to expose that metadata to our customers so that they can actually use it to better build their agents," Stokes said.

Salesforce is hanging its hat on AI agents, focusing much of its energy on its agentic offering, Agentforce. Salesforce customers can use preset agents or build customized agents on Agentforce to connect to their instances.

Evaluating agents

AI agents touch many points in an organization, and since good agentic ecosystems aim to automate a big chunk of workflows, making sure they work well becomes essential. If an agent decides to tap the wrong API, it could spell disaster for a business. AI agents are stochastic in nature, like the models that power them, and consider each potential probability before coming up with an outcome. Stokes said Salesforce tests agents by barraging them with versions of the same utterances or questions. Responses are scored as pass or fail, allowing the agent to learn and evolve within a safe environment that human developers can control.

Platforms that help enterprises evaluate AI agents are fast becoming a new type of product offering. In June, customer experience AI company Sierra launched an AI agent benchmark called TAU-bench to look at the performance of conversational agents.
Automation company UiPath released its Agent Builder platform in October, which also offers a means to evaluate agent performance before full deployment.

Testing AI applications is nothing new. Beyond benchmarking model performance, many AI model platforms such as AWS Bedrock and Microsoft Azure already let customers test foundation models in a controlled environment to see which one works best for their use cases.
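As a rough illustration of the pass/fail, utterance-variation testing approach Stokes describes, here is a minimal sketch; the agent call and the scoring rule are hypothetical stand-ins, not Salesforce's Testing Center API:

```python
# Toy harness: send paraphrased versions of the same question to an agent
# and score each response pass/fail. The agent and the check are placeholders.
UTTERANCE_VARIANTS = [
    "Where is my order?",
    "Can you tell me where my order is?",
    "What's the status of my shipment?",
]

def call_agent(utterance: str) -> str:
    # Stand-in for a call to a deployed agent endpoint.
    return "Your order shipped yesterday and should arrive Friday."

def passes(response: str) -> bool:
    # Stand-in for an evaluation rule (keyword check, LLM judge, etc.).
    return "shipped" in response.lower() or "order" in response.lower()

results = [passes(call_agent(u)) for u in UTTERANCE_VARIANTS]
print(f"pass rate: {sum(results)}/{len(results)}")
```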
-
VENTUREBEAT.COM
Co-dev studio Blind Squirrel reveals in-house original IP Cosmorons

Blind Squirrel Games, previously best known for its co-development work, has revealed Cosmorons, a new original IP.
-
VENTUREBEAT.COM
DeepSeek's first reasoning model R1-Lite-Preview turns heads, beating OpenAI o1 performance

DeepSeek, an AI offshoot of Chinese quantitative hedge fund High-Flyer Capital Management focused on releasing high-performance open-source tech, has unveiled the R1-Lite-Preview, its latest reasoning-focused large language model (LLM), available for now exclusively through DeepSeek Chat, its web-based AI chatbot.

Known for its innovative contributions to the open-source AI ecosystem, DeepSeek's new release aims to bring high-level reasoning capabilities to the public while maintaining its commitment to accessible and transparent AI. And the R1-Lite-Preview, despite only being available through the chat application for now, is already turning heads by offering performance nearing and in some cases exceeding OpenAI's vaunted o1-preview model.

Like that model, released in September 2024, DeepSeek-R1-Lite-Preview exhibits chain-of-thought reasoning, showing the user the different chains or trains of thought it goes down to respond to their queries and inputs, documenting the process by explaining what it is doing and why. While some of the chains/trains of thought may appear nonsensical or even erroneous to humans, DeepSeek-R1-Lite-Preview appears on the whole to be strikingly accurate, even answering trick questions that have tripped up other, older yet powerful AI models such as GPT-4o and Anthropic's Claude family, including "how many letter Rs are in the word 'strawberry'?" and "which is larger, 9.11 or 9.9?" See screenshots below of my tests of these prompts on DeepSeek Chat.

A new approach to AI reasoning

DeepSeek-R1-Lite-Preview is designed to excel in tasks requiring logical inference, mathematical reasoning and real-time problem-solving. According to DeepSeek, the model exceeds OpenAI o1-preview-level performance on established benchmarks such as AIME (American Invitational Mathematics Examination) and MATH.

DeepSeek-R1-Lite-Preview benchmark results posted on X.

Its reasoning capabilities are enhanced by its transparent thought process, allowing users to follow along as the model tackles complex challenges step by step. DeepSeek has also published scaling data, showcasing steady accuracy improvements when the model is given more time or "thought tokens" to solve problems. Performance graphs highlight its proficiency in achieving higher scores on benchmarks such as AIME as thought depth increases.

Benchmarks and Real-World Applications

DeepSeek-R1-Lite-Preview has performed competitively on key benchmarks. The company's published results highlight its ability to handle a wide range of tasks, from complex mathematics to logic-based scenarios, earning performance scores that rival top-tier models in reasoning benchmarks like GPQA and Codeforces. The transparency of its reasoning process further sets it apart.
Users can observe the model's logical steps in real time, adding an element of accountability and trust that many proprietary AI systems lack. However, DeepSeek has not yet released the full code for independent third-party analysis or benchmarking, nor has it yet made DeepSeek-R1-Lite-Preview available through an API that would allow the same kind of independent tests. In addition, the company has not yet published a blog post or a technical paper explaining how DeepSeek-R1-Lite-Preview was trained or architected, leaving many question marks about its underlying origins.

Accessibility and Open-Source Plans

The R1-Lite-Preview is now accessible through DeepSeek Chat at chat.deepseek.com. While free for public use, the model's advanced "Deep Think" mode has a daily limit of 50 messages, offering ample opportunity for users to experience its capabilities. Looking ahead, DeepSeek plans to release open-source versions of its R1 series models and related APIs, according to the company's posts on X.

This move aligns with the company's history of supporting the open-source AI community. Its previous release, DeepSeek-V2.5, earned praise for combining general language processing and advanced coding capabilities, making it one of the most powerful open-source AI models at the time.

Building on a Legacy

DeepSeek is continuing its tradition of pushing boundaries in open-source AI. Earlier models like DeepSeek-V2.5 and DeepSeek Coder demonstrated impressive capabilities across language and coding tasks, with benchmarks placing them as leaders in the field. The release of R1-Lite-Preview adds a new dimension, focusing on transparent reasoning and scalability.

As businesses and researchers explore applications for reasoning-intensive AI, DeepSeek's commitment to openness ensures that its models remain a vital resource for development and innovation. By combining high performance, transparent operations and open-source accessibility, DeepSeek is not just advancing AI but also reshaping how it is shared and used. The R1-Lite-Preview is available now for public testing. Open-source models and APIs are expected to follow, further solidifying DeepSeek's position as a leader in accessible, advanced AI technologies.
-
VENTUREBEAT.COM
OpenScholar: The open-source A.I. that's outperforming GPT-4o in scientific research

Scientists are drowning in data. With millions of research papers published every year, even the most dedicated experts struggle to stay updated on the latest findings in their fields. A new artificial intelligence system, called OpenScholar, is promising to rewrite the rules for how researchers access, evaluate and synthesize scientific literature. Built by the Allen Institute for AI (Ai2) and the University of Washington, OpenScholar combines cutting-edge retrieval systems with a fine-tuned language model to deliver citation-backed, comprehensive answers to complex research questions.

"Scientific progress depends on researchers' ability to synthesize the growing body of literature," the OpenScholar researchers wrote in their paper. But that ability is increasingly constrained by the sheer volume of information. OpenScholar, they argue, offers a path forward: one that not only helps researchers navigate the deluge of papers but also challenges the dominance of proprietary AI systems like OpenAI's GPT-4o.

How OpenScholar's AI brain processes 45 million research papers in seconds

At OpenScholar's core is a retrieval-augmented language model that taps into a datastore of more than 45 million open-access academic papers. When a researcher asks a question, OpenScholar doesn't merely generate a response from pre-trained knowledge, as models like GPT-4o often do. Instead, it actively retrieves relevant papers, synthesizes their findings, and generates an answer grounded in those sources.

This ability to stay grounded in real literature is a major differentiator. In tests using a new benchmark called ScholarQABench, designed specifically to evaluate AI systems on open-ended scientific questions, OpenScholar excelled. The system demonstrated superior performance on factuality and citation accuracy, even outperforming much larger proprietary models like GPT-4o.

One particularly damning finding involved GPT-4o's tendency to generate fabricated citations ("hallucinations," in AI parlance). When tasked with answering biomedical research questions, GPT-4o cited nonexistent papers in more than 90% of cases. OpenScholar, by contrast, remained firmly anchored in verifiable sources.

The grounding in real, retrieved papers is fundamental. The system uses what the researchers describe as their "self-feedback inference loop" and iteratively refines its outputs through natural language feedback, which improves quality and adaptively incorporates supplementary information. The implications for researchers, policy-makers and business leaders are significant. OpenScholar could become an essential tool for accelerating scientific discovery, enabling experts to synthesize knowledge faster and with greater confidence.

How OpenScholar works: The system begins by searching 45 million research papers, uses AI to retrieve and rank relevant passages, generates an initial response, and then refines it through an iterative feedback loop before verifying citations. This process allows OpenScholar to provide accurate, citation-backed answers to complex scientific questions. | Source: Allen Institute for AI and University of Washington
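A highly simplified sketch of the retrieve, generate and self-feedback loop described in that caption; the retriever, generator and feedback steps below are toy placeholders, not OpenScholar's actual components:

```python
# Toy retrieval-augmented answering loop with self-feedback (illustrative only).
def overlap(a: str, b: str) -> int:
    return len(set(a.lower().split()) & set(b.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Stand-in for dense retrieval over millions of open-access papers.
    return sorted(corpus, key=lambda doc: -overlap(query, doc))[:k]

def generate(query: str, passages: list[str]) -> str:
    # Stand-in for the fine-tuned language model producing a cited answer.
    return f"Answer to '{query}' citing: " + "; ".join(passages)

def self_feedback(answer: str) -> list[str]:
    # Stand-in for natural-language critiques ("add supporting citations", ...).
    return [] if "citing:" in answer else ["add supporting citations"]

def answer_with_feedback(query: str, corpus: list[str], max_iters: int = 3) -> str:
    answer = generate(query, retrieve(query, corpus))
    for _ in range(max_iters):
        notes = self_feedback(answer)
        if not notes:
            break
        answer = generate(query + " " + " ".join(notes), retrieve(query, corpus))
    return answer

corpus = ["Survey of retrieval-augmented generation for science",
          "Study of citation accuracy in language models"]
print(answer_with_feedback("How accurate are model citations?", corpus))
```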
Inside the David vs. Goliath battle: Can open source AI compete with Big Tech?

OpenScholar's debut comes at a time when the AI ecosystem is increasingly dominated by closed, proprietary systems. Models like OpenAI's GPT-4o and Anthropic's Claude offer impressive capabilities, but they are expensive, opaque and inaccessible to many researchers. OpenScholar flips this model on its head by being fully open source.

The OpenScholar team has released not only the code for the language model but also the entire retrieval pipeline, a specialized 8-billion-parameter model fine-tuned for scientific tasks, and a datastore of scientific papers. "To our knowledge, this is the first open release of a complete pipeline for a scientific assistant LM, from data to training recipes to model checkpoints," the researchers wrote in their blog post announcing the system.

This openness is not just a philosophical stance; it's also a practical advantage. OpenScholar's smaller size and streamlined architecture make it far more cost-efficient than proprietary systems. For example, the researchers estimate that OpenScholar-8B is 100 times cheaper to operate than PaperQA2, a concurrent system built on GPT-4o. This cost-efficiency could democratize access to powerful AI tools for smaller institutions, underfunded labs and researchers in developing countries.

Still, OpenScholar is not without limitations. Its datastore is restricted to open-access papers, leaving out paywalled research that dominates some fields. This constraint, while legally necessary, means the system might miss critical findings in areas like medicine or engineering. The researchers acknowledge this gap and hope future iterations can responsibly incorporate closed-access content.

How OpenScholar performs: Expert evaluations show OpenScholar (OS-GPT4o and OS-8B) competing favorably with both human experts and GPT-4o across four key metrics: organization, coverage, relevance and usefulness. Notably, both OpenScholar versions were rated as more useful than human-written responses. | Source: Allen Institute for AI and University of Washington

The new scientific method: When AI becomes your research partner

The OpenScholar project raises important questions about the role of AI in science. While the system's ability to synthesize literature is impressive, it is not infallible. In expert evaluations, OpenScholar's answers were preferred over human-written responses 70% of the time, but the remaining 30% highlighted areas where the model fell short, such as failing to cite foundational papers or selecting less representative studies.

These limitations underscore a broader truth: AI tools like OpenScholar are meant to augment, not replace, human expertise. The system is designed to assist researchers by handling the time-consuming task of literature synthesis, allowing them to focus on interpretation and advancing knowledge.

Critics may point out that OpenScholar's reliance on open-access papers limits its immediate utility in high-stakes fields like pharmaceuticals, where much of the research is locked behind paywalls. Others argue that the system's performance, while strong, still depends heavily on the quality of the retrieved data. If the retrieval step fails, the entire pipeline risks producing suboptimal results. But even with its limitations, OpenScholar represents a watershed moment in scientific computing.
While earlier AI models impressed with their ability to engage in conversation, OpenScholar demonstrates something more fundamental: the capacity to process, understand and synthesize scientific literature with near-human accuracy.

The numbers tell a compelling story. OpenScholar's 8-billion-parameter model outperforms GPT-4o while being orders of magnitude smaller. It matches human experts in citation accuracy where other AIs fail 90% of the time. And perhaps most tellingly, experts prefer its answers to those written by their peers.

These achievements suggest we're entering a new era of AI-assisted research, where the bottleneck in scientific progress may no longer be our ability to process existing knowledge, but rather our capacity to ask the right questions. The researchers have released everything: code, models, data and tools, betting that openness will accelerate progress more than keeping their breakthroughs behind closed doors.

In doing so, they've answered one of the most pressing questions in AI development: Can open-source solutions compete with Big Tech's black boxes? The answer, it seems, is hiding in plain sight among 45 million papers.
-
VENTUREBEAT.COM
Snowflake beats Databricks to integrating Claude 3.5 directly

The company has partnered with Anthropic to bring the Claude 3.5 family of models to Cortex AI, its fully managed service for gen AI development.
-
VENTUREBEAT.COM
Anthropic's Computer Use mode shows strengths and limitations in new study

Claude can perform impressively complex tasks, but it will also make stupid mistakes from time to time.
-
VENTUREBEAT.COM
Goodbye cloud, hello phone: Adobe's SlimLM brings AI to mobile devices

Adobe's SlimLM, a breakthrough AI system, brings advanced document processing directly to smartphones without the need for cloud computing, offering enhanced privacy and reduced costs for businesses.
-
VENTUREBEAT.COM
The graph database arms race: How Microsoft and rivals are revolutionizing cybersecurity

Multidomain attacks are on the verge of becoming a digital epidemic as nation-states and well-funded cybercrime groups look to exploit wide gaps in digital estates' defenses. Enterprises are having to contend with widening and often unknown gaps between enterprise assets, apps, systems, data, identities and endpoints.

The fast-rising pace of attacks is driving a graph database arms race across leading cybersecurity providers. Microsoft's Security Exposure Management Platform (MSEM), introduced at Ignite 2024, reflects how quickly the arms race is maturing and why its containment requires more advanced platforms. In addition to Microsoft's MSEM, other key players in the graph database arms race for combating multidomain threats include CrowdStrike with its Threat Graph, Cisco's SecureX, SentinelOne's Purple AI, Palo Alto Networks' Cortex XDR and Trend Micro's Vision One, alongside providers like Neo4j, TigerGraph and Amazon Neptune, which supply foundational graph database technology.

"Three years ago, we were seeing 567 password-related attacks per second. Today, that number has skyrocketed to 7,000 per second. This represents a massive escalation in the scale, speed and sophistication of modern cyber threats, underscoring the urgency for proactive and unified security strategies," Vasu Sakkal, Microsoft's corporate vice president of security, compliance, identity, management and privacy, told VentureBeat during a recent interview.

Microsoft goes all-in on its security vision at Ignite 2024

With every organization experiencing more multidomain intrusion attempts and suffering from undiscovered breaches, Microsoft is doubling down on security, pivoting its strategy to graph-based defense in MSEM. Sakkal told VentureBeat, "The sophistication, scale, and speed of modern attacks require a generational shift in security. Graph databases and generative AI offer defenders the tools to unify fragmented insights into actionable intelligence."

Cristian Rodriguez, CrowdStrike's Americas field CTO, echoed the importance of graph technology in a recent interview with VentureBeat. "Graph databases allow us to map adversary behavior across domains, identifying the subtle connections and patterns attackers exploit. By visualizing these relationships, defenders gain the contextual insight needed to anticipate and disrupt complex, cross-domain attack strategies," Rodriguez said.

Key announcements from Ignite 2024 include:

Microsoft Security Exposure Management Platform (MSEM). At the core of Microsoft's strategy, MSEM leverages graph technology to dynamically map relationships across digital estates, including devices, identities and data. MSEM's support for graph databases enables security teams to identify high-risk attack paths and prioritize proactive remediation efforts.

Zero Day Quest. Microsoft is offering $4M in rewards to uncover vulnerabilities in AI and cloud platforms. This initiative aims to bring together researchers, engineers and AI red teams to address critical risks preemptively.

Windows Resiliency Initiative. Focusing on zero-trust principles, this initiative looks to enhance system reliability and recovery by securing credentials, implementing Zero Trust DNS protocols and fortifying Windows 11 against emerging threats.

Security Copilot Enhancements.
Microsoft claims that Security Copilot's generative AI capabilities enhance SOC operations by automating threat detection, streamlining incident triage and reducing mean time to resolution by 30%. Integrated with Entra, Intune, Purview and Defender, these updates provide actionable insights, helping security teams address threats with greater efficiency and accuracy.

Updates in Microsoft Purview. Purview's advanced Data Security Posture Management (DSPM) tools tackle generative AI risks by discovering, protecting and governing sensitive data in real time. Features include detecting prompt injections, mitigating data misuse and preventing oversharing in AI apps. The tool also strengthens compliance with AI governance standards, aligning enterprise security with evolving regulations.

Why now? The role of graph databases in cybersecurity

John Lambert, corporate vice president for Microsoft Security Research, underscored the critical importance of graph-based thinking in cybersecurity, explaining to VentureBeat, "Defenders think in lists, cyberattackers think in graphs. As long as this is true, attackers win." He added that Microsoft's approach to exposure management involves creating a comprehensive graph of the digital estate, overlaying vulnerabilities, threat intelligence and attack paths. "It's about giving defenders a complete map of their environment, allowing them to prioritize the most critical risks while understanding the potential blast radius of any compromise," Lambert added.

Graph databases are gathering momentum as an architectural strategy for cybersecurity platforms. They excel at visualizing and analyzing interconnected data, which is critical for identifying attack paths in real time. Key benefits of graph databases include:

- Relational Context: Map relationships between assets and vulnerabilities.
- Fast Querying: Traverse billions of nodes in milliseconds.
- Threat Detection: Identify high-risk attack paths, reducing false positives.
- Knowledge Discovery: Use graph AI for insights into interconnected risks.
- Behavioral Analysis: Graphs detect subtle attack patterns across domains.
- Scalability: Integrate new data points seamlessly into existing threat models.
- Multidimensional Analysis.

The Gartner heat map underscores how graph databases excel in cybersecurity use cases like anomaly detection, monitoring and decision-making, positioning them as essential tools in modern defense strategies.

"Emerging Tech: Optimize Threat Detection With Knowledge Graph Databases," May 2024. Source: Gartner

What makes Microsoft's MSEM platform unique

The Microsoft Security Exposure Management Platform (MSEM) differentiates itself from other graph database-driven cybersecurity platforms through its real-time visibility and risk management, which helps security operations center teams stay on top of risks, threats, incidents and breaches. Sakkal told VentureBeat, "MSEM bridges the gap between detection and action, empowering defenders to anticipate and mitigate threats effectively." The platform exemplifies Microsoft's vision of a unified, graph-driven security approach, offering organizations the tools to stay ahead of modern threats with precision and speed.

Built on graph-powered insights, MSEM integrates three core capabilities needed to battle back against multidomain attacks and fragmented security data:

Attack Surface Management. MSEM is designed to provide a dynamic view of an organization's digital estate, enabling the identification of assets, interdependencies and vulnerabilities.
Features like automated discovery of IoT/OT devices and unprotected endpoints ensure visibility while prioritizing high-risk areas. The device inventory dashboard categorizes assets by criticality, helping security teams focus on the most urgent threats with precision.

Attack Path Analysis. MSEM uses graph databases to map attack paths from an adversary's perspective, pinpointing critical routes they might exploit. Enhanced with AI-driven graph modeling, it identifies high-risk pathways across hybrid environments, including on-premises, cloud and IoT systems.

Unified Exposure Insights. Microsoft also designed MSEM to translate technical data into actionable intelligence for both security professionals and business leaders. It supports ransomware protection, SaaS security and IoT risk management, ensuring targeted, insightful data is provided to security analysts.

Microsoft also announced the following MSEM enhancements at Ignite 2024:

- Third-Party Integrations: MSEM connects with Rapid7, Tenable and Qualys, broadening its visibility and making it a powerful tool for hybrid environments.
- AI-Powered Graph Modeling: Detects hidden vulnerabilities and performs advanced threat path analysis for proactive risk reduction.
- Historical Trends and Metrics: Tracks shifts in exposure over time, helping teams adapt to evolving threats confidently.

Graph databases' growing role in cybersecurity

Graph databases have proven invaluable in tracking and defeating multidomain attacks. They excel at visualizing and analyzing interconnected data in real time, enabling faster and more accurate threat detection, attack path analysis and risk prioritization. It's no surprise that graph database technology dominates the roadmaps of leading cybersecurity platform providers.

Cisco's SecureX Threat Response is one example. The Cisco platform extends the utility of graph databases into network-centric environments, connecting data across endpoints, IoT devices and hybrid networks. Key strengths include incident response that is integrated across the Cisco suite of apps and tools, and network-centric visibility. "What we have to do is make sure that we use AI natively for defenses because you cannot go out and fight those AI weaponization attacks from adversaries at a human scale. You have to do it at machine scale," Jeetu Patel, Cisco's executive vice president and CPO, told VentureBeat in an interview earlier this year.

CrowdStrike's Threat Graph was introduced at its annual customer event, Fal.Con, in 2022 and is often cited as an example of the power of graph databases in endpoint security. Processing over 2.5 trillion daily events, Threat Graph excels in detecting weak signals and mapping adversary behavior. "Our graph capabilities ensure precision by focusing on endpoint telemetry, providing defenders with actionable insights faster than ever," Rodriguez emphasized to VentureBeat. CrowdStrike's key differentiators include endpoint precision in tracking lateral movements and identifying anomalous behaviors. Threat Graph also supports AI-driven behavioral analysis to uncover adversary techniques across workloads.

Palo Alto Networks (Cortex XDR), SentinelOne (Singularity) and Trend Micro are among the notable players leveraging graph databases to enhance their threat detection and real-time anomaly analysis capabilities.
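To make the attack-path analysis these platforms perform concrete, here is an illustrative graph traversal in Python; the nodes, edges and library choice (networkx) are stand-ins for the far richer telemetry that platforms like MSEM or Threat Graph operate on:

```python
# Illustrative only: finding paths from an internet-exposed asset to a
# critical system in a toy asset/identity graph.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("internet", "web-server"),        # exposed endpoint
    ("web-server", "service-account"), # credential cached on the host
    ("service-account", "db-server"),  # account has database access
    ("web-server", "iot-camera"),      # lateral path that dead-ends
])

for path in nx.all_simple_paths(g, source="internet", target="db-server"):
    print(" -> ".join(path))
```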
Gartner predicted in its recent research note "Emerging Tech: Optimize Threat Detection With Knowledge Graph Databases" that their widespread adoption will continue due to their ability to support AI-driven insights and reduce noise in security operations.

Graph databases will transform enterprise defense

Microsoft's Lambert encapsulated the industry's trajectory by stating, "May the best attack graph win." Graph databases are transforming how defenders think about interconnected risks, underscoring their pivotal role in modern cybersecurity strategies. Multidomain attacks target the weaknesses between and within complex digital estates. Gaps in identity management are an area nation-state attackers concentrate on, mining data to access a company's core enterprise systems. Microsoft joins Cisco, CrowdStrike, Palo Alto Networks, SentinelOne and Trend Micro in adopting and continuing to improve graph database technology to identify and act on threats before a breach happens.
-
VENTUREBEAT.COM
Quicksave announces QSApp to make WebGL more accessible through no-code editors

Quicksave Interactive wants to make the web more interactive with WebGL technology by launching QSApp, a tool designed to make the technology more accessible.
-
VENTUREBEAT.COM
Xsolla establishes APAC HQ in Busan, launches dev center for local talent

Xsolla today pledged to open its new APAC HQ in Busan and to collaborate with the city on a talent development center for game creators.
-
VENTUREBEAT.COM
Orchestrator agents: Integration, human interaction, and enterprise knowledge at the core

For enterprises, bringing more AI agents into workflows means having a strong orchestrator agent to manage them all.
-
VENTUREBEAT.COM
Nvidia accelerates Google quantum AI design with quantum physics simulation

Nvidia is working with Google Quantum AI to accelerate the design of its next-generation quantum computing devices using Nvidia-powered simulations.
-
VENTUREBEAT.COM
SuperScale puts its latest fundraise toward gaming analytics SuperPlatform

SuperScale has raised $1.2 million in its latest funding round to launch its data analytics SuperPlatform for gaming businesses.
-
VENTUREBEAT.COM
Final Fantasy VII Rebirth and Astro Bot lead The Game Awards 2024 nominees

The Game Awards has revealed its 2024 list of nominees, with Astro Bot, Metaphor: ReFantazio and Final Fantasy VII Rebirth getting multiple nods.
-
VENTUREBEAT.COM
AnyChat brings together ChatGPT, Google Gemini, and more for ultimate AI flexibility

A new tool called AnyChat is giving developers unprecedented flexibility by uniting a wide range of leading large language models (LLMs) under a single interface. Developed by Ahsen Khaliq (also known as AK), a prominent figure in the AI community and machine learning growth lead at Gradio, the platform allows users to switch seamlessly between models like ChatGPT, Google's Gemini, Perplexity, Claude, Meta's LLaMA and Grok, all without being locked into a single provider. AnyChat promises to change how developers and enterprises interact with artificial intelligence by offering a one-stop solution for accessing multiple AI systems.

At its core, AnyChat is designed to make it easier for developers to experiment with and deploy different LLMs without the restrictions of traditional platforms. "We wanted to build something that gave users total control over which models they can use," said Khaliq. "Instead of being tied to a single provider, AnyChat gives you the freedom to integrate models from various sources, whether it's a proprietary model like Google's Gemini or an open-source option from Hugging Face."

Khaliq's brainchild is built on Gradio, a popular framework for creating customizable AI applications. The platform features a tab-based interface that allows users to easily switch between models, along with dropdown menus for selecting specific versions of each AI. AnyChat also supports token authentication, ensuring secure access to APIs for enterprise users. For models requiring paid API keys, such as Gemini's search capabilities, developers can input their own credentials, while others, like basic Gemini models, are available without an API key thanks to a free key provided by Khaliq.

How AnyChat fills a critical gap in AI development

The launch of AnyChat comes at a critical time for the AI industry. As companies increasingly integrate AI into their operations, many have found themselves constrained by the limitations of individual platforms. Most developers currently have to choose between committing to a single model, such as OpenAI's GPT-4o, or spending significant time and resources integrating multiple models separately. AnyChat addresses this pain point by offering a unified interface that can handle both proprietary and open-source models, giving developers the flexibility to choose the best tool for the job at any given moment.

This flexibility has already attracted interest from the developer community. In a recent update, a contributor added support for DeepSeek V2.5, a specialized model made available through the Hyperbolic API, demonstrating how easily new models can be integrated into the platform. "The real power of AnyChat lies in its ability to grow," said Khaliq. "The community can extend it with new models, making the potential of this platform far greater than any one model alone."

What makes AnyChat useful for teams and companies

For developers, AnyChat offers a streamlined solution to what has historically been a complicated and time-consuming process. Rather than building separate infrastructure for each model or being forced to use a single AI provider, users can deploy multiple models within the same app.
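As a rough, hypothetical sketch of that Gradio-style, tab-per-model pattern (the model names and response functions below are illustrative stand-ins, not AnyChat's actual code):

```python
import gradio as gr

# Hypothetical backends; a real app would route each prompt to the matching
# provider API using the user's token.
MODELS = ["gpt-4o", "gemini-1.5-pro", "llama-3.2-vision"]

def make_responder(model_name):
    def respond(prompt):
        # Placeholder response instead of a real provider call.
        return f"[{model_name}] would answer: {prompt}"
    return respond

with gr.Blocks() as demo:
    for name in MODELS:
        with gr.Tab(name):
            prompt = gr.Textbox(label="Prompt")
            output = gr.Textbox(label="Response")
            gr.Button("Send").click(make_responder(name), inputs=prompt, outputs=output)

if __name__ == "__main__":
    demo.launch()
```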
This is particularly useful for enterprises that may need different models for different tasks: an organization could use ChatGPT for customer support, Gemini for research and search capabilities, and Meta's LLaMA for vision-based tasks, all within the same interface.

The platform also supports real-time search and multimodal capabilities, making it a versatile tool for more complex use cases. For example, Perplexity models integrated into AnyChat offer real-time search functionality, a feature that many enterprises find valuable for keeping up with constantly changing information. On the other hand, models like LLaMA 3.2 provide vision support, expanding the platform's capabilities beyond text-based AI.

Khaliq noted that one of the key advantages of AnyChat is its open-source support. "We wanted to make sure that developers who prefer working with open-source models have the same access as those using proprietary systems," he said. AnyChat supports a broad range of models hosted on Hugging Face, a popular platform for open-source AI implementations. This gives developers more control over their deployments and allows them to avoid costly API fees associated with proprietary models.

How AnyChat handles both text and image processing

One of the most exciting aspects of AnyChat is its support for multimodal AI, or models that can process both text and images. This capability is becoming increasingly crucial as companies look for AI systems that can handle more complex tasks, from analyzing images for diagnostic purposes to generating text-based insights from visual data. Models like LLaMA 3.2, which includes vision support, are key to addressing these needs, and AnyChat makes it easy to switch between text-based and multimodal models as needed.

For many enterprises, this flexibility is a huge deal. Rather than investing in separate systems for text and image analysis, they can now deploy a single platform that handles both. This can lead to significant cost savings, as well as faster development times for AI-driven projects.

AnyChat's growing library of AI models

AnyChat's potential extends beyond its current capabilities. Khaliq believes that the platform's open architecture will encourage more developers to contribute models, making it an even more powerful tool over time. "The beauty of AnyChat is that it doesn't just stop at what's available now. It's designed to grow with the community, which means the platform will always be at the cutting edge of AI development," he told VentureBeat.

The community has already embraced this vision. In a discussion on Hugging Face, developers have noted how easy it is to add new models to the platform. With support for models like DeepSeek V2.5 already being integrated, AnyChat is poised to become a hub for AI experimentation and deployment.

What's next for AnyChat and AI development

As the AI landscape continues to evolve, tools like AnyChat will play a crucial role in shaping how developers and enterprises interact with AI technology. By offering a unified interface for multiple models and allowing for seamless integration of both proprietary and open-source systems, AnyChat is breaking down the barriers that have traditionally siloed different AI platforms. For developers, it offers the freedom to choose the best tool for the job without the hassle of managing multiple systems. For enterprises, it provides a cost-effective, scalable solution that can grow alongside their AI needs.
As more models are added and the platform continues to evolve, AnyChat could very well become the go-to tool for anyone looking to leverage the full power of large language models in their applications.
-
VENTUREBEAT.COM
Roblox updates its safety systems and parental controls under outside pressure

Under outside pressure, Roblox announced updates to its safety systems and parental controls today to protect children. In a blog post, Matt Kaufman, chief safety officer at Roblox, said the updates will better protect the platform's youngest users and provide easy-to-use tools to give parents and caregivers more control and clarity over what...
-
VENTUREBEAT.COM
Play Ventures raises $140M third fund to invest in games and consumer startups

Play Ventures, a global venture capital firm specializing in early-stage gaming, has raised $140 million for its third gaming fund. This fund represents Singapore-based Play Ventures' largest fund to date and brings its total assets under management to $450 million. Coming at this time, the new fund is important for the gaming industry, which has had a tough 2.5 years with 33,000 layoffs during that time.

The Play Ventures team.

The fund's close was driven by strong support from a core group of returning investors, including university endowments, strategic partners in the gaming sector and prominent global family offices, as well as new investors backing the firm's thesis. Fund III will build upon Play Ventures' successful strategy of investing in early-stage companies across the gaming ecosystem, with a focus on mobile free-to-play, mobile consumer, gaming infrastructure and platforms, AI-enhanced gaming tools, and next-generation distribution channels. Since the fund's initial close in June 2023, it has already made eight investments, including in experienced founders with prior exits, underscoring the strength of Play Ventures' approach and deal sourcing.

Play Ventures leadership team.

"Software may have eaten the world, but mobile has swallowed our time whole. People now live on their phones, spending hours a day engaging with social media, apps and, most notably, games," said Henric Suuronen, founding partner at Play Ventures, in a statement. "Mobile gaming is one of the most dynamic arenas of our time, presenting massive, untapped potential. With Fund III, we're investing in a new wave of billion-dollar games and interactive experiences, supercharged by the transformative power of AI."

Fund III will also have an expanded focus on "playable apps": consumer applications that apply the best of the free-to-play gaming playbook to create captivating, interactive user experiences across multiple consumer verticals. "Integrating gaming mechanics into everyday apps is just the first step. There's a ton of user engagement and value that can be unlocked by taking learnings from the entire f2p gaming playbook that has been perfected over decades, including meta design, live ops, economy design, and monetization," said Phylicia Koh, partner at Play Ventures. "Our playable apps investments in Arya, Ahead, Benjamin and Bible Chat are testaments of how this approach can drive significant growth and reshape user experiences."

Most recently, Fund III invested in AI startup Beyond, founded by Huuuge Games founder Anton Gauffin, which is developing its first consumer product, Decor Society.

I asked Harri Manninen, cofounder of Play Ventures, about the importance of raising this fund in the context of so many layoffs in gaming. "The gaming industry has always been extremely fast moving and dynamic," Manninen said. "While the recent period of slower growth and economic pressure has certainly impacted gaming companies, the gaming ecosystem continues to evolve and present new opportunities in areas like user-generated content (UGC) platforms, AI-powered tools and development, emerging global markets and also playable consumer apps. These can be seen as an entirely new class of growth opportunities."

Manninen added, "We believe that the best founders see these periods of uncertainty as an opportunity to build the next generation of great game companies.
Many of the biggest gaming companies of today were founded during times of market downturn and gloom. With Fund III, we are committed to supporting these brave founders who are creating new businesses, regardless of market sentiment. It's an exciting time to invest in new gaming startups and technologies and help drive the industry forward."

And he said, "My hope is that the new gaming companies of tomorrow will grow into big successes that will be able to hire many of the top talent that have unfortunately lost their jobs in the gaming industry recently. With new growth companies there's always demand to hire new people and top talent."

Play Ventures founders Henric Suuronen (left) and Harri Manninen.

Play Ventures anticipates deploying Fund III across 20 to 25 companies globally, focusing on early-stage investments from pre-seed to Series A, while reserving capital to support the highest-performing portfolio companies as they grow. With Fund III, Play Ventures is excited to partner with founders who are redefining the gaming landscape and building the next generation of interactive experiences.

Play Ventures was founded in 2018 and has offices in Singapore and Helsinki. For the first fund, Play Ventures raised $30 million in 2018, and for the second it raised $135 million in 2021. The team includes Suuronen, Manninen, and general partners Kenrick Drijkoningen, Phylicia Koh and Anton Backman.
-
VENTUREBEAT.COM
3 leadership lessons we can learn from ethical hackers

Here's how business leaders can use a hacker's problem-solving approach to improve their own leadership skills.
-
VENTUREBEAT.COM
Our brains are vector databases: here's why that's helpful when using AI

In 2017, a breakthrough at Google transformed how machines understand language: the self-attention model. This innovation allowed AI to grasp context and meaning in human communication by treating words as mathematical vectors, precise numerical representations that capture relationships between ideas. Today, this vector-based approach has evolved into sophisticated vector databases, systems that mirror how our own brains process and retrieve information. This convergence of human cognition and AI technology isn't just changing how machines work; it's redefining how we need to communicate with them.

How our brains already think in vectors

Think of vectors as GPS coordinates for ideas. Just as GPS uses numbers to locate places, vector databases use mathematical coordinates to map concepts, meanings and relationships. When you search a vector database, you're not just looking for exact matches; you're finding patterns and relationships, just as your brain does when recalling a memory. Remember searching for your lost car keys? Your brain didn't methodically scan every room; it quickly accessed relevant memories based on context and similarity. This is exactly how vector databases work.

The three core skills, evolved

To thrive in this AI-augmented future, we need to evolve what I call the three core skills: reading, writing and querying. While these may sound familiar, their application in AI communication requires a fundamental shift in how we use them. Reading becomes about understanding both human and machine context. Writing transforms into precise, structured communication that machines can process. And querying, perhaps the most crucial new skill, involves learning to navigate vast networks of vector-based information in ways that combine human intuition with machine efficiency.

Mastering vector communication

Consider an accountant facing a complex financial discrepancy. Traditionally, they'd rely on their experience and manual searches through documentation. In our AI-augmented future, they'll use vector-based systems that work like an extension of their professional intuition. As they describe the issue, the AI doesn't just search for keywords; it understands the problem's context, pulling from a vast network of interconnected financial concepts, regulations and past cases. The key is learning to communicate with these systems in a way that leverages both human expertise and AI's pattern-recognition capabilities.

But mastering these evolved skills isn't about learning new software or memorizing prompt templates. It's about understanding how information connects and relates, thinking in vectors, just like our brains naturally do. When you describe a concept to AI, you're not just sharing words; you're helping it navigate a vast map of meaning. The better you understand how these connections work, the more effectively you can guide AI systems to the insights you need.

Taking action: Developing your core skills for AI

Ready to prepare yourself for the AI-augmented future? Here are concrete steps you can take to develop each of the three core skills.

Strengthen your reading

Reading in the AI age requires more than just comprehension; it demands the ability to quickly process and synthesize complex information. To improve:
- Study two new words daily from technical documentation or AI research papers. Write them down and practice using them in different contexts. This builds the vocabulary needed to communicate effectively with AI systems.
- Read at least two to three pages of AI-related content daily. Focus on technical blogs, research summaries or industry publications. The goal isn't just consumption but developing the ability to extract patterns and relationships from technical content.
- Practice reading documentation from major AI platforms. Understanding how different AI systems are described and explained will help you better grasp their capabilities and limitations.

Evolve your writing

Writing for AI requires precision and structure. Your goal is to communicate in a way that machines can accurately interpret.

- Study grammar and syntax intentionally. AI language models are built on patterns, so understanding how to structure your writing will help you craft more effective prompts.
- Practice writing prompts daily. Create three new ones each day, then analyze and refine them. Pay attention to how slight changes in structure and word choice affect AI responses.
- Learn to write with query elements in mind. Incorporate database-like thinking into your writing by being specific about what information you're requesting and how you want it organized.

Master querying

Querying is perhaps the most crucial new skill for AI interaction. It's about learning to ask questions in ways that leverage AI's capabilities:

- Practice writing search queries for traditional search engines. Start with simple searches, then gradually make them more complex and specific. This builds the foundation for AI prompting.
- Study basic SQL concepts and database query structures. Understanding how databases organize and retrieve information will help you think more systematically about information retrieval.
- Experiment with different query formats in AI tools. Test how various phrasings and structures affect your results. Document what works best for different types of requests.

The future of human-AI collaboration

The parallels between human memory and vector databases go deeper than simple retrieval. Both excel at compression, reducing complex information into manageable patterns. Both organize information hierarchically, from specific instances to general concepts. And both excel at finding similarities and patterns that might not be obvious at first glance.

This isn't just about professional efficiency; it's about preparing for a fundamental shift in how we interact with information and technology. Just as literacy transformed human society, these evolved communication skills will be essential for full participation in the AI-augmented economy. But unlike previous technological revolutions that sometimes replaced human capabilities, this one is about enhancement. Vector databases and AI systems, no matter how advanced, lack the uniquely human qualities of creativity, intuition and emotional intelligence.

The future belongs to those who understand how to think and communicate in vectors: not to replace human thinking, but to enhance it. Just as vector databases combine precise mathematical representation with intuitive pattern matching, successful professionals will blend human creativity with AI's analytical power. This isn't about competing with AI or simply learning new tools; it's about evolving our fundamental communication skills to work in harmony with these new cognitive technologies.

As we enter this new era of human-AI collaboration, our goal isn't to out-compute AI but to complement it.
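To make the "thinking in vectors" idea concrete, here is a toy similarity search in Python; the embeddings are random stand-ins for what a real embedding model would produce:

```python
import numpy as np

# Illustrative only: rank stored "ideas" by cosine similarity to a fuzzy query.
rng = np.random.default_rng(0)
ideas = {name: rng.normal(size=8) for name in
         ["car keys", "house keys", "quarterly report", "vacation photos"]}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = ideas["car keys"] + 0.1 * rng.normal(size=8)  # a noisy memory cue
ranked = sorted(ideas, key=lambda name: cosine(query, ideas[name]), reverse=True)
print("closest concepts:", ranked[:2])
```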
The transformation begins not with mastering new software, but with understanding how to translate human insight into the language of vectors and patterns that AI systems understand. By embracing this evolution in how we communicate and process information, we can create a future where technology enhances rather than replaces human capabilities, leading to unprecedented levels of creativity, problem-solving and innovation.

Khufere Qhamata is a research analyst, author of "Humanless Work: How AI Will Transform, Destroy and Change Life Forever" and the founder of Qatafa AI.
-
VENTUREBEAT.COM
From traditional workspaces to sanctuaries: how Mo Hamzian is shaping the culture of remote work

CONTRIBUTOR CONTENT: Nearly 28% of the global workforce works remotely, and 38% of the global workforce are freelancers who don't commute to traditional office spaces: remote work is here to stay. While it offers a number of advantages both for employers and employees, it isn't without its drawbacks, including a plethora of distractions and an incr...
-
VENTUREBEAT.COM
Google Gemini unexpectedly surges to No. 1, over OpenAI, but benchmarks don't tell the whole story

Google's Gemini-Exp-1114 AI model tops key benchmarks, but experts warn traditional testing methods may no longer accurately measure true AI capabilities or safety, raising concerns about the industry's current evaluation standards.
-
VENTUREBEAT.COM
Trump revoking Biden AI EO will make industry more chaotic, experts say

A potential repeal of President Biden's AI rules could mean enterprises will have trouble navigating state-specific laws.
-
VENTUREBEAT.COM
What Okta's failures say about the future of identity security in 2025

2025 needs to be the year identity providers go all in on improving every aspect of software quality and security, including red teaming.
-
VENTUREBEAT.COM
AI search wars heat up: Genspark adds Claude-powered financial reports on demand

Distill Web gives users the ability to look up any of 300,000-and-counting public companies and generate polished financial reports.
-
VENTUREBEAT.COM
Live commerce is the new sports bar: Loupe is the preferred late-night hangout for sports fans and collectors

CONTRIBUTOR CONTENT: Live commerce and the sports collectibles industries are both booming. There's one place that sports fans are all going after work, and no, it's not the nearest sports bar. At night, fans are flocking to Loupe, where they can extend the game-day excitement and connect with fellow sports fans for another live experience, but one...
-
VENTUREBEAT.COM
You can now run the most powerful open source AI models locally on Mac M4 computers, thanks to Exo Labs

To further support adoption of local AI solutions, Exo Labs is preparing to launch a free benchmarking website next week.
-
VENTUREBEAT.COM
EA CEO Andrew Wilson in running to be Disney CEO succeeding Bob Iger | WSJ

Disney is reportedly considering Electronic Arts CEO Andrew Wilson as a successor to Disney CEO Bob Iger, the Wall Street Journal said.
-
VENTUREBEAT.COM
How Microsoft's next-gen BitNet architecture is turbocharging LLM efficiency

One-bit large language models (LLMs) have emerged as a promising approach to making generative AI more accessible and affordable. By representing model weights with a very limited number of bits, 1-bit LLMs dramatically reduce the memory and computational resources required to run them. Microsoft Research has been pushing the boundaries of 1-bit LLMs with its BitNet architecture. In a new paper, the researchers introduce BitNet a4.8, a new technique that further improves the efficiency of 1-bit LLMs without sacrificing their performance.

The rise of 1-bit LLMs

Traditional LLMs use 16-bit floating-point numbers (FP16) to represent their parameters. This requires a lot of memory and compute resources, which limits the accessibility and deployment options for LLMs. One-bit LLMs address this challenge by drastically reducing the precision of model weights while matching the performance of full-precision models. Previous BitNet models used 1.58-bit values (-1, 0, 1) to represent model weights and 8-bit values for activations. This approach significantly reduced memory and I/O costs, but the computational cost of matrix multiplications remained a bottleneck, and optimizing neural networks with extremely low-bit parameters is challenging.

Two techniques help to address this problem. Sparsification reduces the number of computations by pruning activations with smaller magnitudes. This is particularly useful in LLMs because activation values tend to have a long-tailed distribution, with a few very large values and many small ones. Quantization, on the other hand, uses a smaller number of bits to represent activations, reducing the computational and memory cost of processing them. However, simply lowering the precision of activations can lead to significant quantization errors and performance degradation. Furthermore, combining sparsification and quantization is challenging, and presents special problems when training 1-bit LLMs.

"Both quantization and sparsification introduce non-differentiable operations, making gradient computation during training particularly challenging," Furu Wei, partner research manager at Microsoft Research, told VentureBeat. Gradient computation is essential for calculating errors and updating parameters when training neural networks. The researchers also had to ensure that their techniques could be implemented efficiently on existing hardware while maintaining the benefits of both sparsification and quantization.

BitNet a4.8

BitNet a4.8 addresses the challenges of optimizing 1-bit LLMs through what the researchers describe as "hybrid quantization and sparsification." They achieved this by designing an architecture that selectively applies quantization or sparsification to different components of the model based on the specific distribution pattern of activations. The architecture uses 4-bit activations for inputs to attention and feed-forward network (FFN) layers. It uses sparsification with 8 bits for intermediate states, keeping only the top 55% of the parameters. The architecture is also optimized to take advantage of existing hardware.

"With BitNet b1.58, the inference bottleneck of 1-bit LLMs switches from memory/IO to computation, which is constrained by the activation bits (i.e., 8-bit in BitNet b1.58)," Wei said.
"In BitNet a4.8, we push the activation bits to 4-bit so that we can leverage 4-bit kernels (e.g., INT4/FP4) to bring a 2x speed-up for LLM inference on GPU devices. The combination of 1-bit model weights from BitNet b1.58 and 4-bit activations from BitNet a4.8 effectively addresses both memory/IO and computational constraints in LLM inference."

BitNet a4.8 also uses 3-bit values to represent the key (K) and value (V) states in the attention mechanism. The KV cache is a crucial component of transformer models that stores the representations of previous tokens in the sequence. By lowering the precision of KV cache values, BitNet a4.8 further reduces memory requirements, especially when dealing with long sequences.

The promise of BitNet a4.8

Experimental results show that BitNet a4.8 delivers performance comparable to its predecessor BitNet b1.58 while using less compute and memory. Compared to full-precision Llama models, BitNet a4.8 reduces memory usage by a factor of 10 and achieves a 4x speedup. Compared to BitNet b1.58, it achieves a 2x speedup through 4-bit activation kernels. But the design can deliver much more.

"The estimated computation improvement is based on the existing hardware (GPU)," Wei said. "With hardware specifically optimized for 1-bit LLMs, the computation improvements can be significantly enhanced. BitNet introduces a new computation paradigm that minimizes the need for matrix multiplication, a primary focus in current hardware design optimization."

The efficiency of BitNet a4.8 makes it particularly suited for deploying LLMs at the edge and on resource-constrained devices. This can have important implications for privacy and security: by enabling on-device LLMs, users can benefit from the power of these models without needing to send their data to the cloud.

Wei and his team are continuing their work on 1-bit LLMs. "We continue to advance our research and vision for the era of 1-bit LLMs," Wei said. "While our current focus is on model architecture and software support (i.e., bitnet.cpp), we aim to explore the co-design and co-evolution of model architecture and hardware to fully unlock the potential of 1-bit LLMs."
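To make the two ingredients concrete, here is a minimal, self-contained sketch in Python of ternary (1.58-bit) weight quantization, low-bit activation quantization and magnitude-based activation sparsification. It is illustrative only, not BitNet's implementation: the scaling rules, where the 55% keep-ratio is applied, and the single toy linear layer are simplifying assumptions.

```python
import numpy as np

def quantize_weights_ternary(w: np.ndarray):
    """Ternary (1.58-bit) quantization of weights to {-1, 0, +1} with a per-tensor scale.
    Illustrative only; BitNet's exact scheme is defined in the papers."""
    scale = np.mean(np.abs(w)) + 1e-8
    w_q = np.clip(np.round(w / scale), -1, 1)
    return w_q.astype(np.int8), scale

def quantize_activations(x: np.ndarray, bits: int = 4):
    """Symmetric per-tensor quantization of activations to `bits` bits (e.g., 4 or 8)."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax + 1e-8
    x_q = np.clip(np.round(x / scale), -qmax, qmax)
    return x_q.astype(np.int8), scale

def sparsify_topk(x: np.ndarray, keep_ratio: float = 0.55):
    """Keep only the largest-magnitude fraction of activations, zeroing the rest."""
    k = max(1, int(keep_ratio * x.size))
    threshold = np.partition(np.abs(x).ravel(), -k)[-k]
    return np.where(np.abs(x) >= threshold, x, 0.0)

# Toy forward pass for one linear layer: sparsify activations, quantize them to 4 bits,
# multiply by ternary weights in integer arithmetic, then rescale to floating point.
rng = np.random.default_rng(0)
x = rng.normal(size=(1, 256))
w = rng.normal(size=(256, 512))

x_q, x_scale = quantize_activations(sparsify_topk(x), bits=4)
w_q, w_scale = quantize_weights_ternary(w)

y = (x_q.astype(np.int32) @ w_q.astype(np.int32)) * (x_scale * w_scale)
print(y.shape)  # (1, 512)
```

In BitNet a4.8 itself, 4-bit quantization is applied to the inputs of attention and FFN layers, while top-55% sparsification with 8-bit quantization is reserved for intermediate states; the toy above simply chains the operations on one layer to show the mechanics and the integer-arithmetic payoff.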
-
VENTUREBEAT.COM
Call of Duty's anti-cheat can remove cheaters before they play or before they win
The Ricochet anti-cheat system for Call of Duty games can block cheaters before they play, or as they play a match and before they can win.
-
VENTUREBEAT.COM
Microsoft brings AI to the farm and factory floor, partnering with industry giants

Microsoft has launched a new suite of specialized AI models designed to address specific challenges in manufacturing, agriculture and financial services. In collaboration with partners such as Siemens, Bayer, Rockwell Automation and others, the tech giant is aiming to bring advanced AI technologies directly into the heart of industries that have long relied on traditional methods and tools.

These purpose-built models, now available through Microsoft's Azure AI catalog, represent Microsoft's most focused effort yet to develop AI tools tailored to the unique needs of different sectors. The company's initiative reflects a broader strategy to move beyond general-purpose AI and deliver solutions that can provide immediate operational improvements in industries like agriculture and manufacturing, which are increasingly facing pressure to innovate.

"Microsoft is in a unique position to deliver the industry-specific solutions organizations need through the combination of the Microsoft Cloud, our industry expertise, and our global partner ecosystem," Satish Thomas, Corporate Vice President of Business & Industry Solutions at Microsoft, said in a LinkedIn post announcing the new AI models. Through these models, he added, "we're addressing top industry use cases, from managing regulatory compliance of financial communications to helping frontline workers with asset troubleshooting on the factory floor, ultimately enabling organizations to adopt AI at scale across every industry and region, and much more to come in future updates!"

Siemens and Microsoft remake industrial design with AI-powered software

At the center of the initiative is a partnership with Siemens to integrate AI into its NX X software, a widely used platform for industrial design. The Siemens NX X copilot uses natural language processing to let engineers issue commands and ask questions about complex design tasks. This feature could drastically reduce onboarding time for new users while helping seasoned engineers complete their work faster.

By embedding AI into the design process, Siemens and Microsoft are addressing a critical need in manufacturing: the ability to streamline complex tasks and reduce human error. The partnership also highlights a growing trend in enterprise technology, where companies are looking for AI solutions that improve day-to-day operations rather than experimental or futuristic applications.

Smaller, faster, smarter: How Microsoft's compact AI models are transforming factory operations

Microsoft's new initiative relies heavily on its Phi family of small language models (SLMs), which are designed to perform specific tasks while using less computing power than larger models. This makes them well suited to industries like manufacturing, where computing resources can be limited and where companies often need AI that can operate efficiently on factory floors.

Perhaps one of the most novel uses of AI in this initiative comes from Sight Machine, a leader in manufacturing data analytics. Sight Machine's Factory Namespace Manager addresses a long-standing but often overlooked problem: the inconsistent naming conventions used to label machines, processes and data across different factories. This lack of standardization has made it difficult for manufacturers to analyze data across multiple sites.
The Factory Namespace Manager helps by automatically translating these varied naming conventions into standardized formats, allowing manufacturers to better integrate their data and make it more actionable. While this may seem like a minor technical fix, the implications are far-reaching. Standardizing data across a global manufacturing network could unlock operational efficiencies that have been difficult to achieve. Early adopters like Swire Coca-Cola USA, which plans to use the technology to streamline its production data, likely see the potential for gains in both efficiency and decision-making. In an industry where even small improvements in process management can translate into substantial cost savings, addressing this kind of foundational issue is a crucial step toward more sophisticated data-driven operations. (A hypothetical sketch of this kind of name normalization appears at the end of this article.)

Smart farming gets real: Bayer's AI model tackles modern agriculture challenges

In agriculture, the Bayer E.L.Y. Crop Protection model is poised to become a key tool for farmers navigating the complexities of modern farming. Trained on thousands of real-world questions related to crop protection labels, the model provides farmers with insights into how best to apply pesticides and other crop treatments, factoring in everything from regulatory requirements to environmental conditions. This model comes at a crucial time for the agricultural industry, which is grappling with the effects of climate change, labor shortages and the need to improve sustainability. By offering AI-driven recommendations, Bayer's model could help farmers make more informed decisions that not only improve crop yields but also support more sustainable farming practices.

Beyond the factory: Microsoft's AI tools reshape cars, banking and food production

The initiative also extends into the automotive and financial sectors. Cerence, which develops in-car voice assistants, will use Microsoft's AI models to enhance in-vehicle systems. Its CaLLM Edge model allows drivers to control various car functions, such as climate control and navigation, even in settings with limited or no cloud connectivity, making the technology more reliable for drivers in remote areas.

In finance, Saifr, a regulatory technology startup within Fidelity Investments, is introducing models aimed at helping financial institutions manage regulatory compliance more effectively. These AI tools can analyze broker-dealer communications to flag potential compliance risks in real time, significantly speeding up the review process and reducing the risk of regulatory penalties.

Rockwell Automation, meanwhile, is releasing the FT Optix Food & Beverage model, which helps factory workers troubleshoot equipment in real time. By providing recommendations directly on the factory floor, this AI tool can reduce downtime and help maintain production efficiency in a sector where operational disruptions can be costly.

The release of these AI models marks a shift in how businesses can adopt and implement artificial intelligence. Rather than requiring companies to adapt to broad, one-size-fits-all AI systems, Microsoft's approach lets businesses use AI models that are custom-built to address their specific operational challenges. This addresses a major pain point for industries that have been hesitant to adopt AI because of concerns about cost, complexity or relevance to their particular needs. The focus on practicality also reflects Microsoft's understanding that many businesses are looking for AI tools that can deliver immediate, measurable results.
In sectors like manufacturing and agriculture, where margins are often tight and operational disruptions can be costly, the ability to deploy AI that improves efficiency or reduces downtime is far more appealing than speculative AI projects with uncertain payoffs. By offering tools tailored to industry-specific needs, Microsoft is betting that businesses will prioritize tangible improvements in their operations over more experimental technologies. This strategy could accelerate AI adoption in sectors that have traditionally been slower to embrace new technologies, like manufacturing and agriculture.

Inside Microsoft's plan to dominate industrial AI and edge computing

Microsoft's push into industry-specific AI models comes at a time of increasing competition in the cloud and AI space. Rivals like Amazon Web Services and Google Cloud are also investing heavily in AI, but Microsoft's focus on tailored industry solutions sets it apart. By partnering with established leaders like Siemens, Bayer and Rockwell Automation, Microsoft is positioning itself to be a key player in the digitization of industries that are under growing pressure to modernize.

The availability of these models through Azure AI Studio and Microsoft Copilot Studio also speaks to Microsoft's broader vision of making AI accessible not just to tech companies but to businesses in every sector. By integrating AI into the day-to-day operations of industries like manufacturing, agriculture and finance, Microsoft is helping to bring AI out of the lab and into the real world.

As global manufacturers, agricultural producers and financial institutions face increasing pressure from supply chain disruptions, sustainability goals and regulatory demands, Microsoft's industry-specific AI offerings could become essential tools in helping them adapt and thrive in a fast-changing world.
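Returning to the naming-standardization problem that Sight Machine's Factory Namespace Manager tackles: the sketch below is purely hypothetical and is not Sight Machine's implementation. The synonym table, tag formats and rules are invented to show what translating inconsistent plant-floor tags into one canonical scheme involves at the simplest level.

```python
import re

# Invented synonym table: variant tokens seen across plants -> canonical tokens.
SYNONYMS = {
    "tmp": "temperature", "temp": "temperature",
    "prs": "pressure", "press": "pressure",
    "flr": "filler", "pkg": "packaging",
    "ln": "line", "l": "line",
}

def normalize_tag(raw: str) -> str:
    """Translate a raw factory tag such as 'Flr_L3-TMP' into 'filler.line3.temperature'."""
    tokens = re.split(r"[\s_\-\.]+", raw.lower())
    out = []
    for tok in tokens:
        # Split a token like 'l3' into its word part and trailing digits
        # so the word can be looked up in the synonym table.
        m = re.fullmatch(r"([a-z]+)(\d*)", tok)
        if not m:
            continue
        word, num = m.groups()
        out.append(SYNONYMS.get(word, word) + num)
    return ".".join(out)

print(normalize_tag("Flr_L3-TMP"))        # filler.line3.temperature
print(normalize_tag("TEMP sensor Line2"))  # temperature.sensor.line2
```

Real factory namespaces are far messier than a hand-written dictionary can cover, which is exactly why a trained model rather than static rules is the product's selling point.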
-
VENTUREBEAT.COM
How Writer has built an enterprise platform Blueprint that does the AI for you
Writer CEO May Habib explains the four things companies need to know before setting off on their agentic AI journey.
-
VENTUREBEAT.COM
Democratizing finance: Spectral Labs and the autonomous finance movement
CONTRIBUTOR CONTENT: The combined AI and blockchain market is projected to grow at an annual rate of 26% from 2024 to 2031, and Spectral Labs is taking part in this revolution. Spectral Labs is on a mission to change the way users interact with decentralized finance (DeFi) using AI-powered onchain agents. These autonomous agents let users carry out complex financial tasks.
-
VENTUREBEAT.COM
Google DeepMind open-sources AlphaFold 3, ushering in a new era for drug discovery and molecular biology

Google DeepMind has unexpectedly released the source code and model weights of AlphaFold 3 for academic use, marking a significant advance that could accelerate scientific discovery and drug development. The surprise announcement comes just weeks after the system's creators, Demis Hassabis and John Jumper, were awarded the 2024 Nobel Prize in Chemistry for their work on protein structure prediction.

AlphaFold 3 represents a quantum leap beyond its predecessors. While AlphaFold 2 could predict protein structures, version 3 can model the complex interactions between proteins, DNA, RNA and small molecules, the fundamental processes of life. This matters because understanding these molecular interactions drives modern drug discovery and disease treatment. Traditional methods of studying these interactions often require months of laboratory work and millions in research funding, with no guarantee of success.

The system's ability to predict how proteins interact with DNA, RNA and small molecules transforms it from a specialized tool into a comprehensive solution for studying molecular biology. This broader capability opens new paths for understanding cellular processes, from gene regulation to drug metabolism, at a scale previously out of reach.

Silicon Valley meets science: The complex path to open-source AI

The timing of the release highlights an important tension in modern scientific research. When AlphaFold 3 debuted in May, DeepMind's decision to withhold the code while offering limited access through a web interface drew criticism from researchers. The controversy exposed a key challenge in AI research: how to balance open science with commercial interests, particularly as companies like DeepMind's sister organization Isomorphic Labs work to develop new drugs using these advances.

The open-source release offers a middle path. While the code is freely available under a Creative Commons license, access to the crucial model weights requires Google's explicit permission for academic use. This approach attempts to satisfy both scientific and commercial needs, though some researchers argue it should go further.

Breaking the code: How DeepMind's AI rewrites molecular science

The technical advances in AlphaFold 3 set it apart. The system's diffusion-based approach, which works directly with atomic coordinates, represents a fundamental shift in molecular modeling. Unlike previous versions that needed special handling for different molecule types, AlphaFold 3's framework aligns with the basic physics of molecular interactions. This makes the system both more efficient and more reliable when studying new types of molecular interactions.

Notably, AlphaFold 3's accuracy in predicting protein-ligand interactions exceeds traditional physics-based methods, even without structural input information. This marks an important shift in computational biology: AI methods now outperform our best physics-based models at modeling how molecules interact.

Beyond the lab: AlphaFold 3's promise and pitfalls in medicine

The impact on drug discovery and development will be substantial. While commercial restrictions currently limit pharmaceutical applications, the academic research enabled by this release will advance our understanding of disease mechanisms and drug interactions.
The system's improved accuracy in predicting antibody-antigen interactions could accelerate therapeutic antibody development, an increasingly important area of pharmaceutical research.

Of course, challenges remain. The system sometimes produces incorrect structures in disordered regions and can only predict static structures rather than molecular motion. These limitations show that while AI tools like AlphaFold 3 advance the field, they work best alongside traditional experimental methods.

The release of AlphaFold 3 represents an important step forward in AI-powered science, and its impact will extend beyond drug discovery and molecular biology. As researchers apply the tool to challenges ranging from designing enzymes to developing resilient crops, we will see new applications in computational biology.

The true test of AlphaFold 3 lies ahead, in its practical impact on scientific discovery and human health. As researchers worldwide begin using this powerful tool, we may see faster progress in understanding and treating disease than ever before.
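For readers curious what "diffusion over atomic coordinates" means in practice, the following is a deliberately toy sketch of the general idea, not AlphaFold 3's actual code or architecture: a sampler starts from random 3D positions and repeatedly applies a denoising step. The denoiser here is a placeholder function; in AlphaFold 3 that step is a large learned network conditioned on the input sequences and molecules.

```python
import numpy as np

def toy_denoiser(coords: np.ndarray) -> np.ndarray:
    """Stand-in for the learned denoising network: here it just nudges every atom
    toward the centroid. In AlphaFold 3 this step is a network conditioned on the
    input protein/DNA/RNA/ligand description."""
    centroid = coords.mean(axis=0, keepdims=True)
    return coords + 0.1 * (centroid - coords)

def sample_structure(num_atoms: int, steps: int = 50, seed: int = 0) -> np.ndarray:
    """Generic diffusion-style sampling directly over 3D atomic coordinates:
    start from Gaussian noise and alternate denoising with shrinking noise re-injection."""
    rng = np.random.default_rng(seed)
    coords = rng.normal(scale=10.0, size=(num_atoms, 3))  # pure noise to start
    for t in range(steps, 0, -1):
        coords = toy_denoiser(coords)
        if t > 1:
            coords += rng.normal(scale=0.1 * t / steps, size=coords.shape)
    return coords

structure = sample_structure(num_atoms=100)
print(structure.shape)  # (100, 3) refined atom positions
```

The point of the toy is only to show why operating directly on raw coordinates removes the need for molecule-specific geometric machinery: every atom, whether it belongs to a protein, a nucleic acid or a ligand, is just a point being iteratively refined.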
-
VENTUREBEAT.COM
Nintendo unveils Donkey Kong Country at Super Nintendo World in Japan

Nintendo's famous game designer Shigeru Miyamoto showed off the Donkey Kong Country area at Super Nintendo World. The unveiling is a major addition to the theme park within a theme park that opened at Universal Studios Japan in March 2021.

Nintendo has opened or is opening its theme parks in Hollywood, Osaka, Orlando (2025) and Singapore (TBD). The aim is to widen its funnel for consumers and get more people familiar with Nintendo's intellectual property than gamers alone.

The new area is part of the Super Mario World section of the theme park in Japan. As you move into Donkey Kong Country, Miyamoto pointed out, you go from blocks to rocks as you pass through a tunnel. With a jungle theme, Miyamoto showed off the Donkey Kong Tree House and the Golden Temple. On the temple, there's the face of a monkey with steam spewing out of its mouth.

He played large conga drums that are connected to lights; three people can play at a time, and the drums light up when you hit them. If you hit the right sequence with conga sticks for long enough, a Rambi character appears above the drums.

There are some secret locations and reward hunts. Around the park are letter blocks spelling out the Donkey Kong name, and you can scan them using a new Donkey Kong power-up band, which looks like a big watch. You can see the items you earn in the Universal Studios app.

You can get food at the Jungle Beat Shakes shop, including a banana-flavored DK Crush sundae and a DK hot dog.

Miyamoto showed the Donkey Kong Tree House and all of its bananas up close, and a mascot Donkey Kong character walked up to him. You can take pictures with the mascot throughout the area. There's also a Funky Kong fly-n-buy plane where you can shop for Donkey Kong merchandise.

The flagship attraction of the land is the Golden Temple ride, where you ride in a mine cart across the jungle. Inside, it's a golden color with some images from Nintendo lore on the walls. Cranky Kong and Squawks talk to those waiting in line, urging them to chase the Tiki Tak Tribe away and protect the golden banana. The ride is a kind of gentle roller coaster, as far as I can tell. Donkey Kong made his debut 40 years ago, and it was the first game Miyamoto created.

Nintendo hasn't said how much money it has made from its mini theme parks, but it called them out in its earnings call as part of its overall business strategy. The Donkey Kong Country area opens December 11, 2024, at Super Nintendo World in Japan. It will also open at Universal Epic Universe in Orlando, Florida, with both the Mario and Donkey Kong areas.
-
VENTUREBEAT.COM
World of Warships: Clash of Titans history show debuts on Pluto TV
TCD and Pluto TV today launched the premiere of World of Warships: Clash of Titans, an eight-part streaming documentary series.
-
VENTUREBEAT.COM
India's game market could grow from $3.8B to $9.2B by 2029 | Lumikai
India's game market could grow from $3.8 billion in 2024 to $9.2 billion by 2029, according to a report by Lumikai.
-
VENTUREBEAT.COM
AGI is coming faster than we think: we must get ready now
As we stand on the brink of breakthroughs in AGI and superintelligence, we need to assess whether we are truly ready for this transformation.
-
VENTUREBEAT.COM
Identity management in 2025: 4 ways security teams can address gaps and risks

While 99% of businesses plan to invest more in security, only 52% have fully implemented multi-factor authentication (MFA), and only 41% adhere to the principle of least privilege in access management.

Adversaries, including nation-states, state-funded attackers and cybercrime gangs, continue to sharpen their tradecraft using generative AI, machine learning (ML) and a growing AI arsenal to launch increasingly sophisticated identity attacks. Deepfakes, tightly orchestrated social engineering, AI-based identity attacks, synthetic fraud, living-off-the-land (LOTL) attacks and many other techniques and tactics signal that security teams are in danger of losing the war against adversarial AI.

"Identity remains one of the hairiest areas of security. In really basic terms: you need authorization (authZ: the right to access) and authentication (authN: the means to access). In computer security, we work really hard to marry authZ and authN," Merritt Baer, CISO at Reco.ai, told VentureBeat in a recent interview.

"What we have to do is make sure that we use AI natively for defenses, because you cannot go out and fight those AI weaponization attacks from adversaries at a human scale. You have to do it at machine scale," Jeetu Patel, Cisco's executive vice president and chief product officer, told VentureBeat in an interview earlier this year.

The bottom line is that identities continue to be under siege, and adversaries' continued efforts to improve AI-based tradecraft targeting weak identity security are a fast-growing threat. The Identity Defined Security Alliance's (IDSA) recent report, 2024 Trends in Securing Digital Identities, reflects how vulnerable identities are and how quickly adversaries are creating new attack strategies to exploit them. The siege on identities is real and growing.

"Cloud, identity and remote management tools, and legitimate credentials are where the adversary has been moving, because it's too hard to operate unconstrained on the endpoint. Why try to bypass and deal with a sophisticated platform like CrowdStrike on the endpoint when you could log in as an admin user?" Elia Zaitsev, CTO of CrowdStrike, told VentureBeat during a recent interview.

The overwhelming majority of businesses, 90%, have experienced at least one identity-related intrusion and breach attempt in the last twelve months. The IDSA also found that 84% of companies suffered a direct business impact this year, up from 68% in 2023.

"The future will not be televised; it will be contextual. It's rare that a bad actor is burning a 0-day (new) exploit to get access. Why use something special when you can use the front door? They are almost always working with valid credentials," Baer says.

"80% of the attacks that we see have an identity-based element to the tradecraft that the adversary uses; it's a key element," Michael Sentonas, president of CrowdStrike, told the audience at Fal.Con 2024 this year. Sentonas continued: "Sophisticated groups like Scattered Spider, like Cozy Bear, show us how adversaries exploit identity. They use password spray, they use phishing, and they use MTM frameworks.
They steal legitimate creds and register their own devices."

Why identity-based attacks are proliferating

Identity-based attacks are surging this year, with a 160% rise in attempts to collect credentials via cloud instance metadata APIs and a 583% spike in Kerberoasting attacks, according to CrowdStrike's 2023 Threat Hunting Report. The all-out assault on identities underscores the need for a more adaptive, identity-first security strategy that reduces risk and moves beyond legacy perimeter-based approaches:

Unchecked human and machine identity sprawl is rapidly expanding threat surfaces. IDSA found that 81% of IT and security leaders say their organization's number of identities has doubled over the last decade, further multiplying the number of potential attack surfaces. Over half of the executives interviewed, 57%, consider managing identity sprawl a primary focus going into 2025, and 93% are taking steps to get it under control. With machine identities continuing to increase, security teams need a strategy for managing them as well. The typical organization has 45 times more machine identities than human ones, and many organizations do not even know exactly how many they have. What makes managing machine identities challenging is factoring in the diverse needs of DevOps, cybersecurity, IT, IAM and CIO teams.

Growing incidence of adversarial AI-driven attacks launched with deepfake and impersonation-based phishing techniques. Deepfakes typify the cutting edge of adversarial AI attacks, with a 3,000% increase last year alone. Deepfake incidents are projected to rise by 50% to 60% in 2024, with 140,000 to 150,000 cases predicted globally this year. Adversarial AI is creating attack vectors no one sees coming and a more complex, nuanced threatscape that prioritizes identity-driven attacks. Ivanti's latest research finds that 30% of enterprises have no plans in place for how they will identify and defend against adversarial AI attacks, while 74% of enterprises surveyed already see evidence of AI-powered threats. Among the CISOs, CIOs and IT leaders participating in the study, 60% say they fear their enterprises are not prepared to defend against AI-powered threats and attacks.

More active targeting of identity platforms, starting with Microsoft Active Directory (AD). Every adversary knows that the quicker they can take control of AD, the faster they control an entire company. From giving themselves admin rights to deleting all other admin accounts to further insulate themselves during an attack, adversaries know that locking down AD locks down a business. Once AD is under control, adversaries move laterally across networks, install ransomware, exfiltrate valuable data and have been known to reprogram ACH accounts so that outbound payments go to shadow accounts the attackers control.

Over-reliance on single-factor authentication for remote and hybrid workers, and failure to enforce multi-factor authentication down to the app level company-wide. Recent research on authentication trends finds that 73% of users reuse passwords across multiple accounts, and password sharing is rampant across enterprises today. Add the fact that privileged account credentials for remote workers are not monitored, and the conditions are created for privileged account misuse, the cause of 74% of identity-based intrusions this year.

The Telesign Trust Index shows that when it comes to getting cyber hygiene right, there is valid cause for concern.
The study found that 99% of successful digital intrusions start when accounts have multi-factor authentication (MFA) turned off.

"The emergence of AI over the past year has brought the importance of trust in the digital world to the forefront," Christophe Van de Weyer, CEO of Telesign, told VentureBeat during a recent interview. "As AI continues to advance and become more accessible, it is crucial that we prioritize trust and security to protect the integrity of personal and institutional data. At Telesign, we are committed to leveraging AI and ML technologies to combat digital fraud, ensuring a more secure and trustworthy digital environment for all."

A well-executed MFA plan requires the user to present a combination of factors: something they know, something they have, or a biometric. One of the primary reasons so many Snowflake customers were breached is that MFA was not enabled by default. CISA provides a helpful fact sheet on MFA that explains why it is important and how it works.

Ransomware is being initiated more often using stolen credentials, fueling a ransomware-as-a-service boom. VentureBeat continues to see ransomware attacks growing at an exponential rate across healthcare and manufacturing businesses, as adversaries know that interrupting these services leads to larger ransom payouts. Deloitte's 2024 Cyber Threat Trends Report found that 44.7% of all breaches involve stolen credentials as the initial attack vector. Credential-based ransomware attacks are notorious for creating operational chaos and, consequently, significant financial losses. Ransomware-as-a-service (RaaS) attacks continue to increase as adversaries actively phish target companies to get their privileged access credentials.

Practical steps security leaders can take now, even with small teams

Security teams and the leaders supporting them need to start with the assumption that their companies have already been breached or are about to be. That is an essential first step toward defending identities and the attack surfaces adversaries target to reach them.

"I started a company because this is a pain point. It's really hard to manage access permissions at scale. And you can't afford to get it wrong with high-privileged users (execs) who are, by the way, the same folks who need access to their email immediately! on a business trip in a foreign country," says Kevin Jackson, CEO of Level 6 Cybersecurity.

The following are practical steps any security leader can take to protect identities across their business:

Audit and revoke any access privileges for former employees, contractors and admins. Security teams need to get into the practice of regularly auditing all access privileges, especially those of administrators, to see whether they are still valid and whether the person is still with the company. It is the best muscle memory for any security team to build, because it is proven to stop breaches. Go hunting for zombie accounts and credentials regularly, and consider how gen AI can be used to create scripts to automate this process. Insider attacks are a nightmare for security teams and the CISOs leading them. Add to that the fact that 92% of security leaders say internal attacks are as complex or more challenging to identify than external attacks, and the need to get in control of access privileges becomes clear. Nearly all IAM providers have automated anomaly detection tools that can help enforce a thorough identity and access privilege clean-up.
VentureBeat has learned that approximately 60% of companies are paying for this feature in their cybersecurity suites and are not using it. (A minimal audit sketch in Python appears at the end of this article.)

Make MFA the standard with no exceptions, and consider how user personas and roles with access to admin rights and sensitive data can also have biometrics and passwordless authentication layered in. Security teams will need to lean on their vendors to get this right, as shown by the situation at Snowflake and, more recently, at Okta, where logins with user names 52 characters or longer could create sessions without a password being provided. Gartner projects that by next year, 50% of the workforce will use passwordless authentication. Leading passwordless authentication options include Microsoft Azure Active Directory (Azure AD), OneLogin Workforce Identity, Thales SafeNet Trusted Access, Windows Hello for Business and Ivanti Zero Sign-On (ZSO). Ivanti's ZSO is integrated into its UEM platform, combines passwordless authentication with FIDO2 protocols, and supports biometrics, including Apple's Face ID, as a secondary authentication factor.

Get just-in-time (JIT) provisioning right as a core part of providing least-privileged access. JIT provisioning is a key element of zero-trust architectures, designed to reduce access risk by limiting resource permissions to specific durations and roles. By configuring JIT sessions based on role, workload and data classification, organizations can further control and protect sensitive assets. The recently launched Ivanti Neurons for App Control complements JIT security measures by strengthening endpoint security through application control; the solution blocks unauthorized applications by verifying file ownership and applying granular privilege management, helping to prevent malware and zero-day attacks.

Prevent adversaries and potential insider threats from assuming machine roles in AWS by configuring IAM for least-privileged access. VentureBeat has learned that cyberattacks on AWS instances are increasing, and attackers are taking on the identities of machine roles. Be sure to avoid mixing human and machine roles across DevOps, engineering, production and AWS contractors. If role assignments contain errors, a rogue employee or contractor can steal confidential data from an AWS instance without anyone knowing, and this has happened. Audit transactions and enforce least-privileged access to prevent this type of intrusion; AWS Identity and Access Management provides configurable options to ensure this level of protection.

Predicting the future of identity management in 2025

Every security team needs to assume an identity-driven breach has happened or is about to happen if they are going to be ready for the challenges of 2025. Enforcing least-privileged access, a core component of zero trust and a proven strategy for shutting down a breach, needs to be a priority. Enforcing JIT provisioning is also table stakes.

More security teams and their leaders need to take vendors to task and hold them accountable for their platforms and apps supporting MFA and advanced authentication techniques. There is no excuse for shipping a cybersecurity product in 2025 without MFA installed and enabled by default. Complex cloud database platforms like Snowflake point to why this has to be the new normal.
Okta's latest oversight, allowing 52-character user names to bypass the need for a password, shows that these companies need to work harder and more diligently to connect their engineering, quality assurance and red-teaming internally so they don't put customers and their businesses at risk.
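To make the audit guidance above concrete, here is a minimal sketch, assuming an AWS environment and the boto3 SDK, that flags IAM users with no MFA device and active access keys unused for more than 90 days. The 90-day threshold and the plain print-out are illustrative choices, not recommendations from the vendors or researchers cited above; most IAM suites expose equivalent managed reports.

```python
from datetime import datetime, timedelta, timezone
import boto3

STALE_AFTER = timedelta(days=90)  # illustrative threshold, tune to your policy
iam = boto3.client("iam")

def users_without_mfa():
    """Yield IAM user names that have no MFA device registered."""
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            name = user["UserName"]
            if not iam.list_mfa_devices(UserName=name)["MFADevices"]:
                yield name

def stale_access_keys():
    """Yield (user, key_id) pairs for active access keys unused for longer than STALE_AFTER."""
    now = datetime.now(timezone.utc)
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            name = user["UserName"]
            for key in iam.list_access_keys(UserName=name)["AccessKeyMetadata"]:
                if key["Status"] != "Active":
                    continue
                last_used = iam.get_access_key_last_used(
                    AccessKeyId=key["AccessKeyId"]
                )["AccessKeyLastUsed"].get("LastUsedDate")
                if last_used is None or now - last_used > STALE_AFTER:
                    yield name, key["AccessKeyId"]

if __name__ == "__main__":
    print("Users without MFA:", list(users_without_mfa()))
    print("Stale access keys:", list(stale_access_keys()))
```

Run with read-only IAM credentials, the script produces a starting list for the zombie-account and credential clean-up described in the practical steps; the revocation itself should go through your normal change process.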
-
VENTUREBEAT.COM
Multimodal RAG is growing, here's the best way to get started
Enterprises want to use RAG systems to search more than just text files; multimodal embedding models help them do that.
-
VENTUREBEAT.COM
Arcane Season 2 debuts with more action, dark story and beautiful animation | preview
Arcane Season 2 lives up to the beautiful imagery and deep story of Arcane Season One, which debuted in 2021.