• Nvidia launches 'Verified Priority Access' for scarce RTX 5090 and 5080 GPUs
    www.techspot.com
What just happened? Nvidia has announced a new initiative to give fans a chance to purchase the highly coveted and very scarce RTX 5090 and 5080 Founders Edition graphics cards. The program, dubbed "Verified Priority Access," offers a select group of enthusiasts the opportunity to acquire these elusive two-slot, small-form-factor-friendly GPUs by filling out an online form. This latest iteration of the Verified Priority Access program differs from its predecessor, which was used for the RTX 4090 release, as the previous version was invitation-only and based on pre-selection.

To be eligible for consideration, applicants must have an existing Nvidia account created before January 30. The application process includes specifying a preference between the RTX 5090 and 5080 models. Nvidia will then employ an algorithm to assess the authenticity of each applicant's gamer status, likely by analyzing their usage patterns of Nvidia applications and GeForce Experience.

The company has stated that invitations will begin to be distributed next week. However, Nvidia has not disclosed the number of graphics cards allocated to this program, making it difficult to gauge its effectiveness in getting these components into the hands of genuine gaming enthusiasts rather than scalpers. Successful applicants will be limited to purchasing only one graphics card through this channel. While this program represents a novel approach to graphics card distribution, its impact on the wider availability issues plaguing the high-end GPU market remains to be seen.

The launch of Nvidia's GeForce RTX 5090 and 5080 graphics cards has been marred by severe scarcity, leaving many enthusiasts frustrated and empty-handed. This shortage, which began on the release date, has persisted for weeks, with retailers struggling to maintain stock and prices soaring well above MSRP. Best Buy, a primary seller of Founders Edition cards, saw its entire stock of RTX 5090 FE cards sell out within minutes of launch. Similarly, Newegg's inventory of RTX 5080 AIB cards was depleted within seconds, leaving customers staring at "out of stock" notifications almost immediately after the cards went live.

The situation wasn't much better in other regions. In the UK, retailer Overclockers reported having only "single digits" of RTX 5090 cards in stock, with RTX 5080 units numbering in the "few hundreds." This scarcity led to extreme measures in some markets, including Japan, where a retailer resorted to selling lottery tickets for the chance to purchase an RTX 5080 or 5090 on launch day.

Several factors have contributed to this shortage. Nvidia itself warned of "significant demand" for these cards prior to launch, anticipating potential stock-outs. The company cited manufacturing challenges, including delays related to the Lunar New Year, which coincided with the launch period. Another contributing factor is the cards' appeal beyond the gaming market. The RTX 5090, in particular, has garnered interest from smaller AI companies for use in training large language models, further straining the already limited supply.

The scarcity has led to a thriving scalper market, with some offering "guaranteed" slots for RTX 5090 GPUs at prices more than triple the MSRP. This has made it even more challenging for genuine enthusiasts to acquire the cards at reasonable prices.
  • Meze debuts gorgeous new open-back planar headphones
    www.digitaltrends.com
Meze Audio's latest premium open-back wired headphones feature new custom-designed planar magnetic drivers, a technology that audiophiles have long revered for its clarity and ultra-low distortion. But it's the design of the new $2,000 Meze Poet that might woo would-be buyers.

A lot of planar headphones tend to be bulky affairs, with designs that don't offer as much for the eyes as they do the ears. The Meze Poet is decidedly different, with copper-toned and finely patterned steel grilles, magnesium earcup chassis, a titanium alloy frame, and a suede leather headband. The copper accents extend into the height rods, and continue into the detachable, hand-braided copper cable. The earcushions attach magnetically, making them easily replaceable when they eventually break down from use.

For the Poet, Meze has once again partnered with Ukrainian electro-acoustics specialist Rinaro Isodynamics. Inside each earcup lives an MZ6 Isodynamic Hybrid Array driver: a hybrid magnet array linked to an ultra-low-mass (0.06 g) planar diaphragm. It's a similar unit to the one that powers Meze's $4,000 Elite Tungsten flagship headphones.

Those drivers deliver some impressive claimed specs:

Frequency range: 4 Hz to 96 kHz
Sensitivity: 101 dB SPL/mW at 1 kHz
Maximum SPL: >130 dB SPL
Total harmonic distortion (THD): <0.05%

Plus, the Poet, with an impedance of just 55 ohms, should be remarkably easy to drive, even from a standard 3.5mm jack on a laptop (a quick calculation at the end of this piece shows why).

For a more poetic description of the Poet's sound signature, here's Meze's take: "a sonic experience with lush, airy vocals and precise bass impact, balancing low-end depth with delicate high-frequency clarity. The headphone's natural tonality is enhanced by subtle hints of sparkle, providing a lively yet effortless listening experience."

$2,000 is a big investment for anyone. But Meze Audio helps cushion that blow, literally, by including what looks like a very sturdy PC-ABS headphone hard case. There's also a separate synthetic leather pouch to store your cables. The included cable uses 3.5mm mono jacks at the earcup connectors and a standard, unbalanced 6.3mm (1/4-inch) source connection; however, Meze sells several optional accessory cables that can provide 2.5mm or 4.4mm balanced connections if needed.

For a much more affordable way into the open-back audiophile category, check out the Meze Audio 105 AER. They may not use fancy planar drivers, but they look great and sound fantastic.
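As a footnote on that easy-to-drive claim: the published sensitivity (101 dB SPL/mW) and impedance (55 ohms) are enough for a rough back-of-the-envelope check. The 110 dB SPL peak listening target below is our own illustrative assumption, not a Meze figure:

```python
import math

# Claimed specs from Meze's sheet (see above)
sensitivity_db_mw = 101.0   # dB SPL produced by 1 mW of input
impedance_ohms = 55.0

# Illustrative assumption: a loud 110 dB SPL peak target
target_db = 110.0

# Required power rises 10x for every +10 dB over the 1 mW reference
power_mw = 10 ** ((target_db - sensitivity_db_mw) / 10)

# P = V^2 / R, so the source must supply this RMS voltage
volts_rms = math.sqrt(power_mw / 1000 * impedance_ohms)

print(f"{power_mw:.1f} mW, {volts_rms:.2f} V RMS")  # ~7.9 mW, ~0.66 V RMS
```

A typical laptop or phone 3.5mm output can swing somewhere around 1 V RMS, comfortably above that figure, which is consistent with the claim that the Poet needs no dedicated amplifier.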
  • This cool stealth G-Shock has a unique feature that transforms the watch
    www.digitaltrends.com
All of the Casio G-Shock watches I own (and I own quite a few) are attached to their own special band. It's an integrated part of the design, and even if I wanted to, there's no obvious, easy way to change it out for another. That's what makes the limited edition G-Shock x ASRV DW-6900 so special, and unique among the majority of G-Shocks. The lugs are specially made to accept different bands, and after I saw it for the first time, I couldn't wait to try it out.

Casio is renowned for striking up fantastic partnerships for its special edition G-Shock watches, and this is one of the most successful in a while, as it brings more than just a new look with it. Casio has partnered with Californian cult athleisure brand ASRV, which produces premium sports and training wear with the tagline "Relentless Pursuit." I hadn't heard of them before seeing the G-Shock watch, and I love the way such collaborations often introduce me to new, exciting companies. Casio did the same for me with its partnership with South Korean fashion brand This is Never That.

When the ASRV x G-Shock arrived, I was surprised at the size of the presentation box, but it's oversized for a reason: it contains much more than just a watch. Based on the established, and very popular, DW-6900 G-Shock watch, the ASRV edition is in sleek matte black with a deep red button on the front to activate the backlight. The triple graph complications and main LCD display also have a red tint, making the watch purposeful and menacing.

What's really new are the lugs. Instead of leading into a normal DW-6900 band, they're open-ended, ready for a fabric strap to be threaded through, and there are three included in the box to choose from. The design is typically G-Shock. The lugs are perfectly sized for the included straps, they slip through without any effort, and the strap keeps the watch case tightly in position. The lugs feel tough, just as you'd expect from G-Shock, and are secured on the case using screw-in pins. They aren't going anywhere, and neither is the strap.

Even when established G-Shock fans thread an ASRV fabric strap onto the DW-6900, they're going to be amazed at how different it makes the watch look. There's a tactical, military look to it, like it means business, but the 6900 is a streetwear icon, and that makes it seriously cool. The material is thick, giving you confidence in its durability, but it's also soft and comfortable. It's very different to the few 6900-series watches released with Cordura NATO-style fabric bands in the past, which weren't as comfortable as the resin or rubber straps.

Each band is suitably different from the others, allowing you to really tailor the style to your own tastes. There are reflective elements on the bands too, with the white version being the brightest and the ASRV-branded one being the most subtle. The band emblazoned with the phrase "Only Those Who Risk Are Free" has an almost heavy metal aesthetic to it, but the words do disappear once the band is on the watch. The ASRV-branded version really suits the matte black case, and helps show off the red highlights even more.

(Image gallery: the G-Shock x ASRV DW-6900 alongside the G-Shock x Bamford DW-6900, comparing lug and band design.)

The bands are very comfortable, and make the DW-6900 more wearable than when it's attached to its usual fixed band. You can wear it more tightly without sweat buildup, and because it's fairly flexible, it doesn't get caught on your sleeve or cuff as much either.

But the three bands and the watch case aren't the only things in the box. In a wonderful throwback to 6900-model watches of the past, there are two bull bar bezel protectors included, which clip on to the case. It has been a while since I've seen these on a new G-Shock, and it's fantastic to see them as an option. It's entirely down to you if you want to continue the murdered-out style with the black bar and matching band, or add something new with the silver version, which goes well with the white reflective strap.

The G-Shock x ASRV DW-6900 doesn't have a Bluetooth connection, a feature becoming more commonplace on all G-Shock watches, nor does it have solar charging, the Multi-Band 6 atomic timekeeping feature, or anything other than the basics. But what it lacks in high-tech functionality, it makes up for in design. Most people are used to changing bands on an Apple Watch, or on their own mechanical watches, but it's a true rarity on a G-Shock.

This is still a true G-Shock, so it meets ISO 1413 standards for shock and impact resistance, plus it has 200 meters of water resistance and a battery that should last five years before it needs replacing. Turn the watch case over and there's a custom engraving on the back, plus when you press the backlight it highlights a very subtle "Relentless Pursuit" behind the numbers. It's everything you want from a G-Shock, with a feature that's as rare as the limited edition watch itself.

I've collected G-Shocks for years, and this is the first I've come across with lugs suitable for interchangeable bands, though I am aware there are third-party accessories that make it possible, and ways to buy bull bars too. But what I love about the ASRV model is that everything comes in the box. It's all there, ready to go, officially produced and endorsed by Casio. I've been somewhat underwhelmed by recent G-Shock collaborations, but the ASRV model has reminded me about what I love about the brand, its choice of partners, and the DW-6900 watch too.

This is a limited edition set, but at the time of writing you can still get one for yourself through ASRV's website, where it costs $248 or £200. If you're after a smarter G-Shock, take a look at the brilliant GA-B2100, the hybrid GBD-H2000 sports G-Shock, or you can see our favorite full smartwatches here.
  • NetEase Resumes Profit Growth but Revenue Slips
    www.wsj.com
    The Chinese videogame giant could see stronger growth momentum this year thanks to a solid game pipeline.
  • Tech, Media & Telecom Roundup: Market Talk
    www.wsj.com
    Read about Baidu, Singtel and more in the latest Market Talks covering Technology, Media and Telecom.
  • Is a Small Language Model Better Than an LLM for You?
    www.informationweek.com
Pam Baker, Contributing Writer. February 20, 2025. 11 Min Read.

While it's tempting to brush aside seemingly minimal AI model token costs, that's only one line item in the total cost of ownership (TCO) calculation. Still, managing model costs is the right place to start in getting control over the end sum. Choosing the right-sized model for a given task is imperative as the first step. But it's also important to remember that when it comes to AI models, bigger is not always better and smaller is not always smarter.

"Small language models (SLMs) and large language models (LLMs) are both AI-based models, but they serve different purposes," says Atalia Horenshtien, head of the data and AI practice in North America at Customertimes, a digital consultancy firm.

"SLMs are compact models, efficient, and tailored for specific tasks and domains. LLMs are massive models, require significant resources, shine in more complex scenarios, and fit general and versatile cases," Horenshtien adds.

While it makes sense in terms of performance to choose the right size model for the job, some would argue model size isn't much of a cost argument, even though large models cost more than smaller ones.

"Focusing on the price of using an LLM seems a bit misguided. If it is for internal use within a company, the cost usually is less than 1% of what you pay your employees. OpenAI, for example, charges $60 per month for an Enterprise GPT license for an employee if you sign up for a few hundred. Most white-collar employees are paid more than 100x that, and even more as fully loaded costs," says Kaj van de Loo, CPTO, CTO, and chief innovation officer at UserTesting.

Instead, this argument goes, the cost should be viewed in a different light.

"Do you think using an LLM will make the employee more than 1% more productive? I do, in every case I have come across. It [focusing on the price] is like trying to make a business case for using email or video conferencing. It is not worth the time," van de Loo adds.

Size Matters but Maybe Not as You Expect

On the surface, arguing about model sizes seems a bit like splitting hairs. After all, a small language model is still typically large. An SLM is generally defined as having fewer than 10 billion parameters. But that leaves a lot of leeway, so sometimes an SLM can have only a few thousand parameters, although most people will define an SLM as having between 1 billion and 10 billion parameters.

As a matter of reference, medium language models (MLMs) are generally defined as having between 10 billion and 100 billion parameters, while large language models have more than 100 billion parameters. Sometimes MLMs are lumped into the LLM category too, because what's a few extra billion parameters, really? Suffice it to say, they're all big, with some being bigger than others.

In case you're wondering, parameters are internal variables or learning control settings. They enable models to learn, but adding more of them adds more complexity too.

"Borrowing from hardware terminology, an LLM is like a system's general-purpose CPU, while SLMs often resemble ASICs -- application-specific chips optimized for specific tasks," says Eran Yahav, an associate professor in the computer science department at the Technion - Israel Institute of Technology and a distinguished expert in AI and software development. Yahav has a research background in static program analysis, program synthesis, and program verification from his roles at IBM Research and Technion. Currently, he is CTO and co-founder of Tabnine, an AI coding assistant for software developers.

To reduce issues and level up the advantages of both large and small models, many companies do not choose one size over the other. "In practice, systems leverage both: SLMs excel in cost, latency, and accuracy for specific tasks, while LLMs ensure versatility and adaptability," Yahav adds.
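To make that hybrid pattern concrete, here is a minimal sketch of task-based routing between a small and a large model. It is illustrative only: the model names are placeholders, and complete() is a stub standing in for whatever inference API an organization actually uses.

```python
# A minimal sketch of the hybrid SLM/LLM routing pattern described above.
# Model names are placeholders; swap complete() for a real inference call.

SLM_TASKS = {"classify", "extract", "summarize_short"}  # narrow, well-defined work

def route_model(task: str) -> str:
    """Send narrow tasks to a small model; everything else to a large one."""
    return "small-model-7b" if task in SLM_TASKS else "large-model"

def complete(model: str, prompt: str) -> str:
    """Stub standing in for a real chat/completions API call."""
    return f"[{model}] response to: {prompt!r}"

print(complete(route_model("classify"), "Is this ticket billing or tech support?"))
print(complete(route_model("plan"), "Draft a three-step data migration plan."))
```

The design choice is simply that cheap, low-latency small models absorb the high-volume narrow work, while the expensive general model is reserved for requests that genuinely need it.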
As a general rule, the main differences between model sizes pertain to performance, use cases, and resource consumption levels. But creative use of any sized model can easily smudge the line between them.

SLMs are faster and cheaper, making them appealing for specific, well-defined use cases. They can, however, be fine-tuned to outperform LLMs and used to build an agentic workflow, which brings together several different agents -- each of which is a model -- to accomplish a task. "Each model has a narrow task, but collectively they can outperform an LLM," explains Mark Lawyer, RWS's president of regulated industries and linguistic AI.

There's a caveat in defining SLMs versus LLMs in terms of task-specific performance, too.

"The distinction between large and small models isn't clearly defined yet," says Roman Eloshvili, founder and CEO of XData Group, a B2B software development company that exclusively serves banks. "You could say that many SLMs from major players are essentially simplified versions of LLMs, just less powerful due to having fewer parameters. And they are not always designed exclusively for narrow tasks, either."

The ongoing evolution of generative AI is also muddying the issue.

"Advancements in generative AI have been so rapid that models classified as SLMs today were considered LLMs just a year ago. Interestingly, many modern LLMs leverage a mixture-of-experts architecture, where smaller specialized language models handle specific tasks or domains. This means that behind the scenes, SLMs often play a critical role in powering the functionality of LLMs," says Rogers Jeffrey Leo John, co-founder and CTO of DataChat, a no-code generative AI platform for instant analytics.

In for a Penny, in for a Pound

SLMs are the clear favorite when the bottom line is the top consideration. They are also the only choice when a small form factor comes into play.

"Since SLMs are smaller, their inference cycle is faster. They also require less compute, and they're likely your only option if you need to run the model on an edge device," says Sean Falconer, AI entrepreneur in residence at Confluent.

However, the cost differential between model sizes comes from more than direct model costs like token fees.

"Unforeseen operational costs often creep in. When using complex prompts or big outputs, your bills may inflate. Background API calls can also very quickly add up if you're embedding data or leveraging libraries like ReAct to integrate models. It is for this reason scaling from prototype to production often leads to what we call bill shock," says Steve Fleurant, CEO at Clair Services.
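To see how "bill shock" arises, here is a rough cost sketch for the same monthly workload on a small versus a large model. The per-million-token prices are illustrative assumptions only, not any vendor's actual rate card:

```python
# Back-of-the-envelope monthly token cost. All prices are illustrative
# assumptions, not any vendor's actual pricing.
PRICES_PER_1M_TOKENS = {            # (input, output) in USD
    "small-model": (0.15, 0.60),
    "large-model": (2.50, 10.00),
}

def monthly_cost(model: str, requests: int, in_tokens: int, out_tokens: int) -> float:
    """Total monthly spend for a fixed request volume and token profile."""
    p_in, p_out = PRICES_PER_1M_TOKENS[model]
    return requests * (in_tokens * p_in + out_tokens * p_out) / 1_000_000

# 1M requests/month, 1,000 tokens in and 500 tokens out per request
for model in PRICES_PER_1M_TOKENS:
    print(model, f"${monthly_cost(model, 1_000_000, 1_000, 500):,.0f}/month")
# small-model: ~$450/month; large-model: ~$7,500/month under these assumptions
```

Because the output term scales linearly with response length, letting outputs balloon in production is often exactly what turns a cheap prototype into an expensive bill.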
There's a whole pile of other associated costs to consider in the total cost of ownership calculation, too.

"It is clear the long-term operational costs of LLMs will be more than just software capabilities. For now, we are seeing indications that there is an uptick in managed service provider support for data management, tagging, cleansing, and governance work, and we expect that trend to grow in the coming months and years. LLMs, and AI more broadly, put immense pressure on an organization to validate and organize data and make it available to support the models, but most large enterprises have underinvested in this work over the last decades," says Alex Bakker, distinguished analyst with global technology research and advisory firm ISG.

"Over time, as organizations improve their data architectures and modernize their data assets, the overhead of remediation work will likely decrease, but costs associated with the increased use of data -- higher network consumption, greater hardware requirements for supporting computations, etc. -- will increase. Overall, the advent of AI probably represents a step-change increase in the amount of money organizations spend on their data," Bakker adds.

Other standard business costs apply to models, too, and are adding strain to budgets. For example, backup models are a necessity and an additional cost.

"Risk management strategies must account for provider-specific characteristics. Organizations using OpenAI's premium models often maintain Anthropic or Google alternatives as backups, despite the price differential. This redundancy adds to overall costs but is essential for business continuity," says David Eller, group data product manager at Indicium.
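A common way to implement that kind of redundancy is a thin failover wrapper that tries the primary provider and falls back to an alternate on error. This is a minimal sketch; both provider functions are stubs standing in for real SDK calls:

```python
import logging

def call_primary(prompt: str) -> str:
    # Stub for the primary provider's API call (e.g., an OpenAI SDK request)
    raise TimeoutError("primary provider unavailable")

def call_backup(prompt: str) -> str:
    # Stub for the backup provider's API call (e.g., Anthropic or Google)
    return f"backup answer to: {prompt}"

def complete_with_failover(prompt: str) -> str:
    """Try providers in order; fall back on failure for business continuity."""
    for name, provider in (("primary", call_primary), ("backup", call_backup)):
        try:
            return provider(prompt)
        except Exception as exc:
            logging.warning("%s provider failed: %s", name, exc)
    raise RuntimeError("all providers failed")

print(complete_with_failover("Summarize this contract clause."))
```

In production the same pattern usually adds retries, timeouts, and prompt adjustments per provider, which is part of why the redundancy Eller describes carries real cost beyond the second API bill.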
There are other line items more specific to models that are bearing down on company budgets, too.

"Even though there are API access fees to consider, the synthesis of the cost of operational overhead, fine-tuning, and compute resources can easily supersede it. The ownership cost should be considered thoroughly before implementation of AI technologies in the organization," says Cache Merrill, founder of Zibtek, a software development company.

Merrill notes the following as specific costs to look and budget for:

Installation costs: Running fine-tuned or proprietary LLMs may require NVIDIA A100 or H100 graphics processing units, which can cost $25,000+. In contrast, enterprise-grade cloud computing services cost between $5,000 and $15,000 for consistent usage on their own.

Model fine-tuning: The construction of a custom LLM can cost tens of thousands of dollars or more, based on the various parameters of the dataset and constructional aspects.

Software maintenance: With regular model updates, the software will also require security checks and compliance, as well as increasing cost at each scale, which is usually neglected at the initial stages of a project.

Human oversight: Employing experts in a particular field to review and advise on LLM results is becoming more common, which adds to the employee wage payout.

Some of the aforementioned costs are reduced by the use of SLMs, but some are not, or not significantly so. Given that many organizations use both large and small models, and/or an assortment of model types, it's fair to say that AI isn't cheap, and we haven't yet touched on energy and environmental costs. The best advice is to first establish solid use cases and choose models that precisely fit the tasks and offer a solid path toward the ROI you're aiming for.

SLM, LLM, and Hybrid Examples

If you're unsure of, or have not yet experimented with, small language models, here are a few examples to give you a starting point.

Horenshtien says SLM examples on her list include Mistral 7B, LLaMa 3, Phi 3, and Gemma. Top LLMs on her list are GPT-4, Claude 3.5, Falcon, Gemini, and Command R.

Examples of real-world SLM versus LLM use cases that Horenshtien says her company sees include:

In manufacturing, SLMs can predict equipment failures, while LLMs provide real-time insights from IoT data.

In retail, SLMs personalize recommendations; LLMs power virtual shopping assistants.

In healthcare, SLMs classify records, while LLMs summarize medical research for clinicians.

Meanwhile, Eloshvili says that "some of the more solid and affordable versions [of SLMs and other LLM alternatives], in my opinion, would include Google Nano, Meta Llama 3 Small, Mistral 7B, and Microsoft Phi-3 Mini."

But everyone understandably has their own list of SLMs, based on varying criteria of importance to the beholder. For example, Joseph Regensburger, vice president of research at Immuta, says some cost-efficient SLM options include GPT-4o-mini, Gemini-flash, AWS Titan Text Lite, and Titan Text Express.

"We use both LLMs and SLMs. The choice between these two models is use-case-specific. We have found SLMs are sufficiently effective for a number of traditional natural language processing tasks, such as sentence analysis. SLMs tend to handle the ambiguities inherent in language better than rule-based NLP approaches, at the same time offering a more cost-effective solution than LLMs. We have found that we need LLMs for tasks involving logical inference, text generation, or complex translation tasks," Regensburger explains.

Rogers Jeffrey Leo John urges companies to consider open-source SLMs too: "If you are looking for small LLMs for your task, here are some good open-source/open-weight models to start with: Mistral 7B, Microsoft Phi, Falcon 7B, Google Gemma, and LLama3 8B."

And if you're looking for some novel approaches to SLMs, or a few other alternatives, Anatolii Kasianov, CTO of My Drama, a vertical video platform for unique and original short dramas and films, recommends DistilBERT, TinyBERT, ALBERT, GPT-Neo (smaller versions), and FastText.

At the end of the day, the right LLM or SLM depends entirely on the needs of your projects or tasks. It's also prudent to remember that "generative AI doesn't have to be the hammer for every nail," says Falconer.

About the Author: Pam Baker, Contributing Writer. A prolific writer and analyst, Pam Baker's published work appears in many leading publications. She's also the author of several books, the most recent of which are "Decision Intelligence for Dummies" and "ChatGPT For Dummies." Baker is also a popular speaker at technology conferences and a member of the National Press Club, Society of Professional Journalists, and the Internet Press Guild.
  • Risk Leaders: Follow These 4 Strategies When Transitioning To Continuous Risk Management
    www.informationweek.com
Cody Scott, Senior Analyst, Forrester. February 20, 2025. 5 Min Read.

Your organization's single biggest risk is an ineffective risk management program. Organizations tend to focus on compliance objectives while inadvertently undervaluing or deprioritizing risks that could have significant impacts, for many reasons. Compliance goals are prescriptive, with concrete actions to accomplish, making compliance a generally straightforward activity. Risk, on the other hand, is dynamic and complex.

During the early 2000s, some of the largest financial scandals (Enron, WorldCom, Tyco) rocked the business world to its core, unleashing a new regulatory wave of corporate governance and internal controls requirements. In its wake, the three lines of defense (3LOD) model was born. And when the Institute of Internal Auditors picked it up 10 years later, the industry branded and prescribed 3LOD as the cure for poor risk management. Yet, like prescription drugs, regulatory support doesn't guarantee effectiveness.

Enter a More Modern Risk Approach: Continuous Risk Management

Here's where we need the right prescription for managing risk. Continuous risk management is a modern approach to ensure that organizations not only take on the right risks in support of their strategic direction but also follow a holistic process to bring risk-based planning and mitigation oversight into the value chain -- a significant gap in the 3LOD approach and in most risk programs today. Continuous risk management unites the business's strategic and operational sides under a common goal -- a pursuit of value -- and formalizes a process, key decision points, and opportunities to change course as project conditions and risk tolerances change over time.

Continuous risk management as a model has two main components:

The first loop (identify, plan, analyze, and design) emphasizes strategic planning and the role of leaders in defining the pursuit of value to which risk and compliance projects will be aligned, ensuring that the pursuit is successful.

The second loop (implement, respond, measure, and monitor) highlights the implementation work that control owners and operations teams perform to keep the pursuit of value on track and optimize mitigation strategies as new risks unfold. Importantly, the model features key inflection points as teams cycle through both loops that allow them to reevaluate decisions and escalate issues accordingly.

Keys to Getting Continuous Risk Management Right

For organizations to get to continuous risk management, they must do these four things:

1. Use the 3LOD model the right way to define roles and ensure segregation of duties. Contrary to popular belief, 3LOD is not a regulatory requirement. If your organization has adopted 3LOD for segregation of duties, you don't need to abandon it. Instead, use 3LOD for its intended purpose: to appropriately define roles and responsibilities. Use the continuous risk model in combination with the 3LOD to answer the following: What work do we need to do? How should we do it? Who should be involved in the process?

2. Use the continuous risk model to identify gaps in your existing program and create a roadmap to improve the supporting processes, skills, and technology needed. Fortunately, you don't need to start from scratch to get to continuous risk management, as many pieces are likely already in place. For example, an organization's project management office might operate separately from its enterprise risk and compliance program, indicating a process and communication gap across multiple phases. A security program might operate an extensive tech stack but hasn't integrated the outputs to automatically measure and monitor the effectiveness of controls. Align the continuous risk management phases to your program, document how your current processes support these phases today, and prioritize pain points or disconnects that inhibit any phase.

3. Focus on the pursuit of value. A value is any goal, objective, regulatory requirement, or business outcome that the organization decides to pursue, such as acquiring a new company, entering a new market, or targeting a new customer segment. Value can be operational, like updating an internal process, changing critical suppliers, or maturing existing operational requirements. Value can also come from a technology initiative, such as launching a new application or service or modernizing legacy technology systems. Anchor risk management alongside and throughout the pursuit of value to establish the appropriate context, evaluate trade-offs, and support decision-making that accelerates, rather than impedes, growth, innovation, and resilience.

4. Use the inflection points in the model as opportunities to accelerate governance reviews and approvals. When organizations plan a mitigation project, they might use an assessment to secure budget approval, but at this point, leaders and mitigation owners disconnect, assuming that they'll be informed if the effort is derailed. This reinforces a sunk-cost scenario where controls are implemented with little regard to changing strategic or tactical situations until the end of the effort. Use the first inflection point to decide which risks will be accepted or transferred -- and which will be controlled and mitigated throughout the lifecycle. Use the change management inflection point for ongoing feedback or to course-correct. Combined, the initial risk decision and ongoing change management ensure tight collaboration between stakeholders, provide assurance that the organization is managing risk acceptably, and confirm that mitigation and compliance activities fully align with the pursuit of value.

Continuous risk management is conceptually simple yet requires organizations to interrogate their existing risk practices. This means thinking about which practices work well, which ones are lacking, which ones create unnecessary friction, and how technology can shift risk management to the left to accelerate business outcomes. Leave the side effects of poor risk management in the past and transform your program with a proactive solution.

About the Author: Cody Scott, Senior Analyst, Forrester. Cody is a senior analyst at Forrester covering cyber risk management, with a focus on cyber risk quantification (CRQ), enterprise risk management (ERM), and governance, risk, and compliance (GRC). Prior to Forrester, Cody served as the first chief cybersecurity risk officer of the National Aeronautics and Space Administration (NASA). He holds a BA in international affairs from the George Washington University and is a certified expert risk management framework professional.
  • Gigantic star has gone through a rapid transformation and may explode
    www.newscientist.com
An artist's impression of the star WOH G64 (ESO/L. Calçada)

One of the largest stars in the known universe is undergoing a strangely rapid transformation and may soon explode as a supernova. First catalogued in 1981, WOH G64 sits some 160,000 light years from Earth in the Large Magellanic Cloud, a small satellite galaxy of the Milky Way. It is one of the biggest red supergiants, the largest stars we know of. These are massive, cool stars that have run out of hydrogen fuel in their core and instead burn an envelope of hydrogen gas that surrounds them.
  • Why I'm deeply sceptical about comparisons between humans and machines
    www.newscientist.com
Comment and Technology. Humans learn very differently to machines, thanks to our biased, malleable memory, and that's a good thing, says Charan Ranganath, director of the Dynamic Memory Lab at the University of California, Davis. 19 February 2025.

Artificial intelligence has humans beat, at least when it comes to games like chess and Go, identifying the 3D structure of proteins, generating investment strategies; the list goes on and on. Some argue that models like ChatGPT are already at the threshold of human intelligence. OpenAI head Sam Altman even threw his unborn child under the bus, claiming "my kid is never gonna grow up being smarter than AI."

The capabilities of modern AI are certainly impressive, but I am deeply sceptical about comparisons between humans and machines. AI (at present and in the foreseeable future) isn't all that smart, or
• What's driving electricity demand? It isn't just AI and data centers.
    www.technologyreview.com
This article is from The Spark, MIT Technology Review's weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Electricity demand rose by 4.3% in 2024 and will continue to grow at close to 4% annually through 2027, according to a new report from the International Energy Agency. If that sounds familiar, it may be because there's been a constant stream of headlines about energy demand recently, largely because of the influx of data centers, especially those needed to power the AI that's spreading seemingly everywhere. These technologies are sucking up more power from the grid, but they're just a small part of a much larger story.

What's actually behind this demand growth is complicated. Much of the increase comes from China, India, and Southeast Asia. Air-conditioning, electric vehicles, and factories all play a role. And of course, we can't entirely discount the data centers. Here are a few key things to know about global electricity in 2025, and where things are going next.

China, India, and Southeast Asia are the ones to watch. Between now and 2027, about 85% of electricity demand growth is expected to come from developing and emerging economies. China is an especially major force, having accounted for over half of global electricity demand growth last year. The influence of even individual sectors in China is staggering. For example, in 2024, about 300 terawatt-hours' worth of electricity was used just to produce solar modules, batteries, and electric vehicles. That's as much electricity as Italy uses in a year. And this sector is growing quickly. A boom in heavy industry, an increase in the number of air conditioners, and a robust electric-vehicle market are all adding to China's power demand. India and Southeast Asia are also going to have above-average increases in demand, driven by economic growth and increased adoption of air conditioners. And there's a lot of growth yet to come, as 600 million people across Africa still don't have access to reliable electricity.

Data centers are a somewhat minor factor globally, but they can't be counted out. According to another IEA projection published last year, data centers are expected to account for less than 10% of global electricity demand growth between now and 2030. That's less than the expected growth due to other contributors like electric vehicles, air conditioners, and heavy industry. However, data centers are a major storyline for advanced economies like the US and many countries in Europe. As a group, these nations have largely seen flat or declining electricity demand for the last 15 years, in part because of efficiency improvements. Data centers are reversing that trend. Take the US, for example. The IEA report points to other research showing that the 10 states hosting the most data center growth saw a 10% increase in electricity demand between 2019 and 2023. Demand in the other 40 states declined by about 3% over the same period.

One caveat here is that nobody knows for sure what's going to happen with data centers in the future, particularly those needed to run AI. Projections are all over the place, and small changes could drastically alter the amount of energy required for the technology. (See the DeepSeek drama.) One bit I found interesting here is that China could see data centers emerge as yet another source of growing electricity demand in the future, with demand projected to double between now and 2027 (though, again, it's all quite uncertain).

What this all means for climate change is complicated.
Growth in electricity demand can be seen as a good thing for our climate. Using a heat pump rather than a natural-gas heating system can help reduce emissions even as it increases electricity use. But as we add demand to the grid, it's important to remember that in many places, the grid is still largely reliant on fossil fuels.

The good news in all this is that there's enough expansion in renewable and low-emissions electricity sources to cover the growth in demand. The rapid deployment of solar power alone contributes enough energy to cover half the demand growth expected through 2027. Nuclear power is also expected to see new heights soon, with recovery in France, restarts in Japan, and new reactors in China and India adding to a stronger global industry.

However, just adding renewables to meet electricity demand doesn't automatically pull fossil fuels off the grid; existing coal and natural-gas plants are still chugging along all over the world. To make a dent in emissions, low-carbon sources need to grow fast enough not only to meet new demand, but to replace existing dirtier sources.

It isn't inherently bad that the grid is growing. More people having air-conditioning and more factories making solar panels are all firmly in the positive column, I'd argue. But keeping up with this breakneck pace of demand growth is going to be a challenge, one that could have major effects on our ability to cut emissions.

Now read the rest of The Spark

Related reading

Transmission equipment is key to getting more power to more people. Here's why one developer won't quit fighting to connect US grids, as reported by my colleague James Temple.

Virtual power plants could help meet growing electricity demand for EVs in China, as Zeyi Yang lays out in this story.

Power demand from data centers is rising, and so are emissions. They're set to climb even higher, as James O'Donnell explains in this story from December.

Another thing

Competition is stiff in China's EV market, so some automakers are pivoting to humanoid robots. With profit margins dropping for electrified vehicles, financial necessity is driving creativity, as my new colleague Caiwei Chen explains in her latest story.

Keeping up with climate

The Trump administration has frozen funds and set hiring restrictions, and that could leave the US vulnerable to wildfire. (ProPublica)

US tariffs on imported steel and aluminum are set to go into effect next month, and they could be a problem for key grid equipment. The metals are used in transformers, which are in short supply. (Heatmap)

A maker of alternative jet fuel will get access to a $1.44 billion loan it was promised earlier this year. The Trump administration is exploring canceling promised financing, but this loan went ahead after a local representative pressured the White House. (Canary Media)

A third-generation oil and gas worker has pivoted to focus on drilling for geothermal systems. This Q&A is a fascinating look at what it might look like for more workers to move from fossil fuels to renewables. (Inside Climate News)

The Trump administration is working to fast-track hundreds of fossil-fuel projects. The US Army Corps of Engineers is speeding up permits using an emergency designation. (New York Times)

Japan's government is adopting new climate targets. The country aims to cut greenhouse-gas emissions by more than 70% from 2013 levels over the next 15 years and reach net zero by 2050. Expansion of renewables and nuclear power will be key in the plan. (Associated Press)

A funding freeze has caused a whole lot of confusion about the state of federal financing for EV chargers in the US. But there's still progress on building chargers, both from government funds already committed and from the private sector. (Wired)

The US National Oceanic and Atmospheric Administration (NOAA) is the latest target of the Trump administration's cuts. NOAA provides weather forecasts, and private industry is reliant on the agency's data. (Bloomberg)