WWW.TECHSPOT.COM
Steam survey sees a shake-up as a new top graphics card is revealed

What just happened? There's a new top graphics card on the Steam survey. After seeing a comparatively huge 4% increase in users, the RTX 4060 has replaced the RTX 3060 as the most popular GPU among participants of Valve's survey. There was also an unexpected 5% jump in the number of people using Intel CPUs, which points to February being another one of those months where the results were anomalous.

February saw the user shares of the RTX 4060 and RTX 4060 Ti jump by 3.97% and 3.11%, respectively. That's a huge increase compared to the less-than-1% changes we usually see each month. The rest of the top 12 performers of the month are made up of variants of RTX xx60 and RTX xx70 cards from the Lovelace, Ampere, and Turing generations.

[Chart: Best-performing GPUs among Steam survey participants during February]

The RTX 4060 has been catching up to the RTX 3060 for a while now, so it's not too surprising that it has taken the lead, but leapfrogging its predecessor with a 4% increase is unusual. As with the top performers, the most popular cards are made up of xx60 and xx70 GPUs.

February also saw the overall number of Nvidia GPUs on the table increase. Team Green now accounts for 83% of products on the list. AMD has 11.5%, and Intel has 5.2%.

[Chart: Most popular GPUs among Steam survey participants during February]

Another strange result was in the CPU section. AMD has spent months eroding Intel's lead, with Team Red hitting a record 36.19% share in January. But February saw AMD fall 5% as Intel rose by the same amount, a contrast to what we've seen in the retail space this year.

In further evidence that this is one of those weird Steam survey months, Windows 10, which had fallen below Windows 11 as the most-used OS among participants, suddenly retook the top spot after its share skyrocketed by 10.5% as Windows 11 dropped just over 9%. Windows 10 reaches its end-of-support date on October 14, 2025. According to Statcounter, its global user share has dropped over the last two months, from 62.7% in December to 58.7% in February, while Windows 11 has seen its share climb.

Elsewhere on Valve's survey, 32GB suddenly became the most popular amount of system RAM following a 13.7% gain, while the share of those using 16GB fell by 8%. There's also a new most-popular language, Simplified Chinese, which saw its usage go up 20% to take a 50% overall share as English fell 10%.

There was similar strangeness in the survey results back in October 2023, with unusually large changes in a lot of categories, including Chinese going up almost 14%. Things returned to normal a month later, so March's survey could look very different.
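The movements quoted above are survey-share arithmetic: each card's figure is a percentage of respondents, so a monthly change is a difference in percentage points rather than relative growth. Below is a minimal illustrative sketch of that calculation; the share values are hypothetical placeholders, not Valve's published numbers.

```python
# Hypothetical month-over-month Steam survey shares (percent of respondents).
# These numbers are placeholders for illustration, not Valve's actual data.
prev_month = {"RTX 4060": 4.6, "RTX 3060": 7.0, "RTX 4060 Ti": 3.5}
this_month = {"RTX 4060": 8.6, "RTX 3060": 6.9, "RTX 4060 Ti": 6.6}

# A card "leapfrogs" another when its share delta pushes it past the previous leader.
for gpu, share in sorted(this_month.items(), key=lambda kv: kv[1], reverse=True):
    delta = share - prev_month[gpu]
    print(f"{gpu:12s} {share:5.2f}%  ({delta:+.2f} percentage points month over month)")
```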
-
WWW.TECHSPOT.COM
Former Intel CEO has a radical solution for the company: Fire the board and rehire Pat Gelsinger

A hot potato: Craig Barrett is firing shots at Intel's board over its proposal to break the company up into multiple smaller pieces and sell parts of the business to TSMC. The former Intel CEO called it the "dumbest idea around," one that would squander the "accomplishments" made under Pat Gelsinger's leadership.

Barrett, who ran Intel from 1998 to 2005, didn't mince words in his opinion piece published in Fortune, where he expressed a starkly different take. He says the only viable path forward for the company is to stay unified and double down on its latest 18A process node and imaging technologies like high-NA EUV lithography.

Perhaps even more radically, Barrett contends that Gelsinger should be brought back. That's because under the ousted CEO, Intel finally regained technical parity with TSMC at the 2nm node after years of stagnation, according to Barrett.

"Pat Gelsinger did a great job at resuscitating the technology development team," Barrett wrote, highlighting Intel's lead in novel areas like backside power delivery in addition to the 18A process itself. He added that a better move than simply breaking the company down might be to fire the board and rehire Gelsinger to "finish the job he has aptly handled over the past few years."

The critique pulls no punches against the "well-meaning but off target" current Intel board members. He compared them, in a seemingly sarcastic way, to "two academics and two former government bureaucrats, just the type of folks you want dictating strategy in the ruggedly competitive semiconductor industry."

Barrett went so far as to place the blame for Intel's poor performance squarely on the shoulders of the board members, saying "they bear ultimate responsibility for what has happened to Intel over the last decade."

Where Intel faltered in the past, per Barrett, was in its outdated fabrication technologies. But now, with 18A bringing Intel's foundry ops up to speed, a split would only "introduce complications" rather than solve anything. Instead, he advises Intel to focus on "good customer service, fair pricing, guaranteed capacity, and a clear separation of chip designers from their foundry customers."

While he opposes breaking Intel up entirely, Barrett does support splitting the company into a design firm and a separate foundry, as long as the foundry is not sold.

Barrett signs off by noting that his criticism of the plan to split Intel up stems from understanding "the intricacies of the semiconductor industry." He derides the plan as a "simplistic solution" that ignores just how difficult and time-consuming it is to develop and ramp leading-edge manufacturing tech.

"It takes years to develop a new semiconductor manufacturing technology and ramp it into volume production. Intel is about to regain its leadership in this area, and the dumbest idea around is to stall that from happening by slicing the company into pieces," he declared.
-
WWW.DIGITALTRENDS.COM
See the first images of the Blue Ghost lander on the surface of the moon

With the arrival of the Blue Ghost lander on the moon this weekend, get ready for an influx of stunning new images from our planet's natural satellite. The mission, from Firefly Aerospace, touched down in the moon's Mare Crisium region yesterday, Sunday, March 2, and the company has already shared the first images captured by the lander from its new home.

As well as a striking image showing the shadow of the lander on the moon's surface, seen above, another image shows the lander on the moon with the Earth visible in the night sky.

[Image caption: This image shows the Moon's surface, Earth on the horizon, and Blue Ghost's top deck with its solar panel, X-band antenna (left), and LEXI payload (right) in the view. Firefly Aerospace]

This also shows two of the lander's instruments: the X-band antenna for sending data back to Earth, and the Lunar Environment Heliospheric X-ray Imager, or LEXI, telescope. This instrument will use X-rays to study how solar winds interact with the Earth's magnetic field, and is one of several NASA instruments on board the lander.

Regarding the landing, NASA acting Administrator Janet Petro said in a statement: "This incredible achievement demonstrates how NASA and American companies are leading the way in space exploration for the benefit of all. We have already learned many lessons, and the technological and science demonstrations onboard Firefly's Blue Ghost Mission 1 will improve our ability to not only discover more science, but to ensure the safety of our spacecraft instruments for future human exploration, both in the short term and long term."

A further image was also released, showing a top-down view of the surface with the lander's thrusters visible as well.

[Image caption: The image shows the Moon's surface and a top-down view of the lander's RCS thrusters (center) with a sun glare on the right side. Firefly Aerospace]

Deployment of the lander's instruments has already begun, and today Firefly announced that the X-band antenna has been fully deployed. Compared to the lander's S-band antennae, which are used to send lower-quality images, the X-band antenna will allow higher-quality images, science data, and even video to be sent back from the surface.

"The science and technology we send to the Moon now helps prepare the way for future NASA exploration and long-term human presence to inspire the world for generations to come," said Nicky Fox, NASA's associate administrator for science. "We're sending these payloads by working with American companies, which supports a growing lunar economy."
-
WWW.DIGITALTRENDS.COM
Nvidia's sub-$350 GPU is now the most popular card on Steam

Nvidia's RTX 4060 has officially become the most widely used graphics card among gamers on Steam, thanks to its affordable price and solid performance for 1080p gaming. According to the latest Steam Hardware and Software Survey, the budget-friendly GPU has steadily gained traction since its mid-2023 launch, appealing to casual gamers, esports players, and budget-conscious PC builders.

For years, older budget GPUs like the GTX 1650 and RTX 3060 dominated Steam's charts. However, the RTX 4060 has now surpassed both, securing the top position with an 8.57% market share in February 2025. Its rise can be attributed to competitive pricing (around $300 to $350), low power consumption, and modern gaming features like DLSS 3 and ray tracing support.

Compared to its predecessor, the RTX 3060, the RTX 4060 offers improvements in ray tracing, DLSS 3 frame generation, and overall efficiency. While some criticized its 8GB of VRAM and narrower memory bus, it remains a solid choice for 1080p gaming, which aligns with the majority of Steam users' setups. At the time of writing, the RTX 4060 is available anywhere from $300 to $350, which is similar to the RTX 3060.

The RTX 3060, previously a dominant choice, now holds 6.87% of the market, reflecting a 1.67% increase from the previous month. The RTX 4060 Ti has also seen significant growth, rising by 3.11% to reach a 6.56% share. Similarly, the RTX 4070 experienced a 2.54% increase, bringing its total to 5.43%.

The latest survey results highlight Nvidia's overwhelming control of the PC gaming GPU market. The company occupies nearly all of the top spots, with AMD and Intel struggling to make significant gains in the consumer segment. Even as Nvidia moves forward with its RTX 50-series launch later this year, the affordability and accessibility of the RTX 4060 keep it relevant for budget-conscious gamers.

As newer graphics cards hit the market, including the RTX 50-series and AMD's Radeon 9000 range, it will be interesting to see if the RTX 4060 can maintain its lead or if another mid-range option will dethrone it in the coming months.
-
WWW.WSJ.COM
Google's New Tech Means Video Calls May Not Be the Death of Us After All

Google and HP's videoconferencing platform, Project Starline, aims to make virtual meetings feel more like in-person interactions. Illustration: Thomas R. Lechleiter

Google and HP are scheduled to release a 3-D video communications platform this year that works without requiring users to wear glasses or a headset, an effort to infuse virtual meetings with a greater sense that people are together in the same space.

Video calls famously turned heel in the past few years, transforming from a panacea of the early pandemic into a soul-sapping burden for workers. Alphabet's Google and HP think Project Starline is a breakthrough sufficient to take virtual communications to the next level. And based on a shockingly visceral remote conversation I just had at HP's headquarters in Palo Alto, Calif., I'd say they are on to something.
-
ARSTECHNICA.COM
AI versus the brain and the race for general intelligence
We already have an example of general intelligence, and it doesn't look like AI.
John Timmer, Mar 3, 2025, 7:00 am

There's no question that AI systems have accomplished some impressive feats, mastering games, writing text, and generating convincing images and video. That's gotten some people talking about the possibility that we're on the cusp of AGI, or artificial general intelligence. While some of this is marketing fanfare, enough people in the field are taking the idea seriously that it warrants a closer look.

Many arguments come down to the question of how AGI is defined, which people in the field can't seem to agree upon. This contributes to estimates of its advent that range from "it's practically here" to "we'll never achieve it." Given that range, it's impossible to provide any sort of informed perspective on how close we are.

But we do have an existing example of AGI without the "A": the intelligence provided by the animal brain, particularly the human one. And one thing is clear: The systems being touted as evidence that AGI is just around the corner do not work at all like the brain does. That may not be a fatal flaw, or even a flaw at all. It's entirely possible that there's more than one way to reach intelligence, depending on how it's defined. But at least some of the differences are likely to be functionally significant, and the fact that AI is taking a very different route from the one working example we have is likely to be meaningful.

With all that in mind, let's look at some of the things the brain does that current AI systems can't.

Defining AGI might help

Artificial general intelligence hasn't really been defined. Those who argue that it's imminent are either vague about what they expect the first AGI systems to be capable of or simply define it as the ability to dramatically exceed human performance at a limited number of tasks. Predictions of AGI's arrival in the intermediate term tend to focus on AI systems demonstrating specific behaviors that seem human-like. The further one goes out on the timeline, the greater the emphasis on the "G" of AGI and its implication of systems that are far less specialized.

But most of these predictions are coming from people working in companies with a commercial interest in AI. It was notable that none of the researchers we talked to for this article were willing to offer a definition of AGI. They were, however, willing to point out how current systems fall short.

"I think that AGI would be something that is going to be more robust, more stable, not necessarily smarter in general but more coherent in its abilities," said Ariel Goldstein, a researcher at Hebrew University of Jerusalem. "You'd expect a system that can do X and Y to also be able to do Z and T. Somehow, these systems seem to be more fragmented in a way. To be surprisingly good at one thing and then surprisingly bad at another thing that seems related."

"I think that's a big distinction, this idea of generalizability," echoed neuroscientist Christa Baker of NC State University. "You can learn how to analyze logic in one sphere, but if you come to a new circumstance, it's not like now you're an idiot."

Mariano Schain, a Google engineer who has collaborated with Goldstein, focused on the abilities that underlie this generalizability.
He mentioned both long-term and task-specific memory and the ability to deploy skills developed in one task in different contexts. These are limited to nonexistent in existing AI systems.

Beyond those specific limits, Baker noted that "there's long been this very human-centric idea of intelligence that only humans are intelligent." That's fallen away within the scientific community as we've studied more about animal behavior. But there's still a bias to privilege human-like behaviors, such as the human-sounding responses generated by large language models.

The fruit flies that Baker studies can integrate multiple types of sensory information, control four sets of limbs, navigate complex environments, satisfy their own energy needs, produce new generations of brains, and more. And they do all that with brains that contain under 150,000 neurons, far fewer than current large language models.

These capabilities are complicated enough that it's not entirely clear how the brain enables them. (If we knew how, it might be possible to engineer artificial systems with similar capacities.) But we do know a fair bit about how brains operate, and there are some very obvious ways that they differ from the artificial systems we've created so far.

Neurons vs. artificial neurons

Most current AI systems, including all large language models, are based on what are called neural networks. These were intentionally designed to mimic how some areas of the brain operate, with large numbers of artificial neurons taking an input, modifying it, and then passing the modified information on to another layer of artificial neurons. Each of these artificial neurons can pass the information on to multiple instances in the next layer, with different weights applied to each connection. In turn, each of the artificial neurons in the next layer can receive input from multiple sources in the previous one. After passing through enough layers, the final layer is read and transformed into an output, such as the pixels in an image that correspond to a cat.

While that system is modeled on the behavior of some structures within the brain, it's a very limited approximation. For one, all artificial neurons are functionally equivalent; there's no specialization. In contrast, real neurons are highly specialized; they use a variety of neurotransmitters and take input from a range of extra-neural inputs like hormones. Some specialize in sending inhibitory signals while others activate the neurons they interact with. Different physical structures allow them to make different numbers of connections.

In addition, rather than simply forwarding a single value to the next layer, real neurons communicate through an analog series of activity spikes, sending trains of pulses that vary in timing and intensity. This allows for a degree of non-deterministic noise in communications.

Finally, while organized layers are a feature of a few structures in brains, they're far from the rule. "What we found is it's, at least in the fly, much more interconnected," Baker told Ars. "You can't really identify this strictly hierarchical network."

With near-complete connection maps of the fly brain becoming available, she told Ars that researchers are "finding lateral connections or feedback projections, or what we call recurrent loops, where we've got neurons that are making a little circle and connectivity patterns. I think those things are probably going to be a lot more widespread than we currently appreciate."

While we're only beginning to understand the functional consequences of all this complexity, it's safe to say that it allows networks composed of actual neurons far more flexibility in how they process information, a flexibility that may underlie how these neurons get re-deployed in a way that these researchers identified as crucial for some form of generalized intelligence.

But the differences between neural networks and the real-world brains they were modeled on go well beyond the functional differences we've talked about so far. They extend to significant differences in how these functional units are organized.

The brain isn't monolithic

The neural networks we've generated so far are largely specialized systems meant to handle a single task. Even the most complicated tasks, like the prediction of protein structures, have typically relied on the interaction of only two or three specialized systems. In contrast, the typical brain has a lot of functional units. Some of these operate by sequentially processing a single set of inputs in something resembling a pipeline. But many others can operate in parallel, in some cases without any input activity going on elsewhere in the brain.

To give a sense of what this looks like, let's think about what's going on as you read this article. Doing so requires systems that handle motor control, which keep your head and eyes focused on the screen. Part of this system operates via feedback from the neurons that are processing the read material, causing small eye movements that help your eyes move across individual sentences and between lines.

Separately, there's part of your brain devoted to telling the visual system what not to pay attention to, like the icon showing an ever-growing number of unread emails. Those of us who can read a webpage without even noticing the ads on it presumably have a very well-developed system in place for ignoring things. Reading this article may also mean you're engaging the systems that handle other senses, getting you to ignore things like the noise of your heating system coming on while remaining alert for things that might signify threats, like an unexplained sound in the next room.

The input generated by the visual system then needs to be processed, from individual character recognition up to the identification of words and sentences, processes that involve systems in areas of the brain involved in both visual processing and language. Again, this is an iterative process, where building meaning from a sentence may require many eye movements to scan back and forth across it, improving reading comprehension and requiring many of these systems to communicate among themselves.

As meaning gets extracted from a sentence, other parts of the brain integrate it with information obtained in earlier sentences, which tends to engage yet another area of the brain, one that handles a short-term memory system called working memory. Meanwhile, other systems will be searching long-term memory, finding related material that can help the brain place the new information within the context of what it already knows.
Still other specialized brain areas are checking for things like whether there's any emotional content to the material you're reading. All of these different areas are engaged without you being consciously aware of the need for them.

In contrast, something like ChatGPT, despite having a lot of artificial neurons, is monolithic: No specialized structures are allocated before training starts. That's in sharp contrast to a brain. "The brain does not start out as a bag of neurons and then as a baby it needs to make sense of the world and then determine what connections to make," Baker noted. "There are already a lot of constraints and specifics that are already set up."

Even in cases where it's not possible to see any physical distinction between cells specialized for different functions, Baker noted that we can often find differences in what genes are active.

In contrast, pre-planned modularity is relatively new to the AI world. In software development, "this concept of modularity is well established, so we have the whole methodology around it, how to manage it," Schain said. "It's really an aspect that is important for maybe achieving AI systems that can then operate similarly to the human brain." There are a few cases where developers have enforced modularity on systems, but Goldstein said these systems need to be trained with all the modules in place to see any gain in performance.

None of this is saying that a modular system can't arise within a neural network as a result of its training. But so far, we have very limited evidence that they do. And since we mostly deploy each system for a very limited number of tasks, there's no reason to think modularity will be valuable.

There is some reason to believe that this modularity is key to the brain's incredible flexibility. The region that recognizes emotion-evoking content in written text can also recognize it in music and images, for example. But the evidence here is mixed. There are some clear instances where a single brain region handles related tasks, but that's not consistently the case; Baker noted that, "When you're talking humans, there are parts of the brain that are dedicated to understanding speech, and there are different areas that are involved in producing speech."

This sort of re-use would also provide an advantage in terms of learning, since behaviors developed in one context could potentially be deployed in others. But as we'll see, the differences between brains and AI when it comes to learning are far more comprehensive than that.

The brain is constantly training

Current AIs generally have two states: training and deployment. Training is where the AI learns its behavior; deployment is where that behavior is put to use. This isn't absolute, as the behavior can be tweaked in response to things learned during deployment, like finding out it recommends eating a rock daily. But for the most part, once the weights among the connections of a neural network are determined through training, they're retained.

That may be starting to change a bit, Schain said. "There is now maybe a shift in similarity where AI systems are using more and more what they call the test time compute, where at inference time you do much more than before, kind of a parallel to how the human brain operates," he told Ars. But it's still the case that neural networks are essentially useless without an extended training period.

In contrast, a brain doesn't have distinct learning and active states; it's constantly in both modes. In many cases, the brain learns while doing.
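To make the train-then-freeze versus learn-while-doing contrast concrete, here is a minimal, illustrative sketch; it is not code from the article or from any production AI system, just a one-weight model with made-up numbers. It is first trained conventionally, then evaluated on a task that has shifted, once with frozen weights and once with continual error-driven updates, loosely anticipating the error-signal idea in the jump-shot example that follows.

```python
# A toy contrast between "train, then freeze" and "learn while doing".
# Everything here is illustrative: a single linear "neuron", invented numbers.
import numpy as np

rng = np.random.default_rng(0)
lr = 0.1  # learning rate

# Phase 1: conventional training on y = 2x + 1 (plus a little noise).
w, b = 0.0, 0.0
for _ in range(500):
    x = rng.uniform(-1, 1)
    y = 2 * x + 1 + rng.normal(scale=0.05)
    err = (w * x + b) - y          # the error signal
    w -= lr * err * x              # gradient step for squared error
    b -= lr * err

# Phase 2: the task shifts (true slope becomes 3). Compare frozen weights
# against weights that keep being nudged by each new error signal.
def average_error(update_weights, steps=300, true_slope=3.0):
    w2, b2, total = w, b, 0.0
    for _ in range(steps):
        x = rng.uniform(-1, 1)
        y = true_slope * x + 1
        err = (w2 * x + b2) - y
        total += err ** 2
        if update_weights:         # "learning while doing"
            w2 -= lr * err * x
            b2 -= lr * err
    return total / steps

print("frozen after training, shifted task: ", round(average_error(False), 4))
print("continually updating, shifted task:  ", round(average_error(True), 4))
```

The frozen model keeps repeating its old mistake on the shifted task, while the continually updated one shrinks its error, which is the behavioral gap being described here.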
Baker described that in terms of learning to take jump shots: "Once you have made your movement, the ball has left your hand, it's going to land somewhere. So that visual signal, that comparison of where it landed versus where you wanted it to go, is what we call an error signal. That's detected by the cerebellum, and its goal is to minimize that error signal. So the next time you do it, the brain is trying to compensate for what you did last time."

It makes for very different learning curves. An AI is typically not very useful until it has had a substantial amount of training. In contrast, a human can often pick up basic competence in a very short amount of time (and without massive energy use). "Even if you're put into a situation where you've never been before, you can still figure it out," Baker said. "If you see a new object, you don't have to be trained on that a thousand times to know how to use it. A lot of the time, [if] you see it one time, you can make predictions."

As a result, while an AI system with sufficient training may ultimately outperform the human, the human will typically reach a high level of performance faster. And unlike an AI, a human's performance doesn't remain static. Incremental improvements and innovative approaches are both still possible. This also allows humans to adjust to changed circumstances more readily. An AI trained on the body of written material up until 2020 might struggle to comprehend teen-speak in 2030; humans could at least potentially adjust to the shifts in language. (Though maybe an AI trained to respond to confusing phrasing with "get off my lawn" would be indistinguishable.)

Finally, since the brain is a flexible learning device, the lessons learned from one skill can be applied to related skills. So the ability to recognize tones and read sheet music can help with the mastery of multiple musical instruments. Chemistry and cooking share overlapping skillsets. And when it comes to schooling, learning how to learn can be used to master a wide range of topics.

In contrast, it's essentially impossible to use an AI model trained on one topic for much else. The biggest exceptions are large language models, which seem to be able to solve problems on a wide variety of topics if they're presented as text. But here, there's still a dependence on sufficient examples of similar problems appearing in the body of text the system was trained on. To give an example, something like ChatGPT can seem to be able to solve math problems, but it's best at solving things that were discussed in its training materials; giving it something new will generally cause it to stumble.

Déjà vu

For Schain, however, the biggest difference between AI and biology is in terms of memory. For many AIs, "memory" is indistinguishable from the computational resources that allow them to perform a task and that were formed during training. For the large language models, it includes both the weights of connections learned then and a narrow "context window" that encompasses any recent exchanges with a single user. In contrast, biological systems have a lifetime of memories to rely on.

"For AI, it's very basic: It's like the memory is in the weights [of connections] or in the context. But with a human brain, it's a much more sophisticated mechanism, still to be uncovered. It's more distributed. There is the short term and long term, and it has to do a lot with different timescales. Memory for the last second, a minute and a day or a year or years, and they all may be relevant."

This lifetime of memories can be key to making intelligence general. It helps us recognize the possibilities and limits of drawing analogies between different circumstances or applying things learned in one context versus another. It provides us with insights that let us solve problems that we've never confronted before. And, of course, it also ensures that the horrible bit of pop music you were exposed to in your teens remains an earworm well into your 80s.

The differences between how brains and AIs handle memory, however, are very hard to describe. AIs don't really have distinct memory, while the brain's use of memory when handling any task more sophisticated than navigating a maze is generally so poorly understood that it's difficult to discuss at all. All we can really say is that there are clear differences there.

Facing limits

It's difficult to think about AI without recognizing the enormous energy and computational resources involved in training one. And in this case, it's potentially relevant. Brains have evolved under enormous energy constraints and continue to operate using well under the energy that a daily diet can provide. That has forced biology to figure out ways to optimize its resources and get the most out of the resources it does commit to.

In contrast, the story of recent developments in AI is largely one of throwing more resources at them. And plans for the future seem to (so far at least) involve more of this, including larger training data sets and ever more artificial neurons and connections among them. All of this comes at a time when the best current AIs are already using three orders of magnitude more neurons than we'd find in a fly's brain and have nowhere near the fly's general capabilities.

It remains possible that there is more than one route to those general capabilities and that some offshoot of today's AI systems will eventually find a different route. But if it turns out that we have to bring our computerized systems closer to biology to get there, we'll run into a serious roadblock: We don't fully understand the biology yet.

"I guess I am not optimistic that any kind of artificial neural network will ever be able to achieve the same plasticity, the same generalizability, the same flexibility that a human brain has," Baker said. "That's just because we don't even know how it gets it; we don't know how that arises. So how do you build that into a system?"

John Timmer, Senior Science Editor. John is Ars Technica's science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.
-
WWW.INFORMATIONWEEK.COM
How to Create a Winning AI Strategy
Lisa Morgan, Freelance Writer | March 3, 2025 | 8 Min Read

Artificial intelligence continues to become more pervasive as organizations adopt it to gain a competitive advantage, reduce costs, and deliver better customer experiences. All organizations have an AI strategy, whether by design or default. The former helps ensure the company is realizing greater value, simply because its leaders are putting more thought into it and working cross-functionally to make it happen, both strategically and tactically.

"It's very much back to the business, so what are the business objectives? And then within that, how can AI best help me achieve those objectives?" says Anand Rao, distinguished service professor, applied data science and artificial intelligence at Carnegie Mellon University. "From there, [it] pretty much breaks down into two things: AI automates tasks so that you can be more efficient, and it helps you make better decisions, and with that comes a better customer experience, more revenue, or more consistent quality."

Elements of a Winning AI Strategy

Kevin Surace, CEO at autonomous testing platform Appvance, says the three elements of an effective AI strategy are clarity, alignment, and agility.

"A winning AI strategy starts with a clear vision of what problems you're solving and why," says Surace. "It aligns AI initiatives with business goals, ensuring every project delivers measurable value. And it builds in agility, allowing the organization to adapt as technology and market conditions evolve."

Will Rowlands-Rees, chief AI officer at eLearning, AI services, and translation and localization solution provider Lionbridge, agrees.

"It is critical to align your AI strategy and investments with your overall business strategy -- they cannot be divorced from each other," says Rowlands-Rees. "When applied correctly, AI is a powerful tool that can accelerate your organization's ability to solve customer problems and streamline operations and therefore drive revenue growth. This offensive approach will organically lead to cost optimization as efficiencies emerge from streamlined processes and improved outcomes."

Brad O'Brien, partner at global consultancy Baringa's US Financial Services practice, advocates having a clear governance framework, including the definition of roles and responsibilities, setting guiding principles, and ensuring accountability at all levels.

"Comprehensive risk management practices are essential to identify, assess, and mitigate AI-related risks, including regular audits, bias assessments and robust data governance," says O'Brien. "Staying informed about, and compliant with, evolving AI regulations, such as the EU AI Act and emerging US regulations, is vital. Maintaining transparency and thorough documentation of the entire AI lifecycle builds trust with stakeholders. Engaging key stakeholders, including board members, employees and external partners, ensures alignment and support for AI initiatives. Continuous improvement, based on feedback, new data and technological advancements, is also a critical component."

Ashwin Rajeeva, co-founder and CTO at enterprise data observability company Acceldata, believes a successful AI strategy blends a clear business vision with technical excellence.

"It starts with a strong data foundation; reliable, high-quality data is non-negotiable. Scalability and adaptability are also critical as AI technologies evolve rapidly," says Rajeeva. "Ethical considerations must be embedded early, ensuring transparency and fairness in AI outcomes. Most importantly, it should create tangible business value while maintaining the flexibility to adapt to future innovations."

How to Avoid Common Mistakes

One mistake is assuming that generative AI replaces other forms of AI. That's incorrect because traditional types of AI -- such as computer vision, predictions, and recommendations -- use different types of models.

"You still need to look at your use cases and standard methods. Look across the organization, look at the value chain elements, and then look at where traditional AI works and where generative AI would work, and what some of the more agent kind of stuff would work," says CMU's Rao. "Then, essentially start pulling all of the use cases together and have some method of prioritizing."

The accelerating rate at which AI technology is advancing is also having an effect because companies can't keep up, so organizations are questioning whether they should buy, build, or wait.

"Change with respect to AI, and especially Gen AI, is moving very fast. It's moving so much faster that even the technology companies can't keep pace," says Rao.

AI is also not a solution to all problems. Like any other technology, it's simply a tool that needs to be understood and managed.

"Proper AI strategy adoption will require iteration, experimentation, and, inevitably, failure to end up at real solutions that move the needle. This is a process that will require a lot of patience," says Lionbridge's Rowlands-Rees. "[E]veryone in the organization needs to understand and buy in to the fact that AI is not just a passing fad -- it's the modern approach to running a business. The companies that don't embrace AI in some capacity will not be around in the future to prove everyone else wrong."

Organizations face several challenges when implementing AI strategies. For example, regulatory uncertainty is a significant hurdle, and navigating the complex and evolving landscape of AI regulations across different jurisdictions can be daunting.

"Ensuring data privacy and security is another major challenge, as organizations must protect sensitive data used by AI systems and comply with privacy laws. Mitigating biases in AI models to prevent unfair treatment and ensure compliance with anti-discrimination laws is also critical," says Baringa's O'Brien. "Additionally, the 'black box' nature of AI systems poses challenges in providing clear explanations of AI decisions to stakeholders and regulators. Allocating sufficient resources, including skilled personnel and financial investment, is necessary to support AI initiatives."

In his view, common mistakes in AI strategy implementation include:

- A lack of clear governance frameworks and accountability structures.
- Insufficient risk management practices, such as overlooking comprehensive risk assessments and bias mitigation.
- Poor data management, including neglecting data privacy and security, which can lead to potential breaches and regulatory non-compliance.
- Inadequate transparency in documenting and explaining AI processes, which results in a lack of trust among stakeholders.
- Underestimating resource needs, such as not allocating sufficient skilled personnel and financial investment, which can hinder AI initiatives.
- Encountering resistance from employees and stakeholders who hesitate to embrace AI technologies.

"[P]rioritize governance by establishing clear frameworks and ensuring accountability at all levels. Stay informed about evolving AI regulations and ensure compliance with all relevant standards," says O'Brien. "Focus on transparency by maintaining thorough documentation of AI processes and decisions to build trust with stakeholders. Invest in regular training for employees on AI policies, risk management, and ethical considerations. Engage key stakeholders in the design and implementation of AI initiatives to ensure alignment and support. Finally, embrace continuous improvement by regularly updating and refining AI models and strategies based on feedback, new data and technological advancements."

One of the biggest mistakes Shobhit Varshney, VP and senior partner, Americas AI leader at IBM Consulting, has observed is organizations selecting AI use cases based on speed of implementation rather than properly articulated business impact.

"Many organizations adopt AI because they want to stay competitive, but they fail to realize that they aren't focusing on the use cases that will create significant long-term value. It's common to start with simple, easy-to-automate tasks, but this approach can be limiting," says Varshney. "Instead, organizations should focus on areas where AI can have the greatest impact and have enough instrumentation to capture metrics and continuously iterate and evolve the solution. The best starting point for AI use cases is unique to each business, and it's important to identify areas within the organization that could benefit from improvement."

He also says an all-too-common mistake is automating an existing process.

"We need to rethink workflows to truly unlock the power of these exponential technologies. As we evolve to agentic AI, we need to ensure that we rethink the optimal way to delegate specific tasks to agents and play to the strengths of humans and AI," says Varshney.

Jim Palmer, chief AI officer at AI-native business and customer communications platform Dialpad, says a common challenge is ensuring AI models have access to accurate, up-to-date data and can seamlessly integrate with existing workflows.

"There's a gap between AI's theoretical potential and its practical business application. Companies invest millions in AI initiatives that prioritize speed to market over actual utility," Palmer says.

Bhadresh Patel, COO of global professional services firm RGP, thinks one of the biggest challenges organizations face is the significant gap between ideation and execution.

"We often see organizations set up an AI function and expect miracles, but this approach simply doesn't work. This is why it's important to prioritize the pockets of use cases where AI can have the biggest impact on the business," says Patel. "Another challenge organizations often face is when functional people do not take the time to understand the capabilities and limitations of the tools they have at their disposal. Leaders must understand why they're making new AI investments and what the overlap is in terms of existing capabilities, training and user knowledge."

Acceldata's Rajeeva says organizations often grapple with fragmented or poor-quality data, which undermines AI outcomes.

"Scaling AI initiatives from proof of concept to enterprise-wide deployment can be daunting, especially without robust operational frameworks. Additionally, balancing innovation with regulatory and ethical standards is challenging. A lack of skilled talent and clear success metrics further complicates these efforts," says Rajeeva. "One significant misstep is treating AI as a technology-first initiative, ignoring the importance of data quality and infrastructure. Organizations sometimes over-invest in sophisticated models without aligning them with practical business goals. Another common mistake is failing to plan for scaling AI, leading to operational bottlenecks. Finally, insufficient monitoring often results in biased or unreliable AI systems."

And remember, foresight and agility are more valuable than 20-20 hindsight.

"Start with the end in mind. Define success metrics before you write a single line of code. Build cross-functional teams that can bridge the gap between business and technology," says Appvance's Surace. "And remember, an AI strategy isn't static -- it's a living, evolving framework that should grow with your organization and its goals."

About the Author
Lisa Morgan, Freelance Writer
Lisa Morgan is a freelance writer who covers business and IT strategy and emerging technology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites including tech pubs, The Washington Post and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.
-
WWW.BUSINESSINSIDER.COM
Frontier Airlines passenger who punched window and was restrained by crew and other fliers is charged

- A Frontier passenger started punching the seat in front of him and a window, an FBI affidavit stated.
- Raul Ramos Tamayo was restrained by crew and other passengers on the flight from Denver to Houston.
- Police met the flight when it landed, and he could face a prison sentence or fine if convicted.

A Frontier Airlines passenger could be given a prison sentence after being restrained midflight by other travelers last month.

Raul Ramos Tamayo, 31, was on Frontier flight 4856 from Denver to Houston, per an affidavit from an FBI special agent. About 30 minutes after takeoff, he started punching the seat in front of him, witnesses are said to have told the FBI. After crew members approached him, Tamayo is alleged to have started punching a window, resulting in damage.

The affidavit stated that the cabin crew then asked for help from any law enforcement officers or able-bodied passengers, several of whom helped restrain Tamayo with flex cuffs around his wrists and ankles. Tamayo was then put back into a seat and surrounded by the passengers who subdued him for the rest of the 2-hour flight, it added.

Officers from the Houston Police Department met the flight at the gate at George Bush Intercontinental Airport. Tamayo was charged with destruction of aircraft or aircraft facilities. If convicted, he could face a lengthy prison sentence and a fine of up to $250,000.

The FBI affidavit cited an internal airline report that said the damage included a cracked window, broken window shade, and broken outer lining of the window. The total cost of the damage was estimated at $1,546.

"Based on my experience as a Special Agent, I know that a passenger on an aircraft must not cause damage to the aircraft, especially when the aircraft is in flight," the affidavit read.

While cases of unruly passengers remain above pre-pandemic levels, it isn't clear that they always result in prosecutions. Some airlines are taking more legal action themselves. Ryanair, Europe's biggest airline, said in January it was suing a passenger for about $15,500 because the individual caused a flight to divert.

"When the public flies, they need to feel confident that they are doing so under safe conditions," said Nicholas J. Ganjei, US Attorney for the Southern District of Texas. "Given the fact that greater Houston has two major international airports, with tens of millions of travelers a year, the Southern District of Texas is always ready to prosecute those that endanger the safety of passengers."
-
WWW.BUSINESSINSIDER.COM
I was unemployed for 300 days. I often had sleepless nights and panic attacks because of it.

- Mekela Watt is a 29-year-old in Bermuda Dunes, California, who was laid off in April 2024.
- For 300 days, she was unemployed, worried about how she would pay for food, rent, and bills.
- She has just started a new job and shares what she most looks forward to.

This as-told-to essay is based on a conversation with Mekela Watt. It has been edited for length and clarity.

I had worked as a temp client services associate at a global music company for nearly two years when I got laid off. Since I was a temp, I didn't get any severance or benefits, but in California, temp workers qualify for six months of unemployment.

That first month of unemployment was a huge relief. I had hated that job; just logging on was triggering. But by the second month, although I was glad not to be working a job I hated, I started worrying about how I would pay my bills.

I was unemployed for 300 days before finally securing a job in February 2025 as an Administrative Coordinator. I'm looking forward to these things with my new job.

Not worrying about bills

I have multiple health issues, and payments for those issues I've had to ignore over the past 10 months. For example, right before I was laid off, I had an MRI that cost $500, and I haven't been able to pay that bill yet.

Rent was another expense we had to worry about. I thought we would be evicted from our house several times, especially when my husband wasn't working. Living with this fear kept me up at night. If we were evicted, we would have no housing options.

Not having to rely on GoFundMe to stay afloat

In August 2024, after months of unemployment, I set up a GoFundMe. I started it after we received a notice that our rent was going up, and I wasn't sure how we would manage. It was embarrassing and a little defeating to set it up, but we had to survive. We received a little over $9,000, which significantly helped us from August to November.

I've never lived paycheck to paycheck, so taking out most of my savings to pay for bills has been worrying. Our financial padding kept getting thinner and thinner. I knew that we were close to having nothing to fall back on.

Not applying to jobs

Even though I had done everything people told me to do when applying for jobs, spending hours tailoring my résumé for applications and trying to form a personal connection, I was still getting endless rejection letters. It made me question my worth and ability. I remember wondering if I was any good and asking myself if I was fired because of the quality of my work.

I can keep my side hustle, and my husband doesn't have to work overtime

For years prior, I had resold pre-loved clothing. It was a fun little side hustle. When I tried to scale it to full-time work while I was out of a job, I realized it wasn't sustainable. It was no longer fun. I can keep doing it for fun now that I have a paycheck. I'm also excited that my husband doesn't have to work overtime for weeks on end. When our GoFundMe money ran out, my husband worked constantly to pay our bills and was always exhausted.

I'm looking forward to being able to sleep again. For so long, I've fallen asleep only to wake through the night worried about money.

During my unemployment, I received many well-meaning platitudes from people, such as, "This is God's plan. He'll provide." None of it helped. I would have loved someone to tell me my worth wasn't tied to employment status and that surviving unemployment was proof of my resilience.

I cannot wait to be thriving rather than just surviving.