• Adapting To A New Frontier: Why AI Agents Demand Rethinking Fraud Prevention
    www.forbes.com
    The rise of AI agents demands a proactive and adaptable approach to fraud prevention, one that embraces advanced technologies without compromising user experience.
  • Thought-Provoking QR Code Trends And Best Practices
    www.forbes.com
    QR codes are now a widely recognized and effective bridge to digital information, but how can they change the usual customer experience?
  • Microsoft Plans to Urge Trump to Ease Restrictions on Chip Exports
    techreport.com
    Key Takeaways:
    • Microsoft is planning to ask Trump to ease the restrictions imposed on chip exports by Biden.
    • It wants at least the allies of the US, such as India, Switzerland, and Israel, to be able to freely import US chips.
    • However, as per reports, the Trump administration has been planning to strengthen these restrictions, so a plea like this might not stand much of a chance.
    Microsoft has decided to urge President Donald Trump to ease the restrictions imposed on chip exports by the previous Joe Biden administration. The restrictions were put in place during the final days of Biden's tenure. Microsoft feels that at least the allies of the US should be spared these restrictions; those countries include India, Israel, and Switzerland.
    Why Did Joe Biden Restrict Chip Exports?
    Joe Biden decided to impose more restrictions on the export of AI chips, especially those created by Nvidia. The initial purpose was to keep advanced AI computing power within the US, with a special focus on ensuring it doesn't end up in China's hands. However, the restrictions soon expanded to other countries as well, including some of the closest allies of the US.
    Biden thought the rules would keep the US the superpower of the AI race, but what he did not consider is that, sooner or later, other countries would figure out how to make equally good chips. In that case, all the buyers the US had would turn to those countries. This is exactly what Microsoft is worried about: by restricting exports, US chipmakers are missing out on some of the biggest international markets, while China picks up the lost business right from under the US's nose.
    "Left unchanged, the Biden rule will give China a strategic advantage in spreading over time its own AI technology, echoing its rapid ascent in 5G telecommunications a decade ago," Microsoft said.
    However, according to the Wall Street Journal, which first published this news, the Trump administration is actually planning to strengthen these restrictions, so whether a plea to ease them will work is hard to say.
    Should the US Really Restrict Chip Exports?
    The biggest reason the US started restricting chip exports was to keep such advanced technology out of China's grasp. However, as mentioned earlier, China is an equally adept country when it comes to tech. If the US won't provide it with chips, it's capable of creating its own.
    Look at DeepSeek, for instance. US AI startup OpenAI was one of the first companies to popularize the idea of generative AI, but within just a couple of years, China managed to create something arguably better, and at a fraction of the cost. Similarly, China might soon reach the US's chip prowess. The only ones that will actually lose in the process are the US chip companies, which will now have a limited customer base.
  • Steam survey sees a shake-up as a new top graphics card is revealed
    www.techspot.com
    What just happened? There's a new top graphics card on the Steam survey. After seeing a comparatively huge 4% increase in users, the RTX 4060 has replaced the RTX 3060 as the most popular GPU among participants of Valve's survey. There was also an unexpected 5% jump in the number of people using Intel CPUs, which points to February being another one of those months where the results were anomalous.
    February saw the user share for the RTX 4060 and RTX 4060 Ti jump by 3.97% and 3.11%, respectively. That's a huge increase compared to the less-than-1% changes we usually see each month. The rest of the top 12 performers of the month are made up of variants of RTX xx60 and RTX xx70 cards from the Lovelace, Ampere, and Turing generations.
    [Chart: Best-performing GPUs among Steam survey participants during February]
    The RTX 4060 has been catching up to the RTX 3060 for a while now, so it's not too surprising that it has taken the lead, but leapfrogging its predecessor with a 4% increase is unusual. As with the top performers, the most popular cards are made up of xx60 and xx70 GPUs. February also saw the overall number of Nvidia GPUs on the table increase: Team Green now accounts for 83% of products on the list, AMD has 11.5%, and Intel has 5.2%.
    [Chart: Most popular GPUs among Steam survey participants during February]
    Another strange result was in the CPU section. AMD has spent months eroding Intel's lead, with Team Red hitting a record 36.19% share in January. But February saw AMD fall 5% as Intel rose by the same amount, a contrast to what we've seen in the retail space this year.
    In further evidence that this is one of those weird Steam survey months, Windows 10, which had fallen below Windows 11 as the most-used OS among participants, suddenly retook the top spot after its share skyrocketed by 10.5% as Windows 11 dropped just over 9%. Windows 10 reaches its end-of-support date on October 14, 2025. According to Statcounter, its global user share has dropped over the last two months, from 62.7% in December to 58.7% in February, while Windows 11 has seen its share climb.
    Elsewhere on Valve's survey, 32GB suddenly became the most popular amount of system RAM following a 13.7% gain, while the share of those using 16GB fell by 8%. There's also a new most-popular language, Simplified Chinese, which saw its usage go up 20% to take a 50% overall share as English fell 10%.
    There was similar strangeness in the survey results back in October 2023, with unusually large changes in a lot of categories, including Chinese going up almost 14%. Things returned to normal a month later, so March's survey could look very different.
  • Former Intel CEO has a radical solution for the company: Fire the board and rehire Pat Gelsinger
    www.techspot.com
    A hot potato: Craig Barrett is firing shots at Intel's board over its proposal to break the company up into multiple smaller pieces and sell parts of the business to TSMC. The former Intel CEO called it the "dumbest idea around," one that would squander the "accomplishments" made under Pat Gelsinger's leadership.
    Barrett, who ran Intel from 1998 to 2005, didn't mince words in his opinion piece published in Fortune, where he expressed a starkly different take. He says the only viable path forward for the company is to stay unified and double down on its latest 18A process node and imaging technologies like high-NA EUV lithography.
    Perhaps even more radically, Barrett contends that Gelsinger should be brought back. That's because, according to Barrett, it was under the ousted CEO that Intel finally regained technical parity with TSMC at the 2nm node after years of stagnation.
    "Pat Gelsinger did a great job at resuscitating the technology development team," Barrett wrote, highlighting Intel's lead in novel areas like backside power delivery in addition to the 18A process itself. He added that a better move than simply breaking the company apart might be to fire the board and rehire Gelsinger to "finish the job he has aptly handled over the past few years."
    The critique pulls no punches against the "well-meaning but off target" current Intel board members. He sarcastically described them as "two academics and two former government bureaucrats, just the type of folks you want dictating strategy in the ruggedly competitive semiconductor industry."
    Barrett went so far as to place the blame for Intel's poor performance squarely on the shoulders of the board members, saying "they bear ultimate responsibility for what has happened to Intel over the last decade."
    Where Intel faltered in the past, per Barrett, was its outdated fabrication technologies. But now, with 18A bringing Intel's foundry operations up to speed, a split would only "introduce complications" rather than solve anything. Instead, he advises Intel to focus on "good customer service, fair pricing, guaranteed capacity, and a clear separation of chip designers from their foundry customers."
    While he opposes breaking Intel up entirely, Barrett does support splitting the company into a design firm and a separate foundry, as long as the foundry is not sold.
    Barrett signs off by noting that his criticism of the breakup plan stems from understanding "the intricacies of the semiconductor industry." He derides the plan as a "simplistic solution" that ignores just how difficult and time-consuming it is to develop and ramp leading-edge manufacturing tech.
    "It takes years to develop a new semiconductor manufacturing technology and ramp it into volume production. Intel is about to regain its leadership in this area, and the dumbest idea around is to stall that from happening by slicing the company into pieces," he declared.
  • See the first images of the Blue Ghost lander on the surface of the moon
    www.digitaltrends.com
    With the arrival of the Blue Ghost lander on the moon this weekend, get ready for an influx of stunning new images from our planet's natural satellite. The mission, from Firefly Aerospace, touched down in the moon's Mare Crisium region yesterday, Sunday, March 2, and the company has already shared the first images captured by the lander from its new home.
    As well as a striking image showing the shadow of the lander on the moon's surface, another image shows the lander on the moon with the Earth visible in the night sky.
    This image shows the Moon's surface, Earth on the horizon, and Blue Ghost's top deck with its solar panel, X-band antenna (left), and LEXI payload (right) in view. (Firefly Aerospace)
    The image also shows two of the lander's instruments: the X-band antenna for sending data back to Earth, and the Lunar Environment Heliospheric X-ray Imager, or LEXI, telescope. LEXI will use X-rays to study how solar winds interact with the Earth's magnetic field, and it is one of several NASA instruments on board the lander.
    Regarding the landing, NASA acting Administrator Janet Petro said in a statement: "This incredible achievement demonstrates how NASA and American companies are leading the way in space exploration for the benefit of all. We have already learned many lessons, and the technological and science demonstrations onboard Firefly's Blue Ghost Mission 1 will improve our ability to not only discover more science, but to ensure the safety of our spacecraft instruments for future human exploration, both in the short term and long term."
    A further image was also released, showing a top-down view of the surface with the lander's thrusters visible as well.
    The image shows the Moon's surface and a top-down view of the lander's RCS thrusters (center), with sun glare on the right side. (Firefly Aerospace)
    Deployment of the lander's instruments has already begun, and today Firefly announced that the X-band antenna has been fully deployed. Compared to the lander's S-band antennae, which are used to send lower-quality images, the X-band antenna will allow higher-quality images, science data, and even video to be sent back from the surface.
    "The science and technology we send to the Moon now helps prepare the way for future NASA exploration and long-term human presence to inspire the world for generations to come," said Nicky Fox, NASA's associate administrator for science. "We're sending these payloads by working with American companies, which supports a growing lunar economy."
  • Nvidia's sub-$350 GPU is now the most popular card on Steam
    www.digitaltrends.com
    Nvidia's RTX 4060 has officially become the most widely used graphics card among gamers on Steam, thanks to its affordable price and solid performance for 1080p gaming. According to the latest Steam Hardware and Software Survey, the budget-friendly GPU has steadily gained traction since its mid-2023 launch, appealing to casual gamers, esports players, and budget-conscious PC builders.
    For years, older budget GPUs like the GTX 1650 and RTX 3060 dominated Steam's charts. However, the RTX 4060 has now surpassed both, securing the top position with an 8.57% market share in February 2025. Its rise can be attributed to competitive pricing (around $300 to $350), low power consumption, and modern gaming features like DLSS 3 and ray tracing support.
    Compared to its predecessor, the RTX 3060, the RTX 4060 offers improvements in ray tracing, DLSS 3 frame generation, and overall efficiency. While some criticized its 8GB of VRAM and narrower memory bus, it remains a solid choice for 1080p gaming, which aligns with the majority of Steam users' setups. At the time of writing, the RTX 4060 is available anywhere from $300 to $350, similar to the RTX 3060.
    The RTX 3060, previously a dominant choice, now holds 6.87% of the market, reflecting a 1.67% increase from the previous month. The RTX 4060 Ti has also seen significant growth, rising by 3.11% to reach a 6.56% share. Similarly, the RTX 4070 experienced a 2.54% increase, bringing its total to 5.43%.
    The latest survey results highlight Nvidia's overwhelming control of the PC gaming GPU market. The company occupies nearly all of the top spots, with AMD and Intel struggling to make significant gains in the consumer segment. Even as Nvidia moves forward with its RTX 50-series launch later this year, the affordability and accessibility of the RTX 4060 keep it relevant for budget-conscious gamers.
    As newer graphics cards hit the market, including the RTX 50-series and AMD's Radeon RX 9000 range, it will be interesting to see if the RTX 4060 can maintain its lead or if another mid-range option will dethrone it in the coming months.
  • Google's New Tech Means Video Calls May Not Be the Death of Us After All
    www.wsj.com
    Google and HP's videoconferencing platform, Project Starline, aims to make virtual meetings feel more like in-person interactions. (Illustration: Thomas R. Lechleiter)
    Google and HP are scheduled to release this year a 3D video communications platform that works without requiring users to wear glasses or a headset, an effort to infuse virtual meetings with a greater sense that people are together in the same space.
    Video calls famously turned heel in the past few years, transforming from a panacea of the early pandemic into a soul-sapping burden for workers. Alphabet's Google and HP think Project Starline is a breakthrough sufficient to take virtual communications to the next level. And based on a shockingly visceral remote conversation I just had at HP's headquarters in Palo Alto, Calif., I'd say they are on to something.
  • AI versus the brain and the race for general intelligence
    arstechnica.com
    We already have an example of general intelligence, and it doesn't look like AI. By John Timmer, Mar 3, 2025.
    There's no question that AI systems have accomplished some impressive feats, mastering games, writing text, and generating convincing images and video. That's gotten some people talking about the possibility that we're on the cusp of AGI, or artificial general intelligence. While some of this is marketing fanfare, enough people in the field are taking the idea seriously that it warrants a closer look.
    Many arguments come down to the question of how AGI is defined, which people in the field can't seem to agree upon. This contributes to estimates of its advent that range from "it's practically here" to "we'll never achieve it." Given that range, it's impossible to provide any sort of informed perspective on how close we are.
    But we do have an existing example of AGI without the "A": the intelligence provided by the animal brain, particularly the human one. And one thing is clear: the systems being touted as evidence that AGI is just around the corner do not work at all like the brain does. That may not be a fatal flaw, or even a flaw at all. It's entirely possible that there's more than one way to reach intelligence, depending on how it's defined. But at least some of the differences are likely to be functionally significant, and the fact that AI is taking a very different route from the one working example we have is likely to be meaningful.
    With all that in mind, let's look at some of the things the brain does that current AI systems can't.
    Defining AGI might help
    Artificial general intelligence hasn't really been defined. Those who argue that it's imminent are either vague about what they expect the first AGI systems to be capable of or simply define it as the ability to dramatically exceed human performance at a limited number of tasks. Predictions of AGI's arrival in the intermediate term tend to focus on AI systems demonstrating specific behaviors that seem human-like. The further one goes out on the timeline, the greater the emphasis on the "G" of AGI and its implication of systems that are far less specialized.
    But most of these predictions are coming from people working at companies with a commercial interest in AI. It was notable that none of the researchers we talked to for this article were willing to offer a definition of AGI. They were, however, willing to point out how current systems fall short.
    "I think that AGI would be something that is going to be more robust, more stable, not necessarily smarter in general but more coherent in its abilities," said Ariel Goldstein, a researcher at Hebrew University of Jerusalem. "You'd expect a system that can do X and Y to also be able to do Z and T. Somehow, these systems seem to be more fragmented in a way. To be surprisingly good at one thing and then surprisingly bad at another thing that seems related."
    "I think that's a big distinction, this idea of generalizability," echoed neuroscientist Christa Baker of NC State University. "You can learn how to analyze logic in one sphere, but if you come to a new circumstance, it's not like now you're an idiot."
    Mariano Schain, a Google engineer who has collaborated with Goldstein, focused on the abilities that underlie this generalizability.
    He mentioned both long-term and task-specific memory and the ability to deploy skills developed in one task in different contexts. These are limited to nonexistent in existing AI systems.
    Beyond those specific limits, Baker noted that "there's long been this very human-centric idea of intelligence, that only humans are intelligent." That's fallen away within the scientific community as we've studied more about animal behavior. But there's still a bias to privilege human-like behaviors, such as the human-sounding responses generated by large language models.
    The fruit flies that Baker studies can integrate multiple types of sensory information, control four sets of limbs, navigate complex environments, satisfy their own energy needs, produce new generations of brains, and more. And they do all that with brains that contain under 150,000 neurons, far fewer than current large language models.
    These capabilities are complicated enough that it's not entirely clear how the brain enables them. (If we knew how, it might be possible to engineer artificial systems with similar capacities.) But we do know a fair bit about how brains operate, and there are some very obvious ways that they differ from the artificial systems we've created so far.
    Neurons vs. artificial neurons
    Most current AI systems, including all large language models, are based on what are called neural networks. These were intentionally designed to mimic how some areas of the brain operate, with large numbers of artificial neurons taking an input, modifying it, and then passing the modified information on to another layer of artificial neurons. Each of these artificial neurons can pass the information on to multiple instances in the next layer, with different weights applied to each connection. In turn, each of the artificial neurons in the next layer can receive input from multiple sources in the previous one. After passing through enough layers, the final layer is read and transformed into an output, such as the pixels in an image that correspond to a cat.
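    As a rough, self-contained illustration of that layered arrangement (this sketch is not from the article; the layer sizes, random weights, and input are arbitrary assumptions), a few lines of Python show values being weighted, summed, and passed from one layer of artificial neurons to the next:
```python
# Minimal sketch of a layered neural network: each artificial neuron takes a
# weighted sum of all inputs from the previous layer, adds a bias, applies a
# nonlinearity, and forwards the result to every neuron in the next layer.
# Layer sizes and random values here are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # Weighted sum per neuron, then a ReLU nonlinearity.
    return np.maximum(0.0, weights @ inputs + biases)

# Three layers: 4 inputs -> 8 hidden artificial neurons -> 3 outputs.
w1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
w2, b2 = rng.normal(size=(3, 8)), np.zeros(3)

x = rng.normal(size=4)        # an input vector (e.g., image features)
hidden = layer(x, w1, b1)     # first layer of artificial neurons
output = w2 @ hidden + b2     # final layer is read off as the output
print(output)
```
    Real networks are vastly larger and learn their weights through training, but the basic flow of weighted values passed layer to layer until a final layer is read off as the output is the same.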
    While that system is modeled on the behavior of some structures within the brain, it's a very limited approximation. For one, all artificial neurons are functionally equivalent; there's no specialization. In contrast, real neurons are highly specialized: they use a variety of neurotransmitters and take input from a range of extra-neural inputs like hormones. Some specialize in sending inhibitory signals while others activate the neurons they interact with. Different physical structures allow them to make different numbers of connections.
    In addition, rather than simply forwarding a single value to the next layer, real neurons communicate through an analog series of activity spikes, sending trains of pulses that vary in timing and intensity. This allows for a degree of non-deterministic noise in communications.
    Finally, while organized layers are a feature of a few structures in brains, they're far from the rule. "What we found is it's, at least in the fly, much more interconnected," Baker told Ars. "You can't really identify this strictly hierarchical network."
    With near-complete connection maps of the fly brain becoming available, she told Ars that researchers are "finding lateral connections or feedback projections, or what we call recurrent loops, where we've got neurons that are making a little circle and connectivity patterns. I think those things are probably going to be a lot more widespread than we currently appreciate."
    While we're only beginning to understand the functional consequences of all this complexity, it's safe to say that it allows networks composed of actual neurons far more flexibility in how they process information, a flexibility that may underlie how these neurons get redeployed in a way that these researchers identified as crucial for some form of generalized intelligence.
    But the differences between neural networks and the real-world brains they were modeled on go well beyond the functional differences we've talked about so far. They extend to significant differences in how these functional units are organized.
    The brain isn't monolithic
    The neural networks we've generated so far are largely specialized systems meant to handle a single task. Even the most complicated tasks, like the prediction of protein structures, have typically relied on the interaction of only two or three specialized systems. In contrast, the typical brain has a lot of functional units. Some of these operate by sequentially processing a single set of inputs in something resembling a pipeline. But many others can operate in parallel, in some cases without any input activity going on elsewhere in the brain.
    To give a sense of what this looks like, let's think about what's going on as you read this article. Doing so requires systems that handle motor control, which keep your head and eyes focused on the screen. Part of this system operates via feedback from the neurons that are processing the read material, causing small eye movements that help your eyes move across individual sentences and between lines.
    Separately, there's a part of your brain devoted to telling the visual system what not to pay attention to, like the icon showing an ever-growing number of unread emails. Those of us who can read a webpage without even noticing the ads on it presumably have a very well-developed system in place for ignoring things. Reading this article may also mean you're engaging the systems that handle other senses, getting you to ignore things like the noise of your heating system coming on while remaining alert for things that might signify threats, like an unexplained sound in the next room.
    The input generated by the visual system then needs to be processed, from individual character recognition up to the identification of words and sentences, processes that involve systems in areas of the brain involved in both visual processing and language. Again, this is an iterative process, where building meaning from a sentence may require many eye movements to scan back and forth across it, improving reading comprehension and requiring many of these systems to communicate among themselves.
    As meaning gets extracted from a sentence, other parts of the brain integrate it with information obtained in earlier sentences, which tends to engage yet another area of the brain, one that handles a short-term memory system called working memory. Meanwhile, other systems will be searching long-term memory, finding related material that can help the brain place the new information within the context of what it already knows.
    Still other specialized brain areas are checking for things like whether there's any emotional content to the material you're reading. All of these different areas are engaged without you being consciously aware of the need for them.
    In contrast, something like ChatGPT, despite having a lot of artificial neurons, is monolithic: no specialized structures are allocated before training starts. That's in sharp contrast to a brain. "The brain does not start out as a bag of neurons and then, as a baby, it needs to make sense of the world and then determine what connections to make," Baker noted. "There are already a lot of constraints and specifics that are already set up." Even in cases where it's not possible to see any physical distinction between cells specialized for different functions, Baker noted that we can often find differences in which genes are active.
    By contrast, pre-planned modularity is relatively new to the AI world. In software development, "this concept of modularity is well established, so we have the whole methodology around it, how to manage it," Schain said. "It's really an aspect that is important for maybe achieving AI systems that can then operate similarly to the human brain." There are a few cases where developers have enforced modularity on systems, but Goldstein said these systems need to be trained with all the modules in place to see any gain in performance.
    None of this is saying that a modular system can't arise within a neural network as a result of its training. But so far, we have very limited evidence that they do. And since we mostly deploy each system for a very limited number of tasks, there's no reason to think modularity will be valuable.
    There is some reason to believe that this modularity is key to the brain's incredible flexibility. The region that recognizes emotion-evoking content in written text can also recognize it in music and images, for example. But the evidence here is mixed. There are some clear instances where a single brain region handles related tasks, but that's not consistently the case; Baker noted that, "when you're talking humans, there are parts of the brain that are dedicated to understanding speech, and there are different areas that are involved in producing speech."
    This sort of re-use would also provide an advantage in terms of learning, since behaviors developed in one context could potentially be deployed in others. But as we'll see, the differences between brains and AI when it comes to learning are far more comprehensive than that.
    The brain is constantly training
    Current AIs generally have two states: training and deployment. Training is where the AI learns its behavior; deployment is where that behavior is put to use. This isn't absolute, as the behavior can be tweaked in response to things learned during deployment, like finding out it recommends eating a rock daily. But for the most part, once the weights among the connections of a neural network are determined through training, they're retained.
    That may be starting to change a bit, Schain said. "There is now maybe a shift in similarity where AI systems are using more and more what they call the test-time compute, where at inference time you do much more than before, kind of a parallel to how the human brain operates," he told Ars. But it's still the case that neural networks are essentially useless without an extended training period.
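    To make that train-then-freeze pattern concrete, here is a toy sketch (a made-up model and numbers, not anything from the article) contrasting a weight that is fixed once training ends with one that keeps adjusting from each new error signal, the kind of learning-while-doing the rest of this section describes in brains:
```python
# Toy contrast between "train then deploy" and "learn while doing",
# using a one-parameter linear model. All values are illustrative.
true_slope = 3.0
data = [(x, true_slope * x) for x in range(1, 6)]

# Train then deploy: fit once, then the weight is frozen.
w = 0.0
for _ in range(50):                      # training phase
    for x, y in data:
        w += 0.01 * (y - w * x) * x      # gradient step on squared error
frozen_prediction = w * 10               # deployment: w no longer changes

# Learn while doing: keep updating from each prediction's error signal,
# loosely analogous to the cerebellum minimizing an error signal.
w_online = 0.0
for x, y in data * 10:                   # a stream of new experiences
    error = y - w_online * x             # compare outcome to intention
    w_online += 0.01 * error * x         # adjust immediately for next time

print(frozen_prediction, w_online * 10)
```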
    In contrast, a brain doesn't have distinct learning and active states; it's constantly in both modes. In many cases, the brain learns while doing. Baker described that in terms of learning to take jump shots: "Once you have made your movement, the ball has left your hand, it's going to land somewhere. So that visual signal, that comparison of where it landed versus where you wanted it to go, is what we call an error signal. That's detected by the cerebellum, and its goal is to minimize that error signal. So the next time you do it, the brain is trying to compensate for what you did last time."
    It makes for very different learning curves. An AI is typically not very useful until it has had a substantial amount of training. In contrast, a human can often pick up basic competence in a very short amount of time (and without massive energy use). "Even if you're put into a situation where you've never been before, you can still figure it out," Baker said. "If you see a new object, you don't have to be trained on that a thousand times to know how to use it. A lot of the time, [if] you see it one time, you can make predictions."
    As a result, while an AI system with sufficient training may ultimately outperform the human, the human will typically reach a high level of performance faster. And unlike an AI, a human's performance doesn't remain static; incremental improvements and innovative approaches are both still possible. This also allows humans to adjust to changed circumstances more readily. An AI trained on the body of written material up until 2020 might struggle to comprehend teen-speak in 2030; humans could at least potentially adjust to the shifts in language. (Though maybe an AI trained to respond to confusing phrasing with "get off my lawn" would be indistinguishable.)
    Finally, since the brain is a flexible learning device, the lessons learned from one skill can be applied to related skills. So the ability to recognize tones and read sheet music can help with the mastery of multiple musical instruments. Chemistry and cooking share overlapping skill sets. And when it comes to schooling, learning how to learn can be used to master a wide range of topics.
    In contrast, it's essentially impossible to use an AI model trained on one topic for much else. The biggest exceptions are large language models, which seem to be able to solve problems on a wide variety of topics if they're presented as text. But here, there's still a dependence on sufficient examples of similar problems appearing in the body of text the system was trained on. To give an example, something like ChatGPT can seem able to solve math problems, but it's best at solving things that were discussed in its training materials; giving it something new will generally cause it to stumble.
    Déjà vu
    For Schain, however, the biggest difference between AI and biology is in terms of memory. For many AIs, "memory" is indistinguishable from the computational resources that allow them to perform a task and was formed during training. For the large language models, it includes both the weights of connections learned then and a narrow "context window" that encompasses any recent exchanges with a single user. In contrast, biological systems have a lifetime of memories to rely on.
    "For AI, it's very basic: It's like the memory is in the weights [of connections] or in the context. But with a human brain, it's a much more sophisticated mechanism, still to be uncovered. It's more distributed. There is the short term and long term, and it has to do a lot with different timescales.
    Memory for the last second, a minute and a day or a year or years, and they all may be relevant."
    This lifetime of memories can be key to making intelligence general. It helps us recognize the possibilities and limits of drawing analogies between different circumstances or applying things learned in one context versus another. It provides us with insights that let us solve problems we've never confronted before. And, of course, it also ensures that the horrible bit of pop music you were exposed to in your teens remains an earworm well into your 80s.
    The differences between how brains and AIs handle memory, however, are very hard to describe. AIs don't really have a distinct memory, while the way the brain uses memory for any task more sophisticated than navigating a maze is generally so poorly understood that it's difficult to discuss at all. All we can really say is that there are clear differences there.
    Facing limits
    It's difficult to think about AI without recognizing the enormous energy and computational resources involved in training one. And in this case, it's potentially relevant. Brains have evolved under enormous energy constraints and continue to operate using well under the energy that a daily diet can provide. That has forced biology to figure out ways to optimize its resources and get the most out of the resources it does commit.
    In contrast, the story of recent developments in AI is largely one of throwing more resources at them. And plans for the future seem (so far at least) to involve more of this, including larger training data sets and ever more artificial neurons and connections among them. All of this comes at a time when the best current AIs already use three orders of magnitude more neurons than we'd find in a fly's brain and have nowhere near the fly's general capabilities.
    It remains possible that there is more than one route to those general capabilities and that some offshoot of today's AI systems will eventually find a different route. But if it turns out that we have to bring our computerized systems closer to biology to get there, we'll run into a serious roadblock: we don't fully understand the biology yet.
    "I guess I am not optimistic that any kind of artificial neural network will ever be able to achieve the same plasticity, the same generalizability, the same flexibility that a human brain has," Baker said. "That's just because we don't even know how it gets it; we don't know how that arises. So how do you build that into a system?"
  • What To Expect From the Intersection of AI and Biometrics
    www.informationweek.com
    You press your finger to your phone screen to access your bank account. You present your face to the camera to pass through security at the airport. Biometric authentication is a regular part of life, and AI has been working behind the scenes in our lives for decades, too. Now, generative AI's increasing capabilities have thrust it to the forefront of nearly every conversation about technology. The seeming ubiquity of AI and biometrics suggests an inevitable convergence, and we can already see this happening.
    As biometrics and AI are used today, defining these technologies and their relationship to one another is not exactly cut and dried. The Biometrics Institute, which promotes ethical use of biometrics, asked its members about the relationship between the two technologies. The answers were conflicting. The organization published a paper, "Members' Viewpoints: The Relationship between Biometrics and Artificial Intelligence," sharing how some people view AI and biometrics as inextricably linked. Some say that biometrics are an adjunct to AI technology and, as a consequence, are always an integral part of it, according to the paper. On the other side of the debate, members argue that while the two technologies can be used together in many ways, some biometric applications exist quite separately from AI.
    However you define AI and biometrics -- separately and together -- there are big questions for companies, governments, and individuals about the benefits, the risks, and responsible application.
    AI and More Powerful Biometrics
    With AI tools readily available, threat actors are upping their game. "AI-based attacks that we're seeing are right now primarily focused on how to compromise authentication systems through traditional mechanisms but made better by AI," Chace Hatcher, senior vice president of technology and strategy at cybersecurity company Telos, tells InformationWeek.
    It is harder to replicate a biometric marker than it is to compromise a password, and multi-modal biometric systems can enhance security at a time when attackers are always on the prowl for vulnerabilities and attack vectors. "We can have a risk-based authentication system by layering the multiple biometric modalities accordingly," explains Geeta Gupta, head of AI and data science at Wink, a biometric authentication technology company.
    The onslaught of AI-based attacks could drive more adoption of biometric security; its convenience is certainly another factor. And AI is in many ways powering stronger biometric capabilities. AI's ability to analyze complex patterns is a clear boon in the biometrics space. It can pinpoint anomalies and recognize trends in vast swaths of biometric data. Perhaps humans could do the same, but not nearly as quickly. Plus, AI systems can learn and improve over time, making fraud detection and prevention better. "The underlying performance of the systems will get better," says Hatcher. "The actual matching algorithms; they [will] work better: lower false rejections, lower false acceptance rates, less inherent bias in systems."
    Privacy and Security Concerns
    Biometrics comes with obvious privacy concerns. When you hand over biometric data, you hand over immutable information unique to you. You cannot change your fingerprint or iris like you could a password. And as more biometric data is gathered -- and AI models have an insatiable need for data -- the risk of its compromise grows. "If the data exists, and there's more of it, de facto it's more likely to be stolen," says Hatcher.
    That's effectively true of any piece of data relevant to anything in the digital world. Threat actors can use AI-based attacks to go after biometric data with the goal of profiting from its sale. And then there is the concern that AI can be used to manipulate and mimic genuine biometric data.
    Biometrics makes it harder to commit fraud, but the battle between cyber attackers and cyber defenders is never done. We have already seen examples of successful deepfake attacks. AI spoofing attacks aim to fool biometric systems into a false match and even use fake biometric data to pass security system checks, according to Biometric Update. "You could make a sophisticated AI-based model of a known real human being now and fool some biometric systems out there with it," says Hatcher. "I think anybody in the industry, particularly in facial recognition and voice recognition, is very concerned about it."
    Of course, enterprises are well aware of these threats, and there are ways to address them. Unsurprisingly, a fight-fire-with-fire approach is at the forefront: AI can be used to detect AI-based attacks on biometric systems. As an example, AI models can undergo adversarial training, Gupta shares: feed the model data that would be used in attacks against it to make it more resilient to real-life attempts.
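    As a rough sketch of the adversarial-training idea Gupta describes (a generic illustration, not code from Wink or any vendor mentioned here; the model, data, and perturbation size are placeholder assumptions), one common pattern perturbs each training batch in the direction that most increases the loss and then trains on those attacked inputs:
```python
# Minimal FGSM-style adversarial training sketch in PyTorch.
# Model, data, and epsilon are placeholder assumptions for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
epsilon = 0.1  # strength of the simulated attack

for _ in range(100):
    x = torch.randn(8, 16)                  # stand-in for biometric feature vectors
    y = torch.randint(0, 2, (8,))           # genuine (1) vs. spoof (0) labels
    x.requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()                         # gradient shows how an attacker would
    x_adv = x + epsilon * x.grad.sign()     # nudge inputs to fool the model
    opt.zero_grad()
    loss_fn(model(x_adv.detach()), y).backward()  # train on the attacked inputs
    opt.step()
```
    Real systems would use actual biometric features and far more sophisticated attack models, but the fight-fire-with-fire principle is the same.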
    Anti-spoofing techniques can also help thwart attempts to trick biometric systems with manipulated or fake data. "Most of the advanced systems [are] using infrared sensors to map the 3D contours of the face, ensuring that the subject being scanned has a physical depth, unlike a flat photograph," Gupta shares as an example. And the more advanced systems have multiple mechanisms to verify identity and catch threat actors. "In real time, we assess the variables and parameters like the geolocation of the person or the age of the person or any changes to the features of the person in real time, and we can enhance AI algorithms to learn from those changes," says Gupta.
    As organizations contemplate the risks that come with using AI and biometrics, data governance is essential. What data do organizations actually need to collect? How are they using it? How are they storing it? "Organizations shouldn't collect data they don't need, because you are creating a honey pot," says Hatcher. Hatcher also advocates for giving individuals more control over the identity information that is being stored. He hopes to see more tools that are cryptographically secure and embrace zero-knowledge proofs, where individuals can prove their identity without actually handing over their information.
    Ethical Outlook
    Together, biometrics and AI can be a powerful way to combat fraud and verify identities. Over the course of its long history, which predates the technology's sophisticated digital iterations by centuries, biometrics has been used in identity verification, citizenship registration, and criminology. But the capacity to categorize people has troubling possibilities.
    As a historian of biometrics and a postdoctoral scholar in the social and ethical responsibilities of computing at MIT's College of Computing, Michelle Spektor examines that history and its social impact. "The same biometric data used to just simply identify someone, verify that they are who they say they are, has always had the capacity to be used to classify them or to single out people for discrimination, to make inferences about their personality, about their states of mind, to classify based on race, gender, age, disability, to infer criminality," says Spektor.
    The question of bias is certainly a prominent one in the AI space. What happens when AI interprets biometric data and makes predictions about humans? The risks of bias and discriminatory outcomes are apparent.
    It is hard to imagine a world without biometrics and AI in it; the technologies are deeply ingrained in our day-to-day activities. But Spektor argues for a more mindful approach over simply assuming the use of this technology is always inevitable. "Sometimes the question is also: Should we create or implement this technology at all? Is this the right context in which to use it?" says Spektor.
    When AI and biometrics are used, enterprises and governments have a responsibility to do so securely and ethically. Many industry organizations have frameworks and principles to guide the secure development and maintenance of systems that deploy AI and biometrics, but as those technologies evolve, the frameworks will have to as well. "We don't know what kinds of capabilities will exist in the future, the kinds of things that can be done with that data," says Spektor. "Having your, let's say, facial data stored somewhere, the realm of possibilities of what that could be used for has changed a lot over the last 10 years, 15 years, and will continue to change."