• I finally found a Windows mini PC with enough power to attract my attention - and it's on sale
    www.zdnet.com
    The Minisforum AI370 EliteMini packs high-end hardware and up to 4TB of storage into a sleek, compact design that is perfect for even the smallest desks.
  • This 2600W power station is more than $800 off right now - and I don't expect it to last
    www.zdnet.com
    A massive 47% has been slashed off the price of the Bluetti Elite 200 V2, making it a fantastic steal for power users.
  • The Emergence Of AI Operating Systems
    www.forbes.com
    AI OS is positioned to become the next major advancement in computing, moving beyond rule-based execution to introduce a learning-based approach to system management.
  • Yes, Snow White Is Bombing At The Box Office
    www.forbes.com
    Snow White did not impress critics with its 44% score on Rotten Tomatoes, but even with a higher 74% audience score, the movie is in fact currently bombing at the box office, at least according to these initial figures. Snow White brought in $87 million in its opening weekend globally, with just $43 million domestically. That may not mean much out of context, but compared to the slew of other recent live-action Disney adaptations, that's definitely in bomb territory, at least for now. Here's the list over the same opening-weekend period domestically:

      The Lion King - $191 million
      Beauty and the Beast - $174 million
      Alice in Wonderland - $116 million
      The Jungle Book - $103 million
      The Little Mermaid - $95 million
      Aladdin - $91 million
      Maleficent - $69 million
      Cinderella - $67 million
      Snow White and the Huntsman - $56 million
      Dumbo - $46 million
      Snow White - $43 million
      Mufasa: The Lion King - $35 million

    There are very few ways to spin that. Snow White is close to the worst-performing live-action Disney adaptation ever, not doing even half, or in some cases a quarter, as well as the entries higher up the list, and it reviewed more poorly than essentially all of them, which is no doubt a contributor here.

    The single bright spot of the film is said to be Rachel Zegler's performance and her singing, but villain Gal Gadot is roundly criticized, as are the overall structure, script, and directing of the film (and those horrifying CGI dwarves). Note that even the unconventional adaptation, Snow White and the Huntsman, opened better than this.

    There is one thing to note, however, that could be a glimmer of hope. The lowest film on this list, Mufasa: The Lion King, opened poorly but, despite being an original production (which probably hurt it at the outset), snowballed over time into eventually making $717 million worldwide, a huge hit. But that feels like an anomaly rather than something that's going to happen with Snow White.

    The movie is bad, and it's adapting a 1937 film that isn't one of Disney's most popular, despite the wide reach of the character. That's not a recipe for success, and it has not found any so far. As for Zegler, the only good aspect of the film, from here she will go on to play Eva Perón in Evita in London's West End, a prestigious role after her Broadway debut in Romeo + Juliet (which I had front row seats for, and she was fantastic). But this movie? Nope, this is unequivocally a bad situation.
  • AI supercharges DNA data retrieval, making it 3,200 times faster
    www.techspot.com
    Forward-looking: Researchers around the world are embracing DNA-based storage right now. Mixing digital data and biology could bridge the best of both worlds, though a few challenges are still slowing market and industry adoption.

    Visionary solutions using DNA sequencing have been hailed as the future of the storage world for a few years now. Biology seems to have solved the data encoding problem a few billion years ago, so we could learn a thing or two from nature while we prepare to expand the world's digital realm to 180 zettabytes (180 billion terabytes) by the end of 2025.

    Israeli researchers say they have found a way to significantly improve the data retrieval process, which is one of the biggest issues DNA storage technology is facing right now. A team at the Technion - Israel Institute of Technology used a specifically trained AI model to speed up data recovery from DNA strands by 3,200 times. Needless to say, the process is still much slower than "modern" storage technologies available on the market.

    The AI tech in question, known as DNAformer, is based on a transformer model trained by Technion researchers on synthetic data. The data simulator that fed DNAformer was also created at Technion. The model can reconstruct accurate DNA sequences from error-prone copies and can boost data integrity even further thanks to a custom error-correcting algorithm designed to work well with DNA.

    DNAformer is much faster at retrieving data than previously unveiled methods. The AI model can read 100 megabytes 3,200 times faster than the most accurate existing method, and can seemingly do so with no loss of data. Accuracy is improved by "up to" 40 percent as well, which can further decrease the total retrieval time.

    The Israeli researchers tested DNAformer's capabilities on a tiny 3.1-megabyte data set, which included a color still image, a 24-second audio clip, a written piece about DNA storage, and some random data. The latter was useful to show how the model behaves when dealing with encrypted or even compressed digital data. The team achieved a data rate of 1.6 bits per (DNA) base in a high-noise regime, the official study says, cutting the time needed to read the data back from several days to just 10 minutes.

    The Technion team said DNAformer will be further developed and tailored to different data storage needs. The technology can easily scale and adapt to various scenarios. The researchers are already thinking about "market demands" and future improvements in DNA sequencing to improve their AI technology.
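    To make the encoding side of DNA storage concrete (a naive illustration, not Technion's DNAformer pipeline, which relies on a trained transformer plus a custom error-correcting code), a minimal scheme maps every two bits to one of the four nucleotides, the theoretical 2-bits-per-base ceiling; the reported 1.6 bits per base reflects the redundancy needed to survive synthesis and sequencing errors. A rough Python sketch:

    ```python
    # Naive 2-bits-per-base DNA encoding/decoding sketch (illustrative only).
    # Real systems add error-correcting redundancy, avoid difficult sequences,
    # and reconstruct data from noisy reads -- none of that is modeled here.

    BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
    BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

    def encode(data: bytes) -> str:
        bits = "".join(f"{byte:08b}" for byte in data)
        return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

    def decode(strand: str) -> bytes:
        bits = "".join(BASE_TO_BITS[base] for base in strand)
        return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

    if __name__ == "__main__":
        payload = b"DNA"
        strand = encode(payload)      # 12 bases, 4 per byte
        assert decode(strand) == payload
        print(strand)
    ```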
  • Perplexity still wants to buy TikTok, vows to rebuild algorithm and add community notes
    www.techspot.com
    In a nutshell: AI startup Perplexity is once again proposing the unlikely scenario that it takes over TikTok's US business operations. If the company were able to pull off this feat, it promises to make some major changes to the app, including rebuilding the algorithm, adding community notes, and open-sourcing the recommendation system.

    Perplexity first proposed a merger with TikTok's US operations in January. The plan would see the US government hold 50% ownership of the company but have no influence over TikTok's day-to-day operations and not be granted a seat on the merged entity's board.

    ByteDance has until April 5 to find a US buyer for TikTok's operations in the US, so Perplexity is once again putting forward a case for why it would be a good choice.

    The AI firm writes that it is singularly positioned to rebuild the TikTok algorithm without creating a monopoly, combining "world-class technical capabilities with Little Tech independence."

    It adds that any acquisition by a consortium of investors, such as the group that includes YouTube star MrBeast, could effectively keep ByteDance in control of the algorithm, while any acquisition by a competitor would likely create a monopoly in the short-form video space.

    Perplexity writes that "all of society benefits when content feeds are liberated from the manipulations of foreign governments and globalist monopolists."

    Perplexity's pitch includes rebuilding TikTok's algorithm from the ground up in American data centers and with American oversight. It also wants to make the "For You" recommendation system transparent and open source.

    Another proposed change is the introduction of community notes, in which contributors can add context such as fact-checks under a post. The feature has proven so popular on former Twitter platform X that Meta CEO Mark Zuckerberg said he is introducing it to Facebook and Instagram.

    Perplexity writes that the community notes will sit alongside the citations feature used by its own search engine to "turn TikTok into the most neutral and trusted platform in the world."

    The rest of Perplexity's proposal includes upgrading TikTok's AI infrastructure using Nvidia Dynamo technology, enhancing the app's search feature with Perplexity's answer engine, and improving personalization for those who connect their Perplexity and TikTok accounts.

    Exactly where Perplexity would find the money to buy TikTok is unclear. It has been looking to raise funds at an $18 billion valuation, but TikTok's US operations have been valued at up to $50 billion.

    TikTok had until January 19, 2025 to find a US buyer or be banned in the US under legislation introduced in 2024. Donald Trump signed an executive order extending the deadline to April 5, 2025, when he took office, though the President has said he would "probably" extend it again if necessary.
  • Apple AirPods Max finally get lossless audio and analog support
    www.digitaltrends.com
    Apple is about to correct one of the most glaring omissions on its AirPods Max wireless noise-canceling headphones: Starting in April, the headphones will get a firmware update that enables lossless audio via the included USB-C cable at up to 24-bit/48kHz.

    And starting today, Apple is selling a 3.5mm-to-USB-C accessory cable that lets the newest version of the AirPods Max connect to analog audio sources like airplane jacks, something these headphones haven't been able to do since they launched.

    Apple says that in addition to the obvious benefit of being able to finally listen to lossless audio sources (including all lossless tracks on Apple Music) without additional compression, you'll also be able to use Personalized Spatial Audio (with or without head tracking). The company highlights the importance of this feature to creators and musicians: Next month, AirPods Max will become the only headphones that enable musicians to both create and mix in Personalized Spatial Audio with head tracking.

    Unfortunately, it appears that support for lossless audio is limited to just the new, USB-C-equipped version of the AirPods Max, which was announced in October 2024. I've asked Apple about lossless support for the original, Lightning-equipped AirPods Max and will update this post if and when I hear back.

    While the addition of lossless and analog audio are welcome changes, I think Apple still has more work to do. The AirPods Max should have longer battery life, and Apple needs to work out a way to support lossless audio wirelessly, whether via Bluetooth, Wi-Fi, or UWB. Lossless audio is great, but you shouldn't have to be tethered to your computer or phone to experience it.
  • Samsung Galaxy S25 Edge specs continue to leak as launch nears
    www.digitaltrends.com
    Samsung is preparing to launch another flagship model, the Galaxy S25 Edge. This slim phone was originally teased at the launch of the Galaxy S25 in January, with rumors pointing to an April 16 launch date for the new device.

    So far we've seen a range of specs for the phone, but thanks to a reliable leaker, it looks like we're closer to some of the details. According to UniverseIce, posting on Weibo, the Galaxy S25 Edge will come with a titanium alloy frame.

    This isn't the first time that titanium has been suggested: we previously covered news of the colors of the S25 Edge, which are said to be Titanium Icyblue, Titanium Silver, and Titanium Jetblack. You'll notice that these follow the styling of the Galaxy S25 Ultra, which also uses titanium in its construction. While there was some debate about the expected choice of materials (aluminum and ceramic had been hinted at), it now looks like we're set on titanium.

    That makes perfect sense: if you're creating a premium thin and light phone, then titanium is the best material for the job.

    That's not the only detail that UniverseIce has shared, however. The leak also claims that the phone will come with a 2K display, suggesting that it's going to have a resolution of 3120 x 1440 pixels. It's expected to be a 6.7-inch display, so it's probably the same as the Galaxy S25 Plus screen.

    It's thought that Samsung will launch the Galaxy S25 Edge to compete with the anticipated iPhone Air, but use it to trial the design before replacing the regular Galaxy S models with a slimmer build, perhaps in 2026.

    We're expecting this to be a Snapdragon 8 Elite for Galaxy phone. I'd want to see a big vapor chamber in there for cooling, especially as the SD 8 Elite seems to run a little warm, but that might explain why the battery shrinks to 3,786mAh. That's smaller than the Galaxy S25, which has a 4,000mAh battery. It seems that battery life could be the sacrifice we're asked to make for going slim.

    If that 6.7-inch display is the same as the S25 Plus, then it will be a 120Hz AMOLED with 2,600 nits peak brightness. The cameras are expected to be a 200-megapixel main and a 12-megapixel ultrawide.

    The phone is said to cost around $1,400, which sounds pretty expensive and might not see enthusiastic adoption, but it feels like there's still a bit to learn about Samsung's plans to slim down its phones.
  • Ubisoft Shares Surge After Blockbuster Launch of New Assassin's Creed Game
    www.wsj.com
    Shares jumped after Assassin's Creed Shadows garnered 2 million players less than a week after its release, surpassing the launches of Assassin's Creed Origins and Assassin's Creed Odyssey.
  • Can we make AI less power-hungry? These researchers are working on it.
    arstechnica.com
    As demand surges, figuring out the performance of proprietary models is half the battle. By Jacek Krywko, Mar 24, 2025.

    At the beginning of November 2024, the US Federal Energy Regulatory Commission (FERC) rejected Amazon's request to buy an additional 180 megawatts of power directly from the Susquehanna nuclear power plant for a data center located nearby. The rejection rested on the argument that buying power directly, instead of getting it through the grid like everyone else, works against the interests of other users.

    "Demand for power in the US has been flat for nearly 20 years. But now we're seeing load forecasts shooting up. Depending on [what] numbers you want to accept, they're either skyrocketing or they're just rapidly increasing," said Mark Christie, a FERC commissioner.

    Part of the surge in demand comes from data centers, and their increasing thirst for power comes in part from running increasingly sophisticated AI models. As with all world-shaping developments, what set this trend into motion was vision, quite literally.

    The AlexNet moment

    Back in 2012, Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton, AI researchers at the University of Toronto, were busy working on a convolutional neural network (CNN) for the ImageNet ILSVRC, an image-recognition contest. The contest's rules were fairly simple: a team had to build an AI system that could categorize images sourced from a database comprising over a million labeled pictures.

    The task was extremely challenging at the time, so the team figured they needed a really big neural net, way bigger than anything other research teams had attempted. AlexNet, named after the lead researcher, had multiple layers, with over 60 million parameters and 650,000 neurons. The problem with a behemoth like that was how to train it.

    What the team had in their lab were a few Nvidia GTX 580s, each with 3GB of memory. As the researchers wrote in their paper, AlexNet was simply too big to fit on any single GPU they had. So they figured out how to split AlexNet's training phase between two GPUs working in parallel: half of the neurons ran on one GPU, and the other half ran on the other.

    AlexNet won the 2012 competition by a landslide, but the team accomplished something way more profound. The size of AI models was once and for all decoupled from what was possible to do on a single CPU or GPU. The genie was out of the bottle. (The AlexNet source code was recently made available through the Computer History Museum.)

    The balancing act

    After AlexNet, using multiple GPUs to train AI became a no-brainer. Increasingly powerful AIs used tens of GPUs, then hundreds, thousands, and more. But it took some time before this trend started making its presence felt on the grid. According to an Electric Power Research Institute (EPRI) report, the power consumption of data centers was relatively flat between 2010 and 2020. That doesn't mean the demand for data center services was flat, but the improvements in data centers' energy efficiency were sufficient to offset the fact that we were using them more.

    Two key drivers of that efficiency were the increasing adoption of GPU-based computing and improvements in the energy efficiency of those GPUs. "That was really core to why Nvidia was born.
    We paired CPUs with accelerators to drive the efficiency onward," said Dion Harris, head of Data Center Product Marketing at Nvidia. In the 2010-2020 period, Nvidia data center chips became roughly 15 times more efficient, which was enough to keep data center power consumption steady.

    All that changed with the rise of enormous large language transformer models, starting with ChatGPT in 2022. "There was a very big jump when transformers became mainstream," said Mosharaf Chowdhury, a professor at the University of Michigan. (Chowdhury is also at the ML Energy Initiative, a research group focusing on making AI more energy-efficient.)

    Nvidia has kept up its efficiency improvements, with a ten-fold boost between 2020 and today. The company also kept improving chips that were already deployed. "A lot of where this efficiency comes from was software optimization. Only last year, we improved the overall performance of Hopper by about 5x," Harris said. Despite these efficiency gains, based on Lawrence Berkeley National Laboratory estimates, the US saw data center power consumption shoot up from around 76 TWh in 2018 to 176 TWh in 2023.

    The AI lifecycle

    LLMs work with tens of billions of neurons, approaching a number rivaling (and perhaps even surpassing) those in the human brain. GPT-4 is estimated to work with around 100 billion neurons distributed over 100 layers and over 100 trillion parameters that define the strength of connections among the neurons. These parameters are set during training, when the AI is fed huge amounts of data and learns by adjusting these values. That's followed by the inference phase, where it gets busy processing the queries coming in every day.

    The training phase is a gargantuan computational effort: OpenAI supposedly used over 25,000 Nvidia Ampere 100 GPUs running on all cylinders for 100 days. The estimated power consumption is 50 GWh, which is enough to power a medium-sized town for a year. According to numbers released by Google, training accounts for 40 percent of the total AI model power consumption over its lifecycle. The remaining 60 percent is inference, where power consumption figures are less spectacular but add up over time.

    Trimming AI models down

    The increasing power consumption has pushed the computer science community to think about how to keep memory and computing requirements down without sacrificing performance too much. "One way to go about it is reducing the amount of computation," said Jae-Won Chung, a researcher at the University of Michigan and a member of the ML Energy Initiative.

    One of the first things researchers tried was a technique called pruning, which aims to reduce the number of parameters. Yann LeCun, now the chief AI scientist at Meta, proposed this approach back in 1989, terming it (somewhat menacingly) "optimal brain damage." You take a trained model and remove some of its parameters, usually targeting the ones with a value of zero, which add nothing to the overall performance. "You take a large model and distill it into a smaller model, trying to preserve the quality," Chung explained.

    You can also make those remaining parameters leaner with a trick called quantization. Parameters in neural nets are usually represented as single-precision floating-point numbers, each occupying 32 bits of computer memory. "But you can change the format of parameters to a smaller one that reduces the amount of needed memory and makes the computation faster," Chung said.

    Shrinking an individual parameter has a minor effect, but when there are billions of them, it adds up.
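    To make the quantization idea concrete (a generic symmetric INT8 scheme sketched for illustration, not the specific method in Nvidia's toolkit or the ML Energy Initiative's work), the following Python snippet maps 32-bit floating-point weights to 8-bit integers and back, cutting their memory footprint to a quarter at the cost of a small rounding error:

    ```python
    # Minimal post-training quantization sketch: FP32 weights -> INT8 and back.
    # Illustrative only; real frameworks use per-channel scales, calibration data,
    # and quantization-aware training to limit the accuracy loss.
    import numpy as np

    def quantize_int8(weights: np.ndarray):
        scale = np.abs(weights).max() / 127.0        # symmetric range [-127, 127]
        q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
        return q.astype(np.float32) * scale

    if __name__ == "__main__":
        w = np.random.randn(1_000_000).astype(np.float32)  # ~4 MB of FP32 weights
        q, scale = quantize_int8(w)                        # ~1 MB as INT8
        err = float(np.abs(w - dequantize(q, scale)).max())
        print(f"memory: {w.nbytes} -> {q.nbytes} bytes, max abs error {err:.4f}")
    ```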
    It's also possible to do quantization-aware training, which performs quantization at the training stage. According to Nvidia, which implemented quantization training in its AI model optimization toolkit, this should cut the memory requirements by 29 to 51 percent.

    Pruning and quantization belong to a category of optimization techniques that rely on tweaking the way AI models work internally: how many parameters they use and how memory-intensive their storage is. These techniques are like tuning an engine in a car to make it go faster and use less fuel. But there's another category of techniques that focuses on the processes computers use to run those AI models rather than on the models themselves, akin to speeding a car up by timing the traffic lights better.

    Finishing first

    Apart from optimizing the AI models themselves, we could also optimize the way data centers run them. Splitting the training-phase workload evenly among 25,000 GPUs introduces inefficiencies. "When you split the model into 100,000 GPUs, you end up slicing and dicing it in multiple dimensions, and it is very difficult to make every piece exactly the same size," Chung said.

    GPUs that have been given significantly larger workloads have increased power consumption that is not necessarily balanced out by those with smaller loads. Chung figured that if GPUs with smaller workloads ran slower, consuming much less power, they would finish at roughly the same time as GPUs processing larger workloads operating at full speed. The trick was to pace each GPU in such a way that the whole cluster would finish at the same time.

    To make that happen, Chung built a software tool called Perseus that identifies the scope of the workloads assigned to each GPU in a cluster. Perseus takes the estimated time needed to complete the largest workload on a GPU running at full speed. It then estimates how much computation must be done on each of the remaining GPUs and determines what speed to run them at so they finish at the same time. "Perseus precisely slows some of the GPUs down, and slowing down means less energy. But the end-to-end speed is the same," Chung said.

    The team tested Perseus by training the publicly available GPT-3, as well as other large language models and a computer vision AI. The results were promising. "Perseus could cut up to 30 percent of energy for the whole thing," Chung said. He said the team is talking about deploying Perseus at Meta, but it takes a long time to deploy something at a large company.
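    As a toy version of that pacing idea (my own sketch of the scheduling arithmetic, not the actual Perseus planner, which is far more sophisticated and works on real training graphs), the snippet below assigns each GPU a speed proportional to its shard size so that every GPU finishes when the one holding the largest shard, running at full speed, does:

    ```python
    # Toy illustration of workload-aware GPU pacing (not the Perseus implementation).
    # Assumes power drops when a GPU runs slower, so GPUs with small shards can be
    # paced down and still finish alongside the largest shard at full speed.

    def pacing_plan(workloads, full_speed=1.0):
        """workloads: work units per GPU. Returns per-GPU speed fractions."""
        makespan = max(workloads) / full_speed      # time dictated by the largest shard
        return [w / (makespan * full_speed) for w in workloads]

    if __name__ == "__main__":
        shards = [100, 80, 95, 60]                  # uneven slices of one training step
        for shard, speed in zip(shards, pacing_plan(shards)):
            print(f"shard {shard:>3} units -> run at {speed:.0%} speed, "
                  f"finishes at t = {shard / speed:.1f}")
    ```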
    Are all those optimizations to the models and the way data centers run them enough to keep us in the green? It takes roughly a year or two to plan and build a data center, but it can take longer than that to build a power plant. So are we winning this race or losing? It's a bit hard to say.

    Back of the envelope

    As the increasing power consumption of data centers became apparent, research groups tried to quantify the problem. A Lawrence Berkeley National Laboratory team estimated that data centers' annual energy draw in 2028 would be between 325 and 580 TWh in the US; that's between 6.7 and 12 percent of total US electricity consumption. The International Energy Agency thinks it will be around 6 percent by 2026. Goldman Sachs Research says 8 percent by 2030, while EPRI claims between 4.6 and 9.1 percent by 2030.

    EPRI also warns that the impact will be even worse because data centers tend to be concentrated in locations investors think are advantageous, like Virginia, which already sends 25 percent of its electricity to data centers. In Ireland, data centers are expected to consume one-third of the electricity produced in the entire country in the near future. And that's just the beginning.

    Running huge AI models like ChatGPT is one of the most power-intensive things that data centers do, but it accounts for roughly 12 percent of their operations, according to Nvidia. That is expected to change if companies like Google start to weave conversational LLMs into their most popular services. The EPRI report estimates that a single Google search today uses around 0.3 watt-hours of energy, while a single ChatGPT query bumps that up to 2.9 watt-hours. Based on those values, the report estimates that an AI-powered Google search would require Google to deploy 400,000 new servers that would consume 22.8 TWh per year.

    "AI searches take 10x the electricity of a non-AI search," Christie, the FERC commissioner, said at a FERC-organized conference. When FERC commissioners are using those numbers, you'd think there would be rock-solid science backing them up. But when Ars asked Chowdhury and Chung about their thoughts on these estimates, they exchanged looks and smiled.

    Closed AI problem

    Chowdhury and Chung don't think those numbers are particularly credible. They feel we know nothing about what's going on inside commercial AI systems like ChatGPT or Gemini, because OpenAI and Google have never released actual power-consumption figures.

    "They didn't publish any real numbers, any academic papers. The only number, 0.3 watt-hours per Google search, appeared in some blog post or other PR-related thingy," Chowdhury said. We don't know how this power consumption was measured, on what hardware, or under what conditions, he said. But at least it came directly from Google.

    "When you take that 10x Google vs. ChatGPT equation or whatever, one part is half-known, the other part is unknown, and then the division is done by some third party that has no relationship with Google nor with OpenAI," Chowdhury said.

    Google's PR-related thingy was published back in 2009, while the 2.9-watt-hours-per-ChatGPT-query figure was probably based on a comment about the number of GPUs needed to train GPT-4 made by Jensen Huang, Nvidia's CEO, in 2024. That means the 10x AI-versus-non-AI-search claim was actually based on power consumption achieved on entirely different generations of hardware separated by 15 years. "But the number seemed plausible, so people keep repeating it," Chowdhury said.

    All reports we have today were done by third parties that are not affiliated with the companies building big AIs, and yet they arrive at weirdly specific numbers. "They take numbers that are just estimates, then multiply those by a whole lot of other numbers and get back with statements like 'AI consumes more energy than Britain, or more than Africa,' or something like that. The truth is they don't know that," Chowdhury said.

    He argues that better numbers would require benchmarking AI models using a formal testing procedure that could be verified through the peer-review process.

    As it turns out, the ML Energy Initiative defined just such a testing procedure and ran the benchmarks on any AI models they could get ahold of. The group then posted the results online on its ML.ENERGY Leaderboard.

    AI-efficiency leaderboard

    To get good numbers, the first thing the ML Energy Initiative got rid of was the idea of estimating how power-hungry GPU chips are by using their thermal design power (TDP), which is basically their maximum power consumption. Using TDP was a bit like rating a car's efficiency based on how much fuel it burned running at full speed. That's not how people usually drive, and that's not how GPUs work when running AI models. So Chung built ZeusMonitor, an all-in-one solution that measures GPU power consumption on the fly.
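    For a flavor of what measuring "on the fly" means (a bare-bones sketch using the pynvml bindings from the nvidia-ml-py package, not ZeusMonitor itself), the snippet below samples the GPU's instantaneous power draw while a workload runs, integrates it into joules, and converts the result to watt-hours:

    ```python
    # Rough GPU energy measurement via NVML polling (illustrative, not ZeusMonitor).
    # Requires nvidia-ml-py (pynvml), an Nvidia GPU, and its driver.
    import threading
    import time
    import pynvml

    def measure_energy(workload, device_index=0, interval_s=0.05):
        """Run workload() and return the energy the GPU drew meanwhile, in joules."""
        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(device_index)

        done = threading.Event()
        worker = threading.Thread(target=lambda: (workload(), done.set()))
        joules, last = 0.0, time.monotonic()
        worker.start()
        while not done.is_set():
            time.sleep(interval_s)
            now = time.monotonic()
            watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # API reports mW
            joules += watts * (now - last)                           # rectangle rule
            last = now
        worker.join()
        pynvml.nvmlShutdown()
        return joules

    if __name__ == "__main__":
        energy_j = measure_energy(lambda: time.sleep(2.0))  # stand-in for an inference call
        print(f"{energy_j:.1f} J = {energy_j / 3600:.4f} Wh")
    ```

    Dividing joules by 3,600 gives watt-hours, which is how the 3,352.92 J per Llama 3.1 405B request reported below works out to roughly 0.93 Wh.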
    For the tests, his team used setups with Nvidia's A100 and H100 GPUs, the ones most commonly used at data centers today, and measured how much energy they used running various large language models (LLMs), diffusion models that generate pictures or videos based on text input, and many other types of AI systems.

    The largest LLM included in the leaderboard was Meta's Llama 3.1 405B, an open-source chat-based AI with 405 billion parameters. It consumed 3,352.92 joules of energy per request running on two H100 GPUs. That's around 0.93 watt-hours, significantly less than the 2.9 watt-hours quoted for ChatGPT queries. These measurements confirmed the improvements in the energy efficiency of hardware. Mixtral 8x22B was the largest LLM the team managed to run on both the Ampere and Hopper platforms. Running the model on two Ampere GPUs resulted in 0.32 watt-hours per request, compared to just 0.15 watt-hours on one Hopper GPU.

    What remains unknown, however, is the performance of proprietary models like GPT-4, Gemini, or Grok. The ML Energy Initiative team says it's very hard for the research community to start coming up with solutions to the energy-efficiency problem when we don't even know what exactly we're facing. We can make estimates, but Chung insists they need to be accompanied by error-bound analysis. We don't have anything like that today.

    The most pressing issue, according to Chung and Chowdhury, is the lack of transparency. "Companies like Google or OpenAI have no incentive to talk about power consumption. If anything, releasing actual numbers would harm them," Chowdhury said. "But people should understand what is actually happening, so maybe we should somehow coax them into releasing some of those numbers."

    Where rubber meets the road

    Energy efficiency in data centers follows a trend similar to Moore's law, only working at a very large scale, instead of on a single chip, Nvidia's Harris said. The power consumption per rack, a unit used in data centers housing between 10 and 14 Nvidia GPUs, is going up, he said, but the performance-per-watt is getting better.

    "When you consider all the innovations going on in software optimization, cooling systems, MEP (mechanical, electrical, and plumbing), and GPUs themselves, we have a lot of headroom," Harris said. He expects this large-scale variant of Moore's law to keep going for quite some time, even without any radical changes in technology.

    There are also more revolutionary technologies looming on the horizon. The idea that drove companies like Nvidia to their current market status was the concept that you could offload certain tasks from the CPU to dedicated, purpose-built hardware. But now, even GPUs will probably use their own accelerators in the future. Neural nets and other parallel computation tasks could be implemented on photonic chips that use light instead of electrons to process information. Photonic computing devices are orders of magnitude more energy-efficient than the GPUs we have today and can run neural networks literally at the speed of light.

    Another innovation to look forward to is 2D semiconductors, which enable building incredibly small transistors and stacking them vertically, vastly improving the computation density possible within a given chip area.
    "We are looking at a lot of these technologies, trying to assess where we can take them," Harris said. "But where rubber really meets the road is how you deploy them at scale. It's probably a bit early to say where the future bang for buck will be."

    The problem is that when we make a resource more efficient, we simply end up using it more. It is the Jevons paradox, known since the beginnings of the industrial age. But will AI energy consumption increase so much that it causes an apocalypse? Chung doesn't think so. According to Chowdhury, if we run out of energy to power up our progress, we will simply slow down.

    "But people have always been very good at finding the way," Chowdhury added.

    Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.