• Yes, Snow White Is Bombing At The Box Office
    www.forbes.com
    Snow White did not impress critics, with a 44% score on Rotten Tomatoes, and even with a higher 74% audience score, the movie is currently bombing at the box office, at least according to these initial figures.
    Snow White brought in $87 million in its opening weekend globally, with just $43 million domestically. That may not mean much out of context, but compared to the slew of recent live-action Disney adaptations, that's definitely bomb territory, at least for now. Here's the list of domestic opening weekends over the same period:
    The Lion King - $191 million
    Beauty and the Beast - $174 million
    Alice in Wonderland - $116 million
    The Jungle Book - $103 million
    The Little Mermaid - $95 million
    Aladdin - $91 million
    Maleficent - $69 million
    Cinderella - $67 million
    Snow White and the Huntsman - $56 million
    Dumbo - $46 million
    Snow White - $43 million
    Mufasa: The Lion King - $35 million
    There are very few ways to spin that. Snow White is close to the worst-performing live-action Disney adaptation ever, doing less than half, and in some cases a quarter, as well as the films higher up the list, and it reviewed more poorly than essentially all of them, which is no doubt a contributor here.
    The single bright spot of the film is said to be Rachel Zegler's performance and singing, but villain Gal Gadot is roundly criticized, as are the film's overall structure, script, and directing (and those horrifying CGI dwarves). Note that even the unconventional adaptation, Snow White and the Huntsman, opened better than this.
    There is one thing that could be a glimmer of hope, however. The lowest film on this list, Mufasa: The Lion King, opened poorly but, despite being an original production (which probably hurt it at the outset), snowballed into eventually making $717 million worldwide, a huge hit.
But that feels like an anomaly rather than something that's going to happen with Snow White. The movie is bad, and it's adapting a 1937 film that is not one of Disney's most popular, despite the wide reach of the character. Not a recipe for success, and it has not found any so far. As for Zegler, the only good aspect of the film, she will go on to play Eva Perón in Evita in London's West End, a prestigious role after her Broadway debut in Romeo + Juliet (which I had front-row seats for, and she was fantastic). But this movie? Nope, this is unequivocally a bad situation.
  • AI supercharges DNA data retrieval, making it 3,200 times faster
    www.techspot.com
    Forward-looking: Researchers around the world are embracing DNA-based storage right now. Mixing digital data and biology could bridge the best of both worlds, though a few challenges are still slowing market and industry adoption.
    Visionary solutions using DNA sequencing have been hailed as the future of the storage world for a few years now. Biology seems to have solved the data encoding problem a few billion years ago, so we could learn a thing or two from nature while we prepare to expand the world's digital realm to 180 zettabytes (180 billion terabytes) by the end of 2025.
    Israeli researchers say they have found a way to significantly improve the data retrieval process, which is one of the biggest issues DNA storage technology is facing right now. A team at the Technion - Israel Institute of Technology used a specially trained AI model to speed up data recovery from DNA strands by 3,200 times. Needless to say, the process is still much slower than "modern" storage technologies available on the market.
    The AI technology in question, known as DNAformer, is based on a transformer model trained by Technion researchers on synthetic data. The data simulator that fed DNAformer was also created at Technion. The model can reconstruct accurate DNA sequences from error-prone copies and can boost data integrity even further thanks to a custom error-correcting algorithm designed to work well with DNA.
    DNAformer is much faster at retrieving data than previously unveiled methods. The AI model can read 100 megabytes 3,200 times faster than the most accurate existing method, and can seemingly do so with no loss of data. Accuracy is improved by "up to" 40 percent as well, which can further decrease the total retrieval time.
    The Israeli researchers tested DNAformer's capabilities on a tiny 3.1-megabyte data set, which included a color still image, a 24-second audio clip, a written piece about DNA storage, and some random data.
The random data was useful to show how the model behaves when dealing with encrypted or compressed digital data. The team achieved a data rate of 1.6 bits per DNA base in a high-noise regime, the official study says, cutting the time needed to read the data back from several days to just 10 minutes.
The Technion team said DNAformer will be further developed and tailored to different data storage needs. The technology can easily scale and adapt to various scenarios, with promising prospects for its adaptability. The researchers are already thinking about "market demands" and future improvements in DNA sequencing to improve their AI technology.
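A quick sanity check on those figures (my own arithmetic, not from the study, and assuming decimal megabytes): at 1.6 bits per base, the 3.1 MB payload corresponds to roughly 15.5 million DNA bases.

```python
# Back-of-the-envelope check: how many DNA bases the reported
# 1.6 bits/base rate implies for the 3.1 MB test data set.

data_bytes = 3.1e6            # 3.1 megabytes (decimal)
bits_per_base = 1.6           # reported rate in the high-noise regime

total_bits = data_bytes * 8
bases_needed = total_bits / bits_per_base

print(f"{bases_needed:,.0f} bases")   # 15,500,000 bases
```

That scale, tens of millions of bases per few megabytes, helps explain why read-back time, not capacity, is the bottleneck the Technion team attacked.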
  • Perplexity still wants to buy TikTok, vows to rebuild algorithm and add community notes
    www.techspot.com
    In a nutshell: AI startup Perplexity is once again proposing the unlikely scenario that it takes over TikTok's US business operations. If the company were able to pull off this feat, it promises to make some major changes to the app, including rebuilding the algorithm, adding community notes, and open-sourcing the recommendation system.
    Perplexity first proposed a merger with TikTok's US operations in January. The plan would see the US government hold 50% ownership of the company but have no influence over TikTok's day-to-day operations and not be granted a seat on the merged entity's board.
    ByteDance has until April 5 to find a US buyer for TikTok's operations in the US, so Perplexity is once again putting forward a case for why it would be a good choice.
    The AI firm writes that it is singularly positioned to rebuild the TikTok algorithm without creating a monopoly, combining "world-class technical capabilities with Little Tech independence."
    It adds that any acquisition by a consortium of investors, such as the group that includes YouTube star MrBeast, could effectively keep ByteDance in control of the algorithm, while any acquisition by a competitor would likely create a monopoly in the short-form video space.
    Perplexity writes that "all of society benefits when content feeds are liberated from the manipulations of foreign governments and globalist monopolists."
    Perplexity's pitch includes rebuilding TikTok's algorithm from the ground up in American data centers and with American oversight. It also wants to make the "For You" recommendation system transparent and open source.
    Another proposed change is the introduction of community notes, in which contributors can add context, such as fact-checks, under a post.
The feature has proven so popular on X, the platform formerly known as Twitter, that Meta CEO Mark Zuckerberg said he is introducing it to Facebook and Instagram.
Perplexity writes that the community notes will sit alongside the citations feature used by its own search engine to "turn TikTok into the most neutral and trusted platform in the world."
The rest of Perplexity's proposal includes upgrading TikTok's AI infrastructure using Nvidia Dynamo technology, enhancing the app's search feature with Perplexity's answer engine, and improving personalization for those who connect their Perplexity and TikTok accounts.
Exactly where Perplexity would find the money to buy TikTok is unclear. It has been looking to raise funds at an $18 billion valuation, but TikTok's US operations have been valued at up to $50 billion.
TikTok had until January 19, 2025, to find a US buyer or be banned in the US under legislation passed in 2024. Donald Trump signed an executive order extending the deadline to April 5, 2025, when he took office, though the President has said he would "probably" extend it again if necessary.
  • Apple AirPods Max finally get lossless audio and analog support
    www.digitaltrends.com
    Apple is about to correct one of the most glaring omissions on its AirPods Max wireless noise-canceling headphones: Starting in April, the headphones will get a firmware update that enables lossless audio via the included USB-C cable at up to 24-bit/48kHz.
    And starting today, Apple is selling a 3.5mm-to-USB-C accessory cable that lets the newest version of the AirPods Max connect to analog audio sources like airplane jacks, something these headphones haven't been able to do since they launched.
    Apple says that in addition to the obvious benefit of finally being able to listen to lossless audio sources (including all lossless tracks on Apple Music) without additional compression, you'll also be able to use Personalized Spatial Audio (with or without head tracking). The company highlights the importance of this feature to creators and musicians: Next month, AirPods Max will become the only headphones that enable musicians to both create and mix in Personalized Spatial Audio with head tracking.
    Unfortunately, it appears that support for lossless audio is limited to just the new, USB-C-equipped version of the AirPods Max, which was announced in October 2024. I've asked Apple about lossless support for the original, Lightning-equipped AirPods Max and will update this post if and when I hear back.
    While the additions of lossless and analog audio are welcome changes, I think Apple still has more work to do. The AirPods Max should have longer battery life, and Apple needs to work out a way to support lossless audio wirelessly, whether via Bluetooth, Wi-Fi, or UWB. Lossless audio is great, but you shouldn't have to be tethered to your computer or phone to experience it.
  • Samsung Galaxy S25 Edge specs continue to leak as launch nears
    www.digitaltrends.com
    Samsung is preparing to launch another flagship model, the Galaxy S25 Edge. This slim phone was originally teased at the launch of the Galaxy S25 in January, with rumors pointing to an April 16 launch date for the new device.
    So far we've seen a range of specs for the phone, but thanks to a reliable leaker, it looks like we're closer to some of the details. According to UniverseIce, posting on Weibo, the Galaxy S25 Edge will come with a titanium alloy frame.
    This isn't the first time that titanium has been suggested: we previously covered news of the colors of the S25 Edge, which are said to be Titanium Icyblue, Titanium Silver, and Titanium Jetblack. You'll notice that these follow the styling of the Galaxy S25 Ultra, which also uses titanium in its construction. While there was some debate about the expected choice of materials (aluminum and ceramic had both been hinted at), it now looks like we're set on titanium.
    That makes perfect sense: if you're creating a premium thin-and-light phone, then titanium is the best material for the job.
    That's not the only detail that UniverseIce has shared, however. The leak also claims that the phone will come with a 2K display, suggesting that it's going to have a resolution of 3120 x 1440 pixels. It's expected to be a 6.7-inch display, so it's probably the same as the Galaxy S25 Plus screen.
    It's thought that Samsung will launch the Galaxy S25 Edge to compete with the anticipated iPhone Air, but use it to trial the design before replacing the regular Galaxy S models with a slimmer build, perhaps in 2026.
    We're expecting this to be a Snapdragon 8 Elite for Galaxy phone. I'd want to see a big vapor chamber in there for cooling, especially as the SD 8 Elite seems to run a little warm, but that might explain why the battery shrinks to 3,786mAh. That's smaller than the Galaxy S25, which has a 4,000mAh battery.
It seems that battery life could be the sacrifice we're asked to make for going slim. If that 6.7-inch display is the same as the S25 Plus, then it will be a 120Hz AMOLED with 2,600 nits peak brightness. The cameras are expected to be a 200-megapixel main and a 12-megapixel ultrawide.
The phone is said to cost around $1,400, which sounds pretty expensive and might not see enthusiastic adoption, but it feels like there's still a bit to learn about Samsung's plans to slim down its phones.
  • Ubisoft Shares Surge After Blockbuster Launch of New Assassin's Creed Game
    www.wsj.com
    Shares jumped after Assassin's Creed Shadows garnered 2 million players less than a week after its release, surpassing the launches of Assassin's Creed Origins and Assassin's Creed Odyssey.
  • Can we make AI less power-hungry? These researchers are working on it.
    arstechnica.com
    feeding the beast
    Can we make AI less power-hungry? These researchers are working on it.
    As demand surges, figuring out the performance of proprietary models is half the battle.
    Jacek Krywko - Mar 24, 2025 7:00 am
    Credit: Igor Borisenko/Getty Images
    At the beginning of November 2024, the US Federal Energy Regulatory Commission (FERC) rejected Amazon's request to buy an additional 180 megawatts of power directly from the Susquehanna nuclear power plant for a data center located nearby. The rejection was based on the argument that buying power directly, instead of getting it through the grid like everyone else, works against the interests of other users.
    "Demand for power in the US has been flat for nearly 20 years. But now we're seeing load forecasts shooting up. Depending on [what] numbers you want to accept, they're either skyrocketing or they're just rapidly increasing," said Mark Christie, a FERC commissioner.
    Part of the surge in demand comes from data centers, and their increasing thirst for power comes in part from running increasingly sophisticated AI models. As with all world-shaping developments, what set this trend into motion was vision, quite literally.
    The AlexNet moment
    Back in 2012, Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton, AI researchers at the University of Toronto, were busy working on a convolutional neural network (CNN) for the ImageNet Large Scale Visual Recognition Challenge, an image-recognition contest. The contest's rules were fairly simple: A team had to build an AI system that could categorize images sourced from a database comprising over a million labeled pictures.
    The task was extremely challenging at the time, so the team figured they needed a really big neural net, way bigger than anything other research teams had attempted.
AlexNet, named after the lead researcher, had multiple layers, with over 60 million parameters and 650,000 neurons. The problem with a behemoth like that was how to train it.
What the team had in their lab were a few Nvidia GTX 580s, each with 3GB of memory. As the researchers wrote in their paper, AlexNet was simply too big to fit on any single GPU they had. So they figured out how to split AlexNet's training phase between two GPUs working in parallel: half of the neurons ran on one GPU, and the other half ran on the other.
AlexNet won the 2012 competition by a landslide, but the team accomplished something far more profound. The size of AI models was once and for all decoupled from what was possible to do on a single CPU or GPU. The genie was out of the bottle. (The AlexNet source code was recently made available through the Computer History Museum.)
The balancing act
After AlexNet, using multiple GPUs to train AI became a no-brainer. Increasingly powerful AIs used tens of GPUs, then hundreds, thousands, and more. But it took some time before this trend started making its presence felt on the grid. According to an Electric Power Research Institute (EPRI) report, the power consumption of data centers was relatively flat between 2010 and 2020. That doesn't mean the demand for data center services was flat; rather, the improvements in data centers' energy efficiency were sufficient to offset the fact that we were using them more.
Two key drivers of that efficiency were the increasing adoption of GPU-based computing and improvements in the energy efficiency of those GPUs. "That was really core to why Nvidia was born. We paired CPUs with accelerators to drive the efficiency onward," said Dion Harris, head of Data Center Product Marketing at Nvidia.
In the 2010-2020 period, Nvidia data center chips became roughly 15 times more efficient, which was enough to keep data center power consumption steady.
All that changed with the rise of enormous large language transformer models, starting with ChatGPT in 2022. "There was a very big jump when transformers became mainstream," said Mosharaf Chowdhury, a professor at the University of Michigan. (Chowdhury is also at the ML Energy Initiative, a research group focusing on making AI more energy-efficient.)
Nvidia has kept up its efficiency improvements, with a ten-fold boost between 2020 and today. The company also kept improving chips that were already deployed. "A lot of where this efficiency comes from was software optimization. Only last year, we improved the overall performance of Hopper by about 5x," Harris said. Despite these efficiency gains, based on Lawrence Berkeley National Laboratory estimates, the US saw data center power consumption shoot up from around 76 TWh in 2018 to 176 TWh in 2023.
The AI lifecycle
LLMs work with tens of billions of neurons, approaching a number rivaling, and perhaps even surpassing, the count in the human brain. GPT-4 is estimated to work with around 100 billion neurons distributed over 100 layers, and over 100 trillion parameters that define the strength of connections among the neurons. These parameters are set during training, when the AI is fed huge amounts of data and learns by adjusting their values. That's followed by the inference phase, where it gets busy processing queries coming in every day.
The training phase is a gargantuan computational effort: OpenAI supposedly used over 25,000 Nvidia Ampere-generation A100 GPUs running on all cylinders for 100 days. The estimated power consumption is 50 GWh, which is enough to power a medium-sized town for a year. According to numbers released by Google, training accounts for 40 percent of the total AI model power consumption over its lifecycle.
The remaining 60 percent is inference, where power consumption figures are less spectacular but add up over time.
Trimming AI models down
The increasing power consumption has pushed the computer science community to think about how to keep memory and computing requirements down without sacrificing performance too much. "One way to go about it is reducing the amount of computation," said Jae-Won Chung, a researcher at the University of Michigan and a member of the ML Energy Initiative.
One of the first things researchers tried was a technique called pruning, which aims to reduce the number of parameters. Yann LeCun, now the chief AI scientist at Meta, proposed this approach back in 1989, terming it (somewhat menacingly) "optimal brain damage." You take a trained model and remove some of its parameters, usually targeting the ones at or near zero, which add nothing to the overall performance. "You take a large model and distill it into a smaller model, trying to preserve the quality," Chung explained.
You can also make those remaining parameters leaner with a trick called quantization. Parameters in neural nets are usually represented as single-precision floating point numbers, occupying 32 bits of computer memory. "But you can change the format of parameters to a smaller one that reduces the amount of needed memory and makes the computation faster," Chung said.
Shrinking an individual parameter has a minor effect, but when there are billions of them, it adds up. It's also possible to do quantization-aware training, which performs quantization at the training stage. According to Nvidia, which implemented quantization training in its AI model optimization toolkit, this should cut the memory requirements by 29 to 51 percent.
Pruning and quantization belong to a category of optimization techniques that rely on tweaking the way AI models work internally: how many parameters they use and how memory-intensive their storage is.
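The quantization idea can be sketched in a few lines (a minimal illustration of the general technique, not Nvidia's toolkit; production systems use calibration data and per-channel scales): a float32 weight tensor is mapped to 8-bit integers with a single scale factor, cutting that tensor's memory footprint by 4x at the cost of a small, bounded rounding error.

```python
import numpy as np

# Illustrative post-training quantization of one weight tensor:
# map float32 values to int8 using a single symmetric scale factor.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.1, size=1000).astype(np.float32)

scale = np.abs(weights).max() / 127.0          # symmetric range [-127, 127]
q = np.round(weights / scale).astype(np.int8)  # 8-bit representation
dequantized = q.astype(np.float32) * scale     # approximate reconstruction

print(weights.nbytes // q.nbytes)  # 4: int8 needs a quarter of the memory
```

The per-parameter rounding error is at most half the scale factor, which is why quantizing billions of parameters can preserve model quality while slashing memory.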
These techniques are like tuning an engine in a car to make it go faster and use less fuel. But there's another category of techniques that focus on the processes computers use to run those AI models instead of the models themselves, akin to speeding a car up by timing the traffic lights better.
Finishing first
Apart from optimizing the AI models themselves, we could also optimize the way data centers run them. Splitting the training-phase workload evenly among 25,000 GPUs introduces inefficiencies. "When you split the model into 100,000 GPUs, you end up slicing and dicing it in multiple dimensions, and it is very difficult to make every piece exactly the same size," Chung said.
GPUs that have been given significantly larger workloads have increased power consumption that is not necessarily balanced out by those with smaller loads. Chung figured that if GPUs with smaller workloads ran slower, consuming much less power, they would finish at roughly the same time as GPUs processing larger workloads at full speed. The trick was to pace each GPU in such a way that the whole cluster would finish at the same time.
To make that happen, Chung built a software tool called Perseus that identifies the scope of the workloads assigned to each GPU in a cluster. Perseus takes the estimated time needed to complete the largest workload on a GPU running at full speed. It then estimates how much computation must be done on each of the remaining GPUs and determines what speed to run them at so they all finish at the same time. "Perseus precisely slows some of the GPUs down, and slowing down means less energy. But the end-to-end speed is the same," Chung said.
The team tested Perseus by training the publicly available GPT-3, as well as other large language models and a computer vision AI. The results were promising. "Perseus could cut up to 30 percent of energy for the whole thing," Chung said.
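The pacing idea can be sketched with a toy model (my own illustration, not the actual Perseus code; it assumes dynamic power scales roughly with the cube of clock speed, a common first-order approximation, and ignores idle and static power):

```python
# Toy model of workload-aware GPU pacing: GPUs with smaller
# workloads run slower so that everyone finishes with the straggler.

workloads = [100, 80, 60, 95]   # hypothetical work units per GPU
t_finish = max(workloads)       # the largest workload sets the pace

# Relative speed each GPU needs so its runtime w / s equals t_finish.
speeds = [w / t_finish for w in workloads]

# Paced energy: power ~ speed**3, every GPU runs for t_finish.
energy_paced = sum(s**3 * t_finish for s in speeds)
# Full-speed energy: each GPU runs at speed 1 for time w, then idles.
energy_full = sum(w * 1.0**3 for w in workloads)

saving = 1 - energy_paced / energy_full
print(f"{saving:.0%}")          # about 23% less compute energy here
```

Because the end-to-end time is still set by the largest workload, slowing the lighter GPUs costs nothing in throughput, which is exactly the trade Perseus exploits.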
He said the team is talking about deploying Perseus at Meta, "but it takes a long time to deploy something at a large company."
Are all those optimizations to the models and the way data centers run them enough to keep us in the green? It takes roughly a year or two to plan and build a data center, but it can take longer than that to build a power plant. So are we winning this race or losing? It's a bit hard to say.
Back of the envelope
As the increasing power consumption of data centers became apparent, research groups tried to quantify the problem. A Lawrence Berkeley National Laboratory team estimated that data centers' annual energy draw in 2028 would be between 325 and 580 TWh in the US; that's between 6.7 and 12 percent of the total US electricity consumption. The International Energy Agency thinks it will be around 6 percent by 2026. Goldman Sachs Research says 8 percent by 2030, while EPRI claims between 4.6 and 9.1 percent by 2030.
EPRI also warns that the impact will be even worse because data centers tend to be concentrated at locations investors think are advantageous, like Virginia, which already sends 25 percent of its electricity to data centers. In Ireland, data centers are expected to consume one-third of the electricity produced in the entire country in the near future. And that's just the beginning.
Running huge AI models like ChatGPT is one of the most power-intensive things that data centers do, but it accounts for roughly 12 percent of their operations, according to Nvidia. That is expected to change if companies like Google start to weave conversational LLMs into their most popular services. The EPRI report estimates that a single Google search today uses around 0.3 watt-hours of energy, while a single ChatGPT query bumps that up to 2.9 watt-hours.
Based on those values, the report estimates that an AI-powered Google search would require Google to deploy 400,000 new servers that would consume 22.8 TWh per year.
"AI searches take 10x the electricity of a non-AI search," Christie, the FERC commissioner, said at a FERC-organized conference. When FERC commissioners are using those numbers, you'd think there would be rock-solid science backing them up. But when Ars asked Chowdhury and Chung about their thoughts on these estimates, they exchanged looks and smiled.
Closed AI problem
Chowdhury and Chung don't think those numbers are particularly credible. They feel we know nothing about what's going on inside commercial AI systems like ChatGPT or Gemini, because OpenAI and Google have never released actual power-consumption figures.
"They didn't publish any real numbers, any academic papers. The only number, 0.3 watts per Google search, appeared in some blog post or other PR-related thingy," Chowdhury said. We don't know how this power consumption was measured, on what hardware, or under what conditions, he said. But at least it came directly from Google.
"When you take that 10x Google vs. ChatGPT equation or whatever, one part is half-known, the other part is unknown, and then the division is done by some third party that has no relationship with Google or with OpenAI," Chowdhury said.
Google's "PR-related thingy" was published back in 2009, while the 2.9-watt-hours-per-ChatGPT-query figure was probably based on a comment about the number of GPUs needed to train GPT-4 made by Jensen Huang, Nvidia's CEO, in 2024. That means the 10x AI-versus-non-AI search claim was actually based on power consumption measured on entirely different generations of hardware separated by 15 years. "But the number seemed plausible, so people keep repeating it," Chowdhury said.
All the reports we have today were done by third parties that are not affiliated with the companies building big AIs, and yet they arrive at weirdly specific numbers.
"They take numbers that are just estimates, then multiply those by a whole lot of other numbers and get back with statements like 'AI consumes more energy than Britain, or more than Africa,' or something like that. The truth is they don't know that," Chowdhury said.
He argues that better numbers would require benchmarking AI models using a formal testing procedure that could be verified through the peer-review process.
As it turns out, the ML Energy Initiative defined just such a testing procedure and ran the benchmarks on any AI models they could get ahold of. The group then posted the results online on their ML.ENERGY Leaderboard.
AI-efficiency leaderboard
To get good numbers, the first thing the ML Energy Initiative got rid of was the idea of estimating how power-hungry GPU chips are by using their thermal design power (TDP), which is basically their maximum power consumption. Using TDP was a bit like rating a car's efficiency based on how much fuel it burned running at full speed. That's not how people usually drive, and that's not how GPUs work when running AI models. So Chung built ZeusMonitor, an all-in-one solution that measures GPU power consumption on the fly.
For the tests, his team used setups with Nvidia's A100 and H100 GPUs, the ones most commonly used at data centers today, and measured how much energy they used running various large language models (LLMs), diffusion models that generate pictures or videos based on text input, and many other types of AI systems.
The largest LLM included in the leaderboard was Meta's Llama 3.1 405B, an open-source chat-based AI with 405 billion parameters. It consumed 3,352.92 joules of energy per request running on two H100 GPUs. That's around 0.93 watt-hours, significantly less than the 2.9 watt-hours quoted for ChatGPT queries. These measurements confirmed the improvements in the energy efficiency of hardware. Mixtral 8x22B was the largest LLM the team managed to run on both Ampere and Hopper platforms.
Running the model on two Ampere GPUs resulted in 0.32 watt-hours per request, compared to just 0.15 watt-hours on one Hopper GPU.
What remains unknown, however, is the performance of proprietary models like GPT-4, Gemini, or Grok. The ML Energy Initiative team says it's very hard for the research community to start coming up with solutions to the energy-efficiency problems when we don't even know what exactly we're facing. We can make estimates, but Chung insists they need to be accompanied by error-bound analysis. We don't have anything like that today.
The most pressing issue, according to Chung and Chowdhury, is the lack of transparency. "Companies like Google or OpenAI have no incentive to talk about power consumption. If anything, releasing actual numbers would harm them," Chowdhury said. "But people should understand what is actually happening, so maybe we should somehow coax them into releasing some of those numbers."
Where rubber meets the road
"Energy efficiency in data centers follows a trend similar to Moore's law, only working at a very large scale, instead of on a single chip," Nvidia's Harris said. The power consumption per rack, a unit used in data centers housing between 10 and 14 Nvidia GPUs, is going up, he said, but the performance-per-watt is getting better.
"When you consider all the innovations going on in software optimization, cooling systems, MEP (mechanical, electrical, and plumbing), and GPUs themselves, we have a lot of headroom," Harris said. He expects this large-scale variant of Moore's law to keep going for quite some time, even without any radical changes in technology.
There are also more revolutionary technologies looming on the horizon. The idea that drove companies like Nvidia to their current market status was the concept that you could offload certain tasks from the CPU to dedicated, purpose-built hardware. But now, even GPUs will probably use their own accelerators in the future.
Neural nets and other parallel computation tasks could be implemented on photonic chips that use light instead of electrons to process information. Photonic computing devices are orders of magnitude more energy-efficient than the GPUs we have today and can run neural networks literally at the speed of light.
Another innovation to look forward to is 2D semiconductors, which enable building incredibly small transistors and stacking them vertically, vastly improving the computation density possible within a given chip area. "We are looking at a lot of these technologies, trying to assess where we can take them," Harris said. "But where rubber really meets the road is how you deploy them at scale. It's probably a bit early to say where the future bang for buck will be."
The problem is that when we make a resource more efficient, we simply end up using it more. It is the Jevons paradox, known since the beginnings of the industrial age. But will AI energy consumption increase so much that it causes an apocalypse? Chung doesn't think so. According to Chowdhury, if we run out of energy to power up our progress, we will simply slow down.
"But people have always been very good at finding the way," Chowdhury added.
Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.
  • CIOs Confront Cloud Budget Overruns With Smarter Cost Management
    www.informationweek.com
    Nathan Eddy, Freelance Writer | March 24, 2025 | 5 Min Read
    Cloud storage was once hailed as a cost-effective solution for businesses, but hidden fees and unpredictable costs are causing widespread financial strain.
    More than half of businesses globally have experienced IT or business delays due to unexpected cloud storage expenses, and 62% of organizations exceeded their cloud budgets last year, according to a report from Wasabi.
    As chief information officers and IT leaders reassess cloud spending, many are looking for new strategies to prevent waste, improve forecasting, and better manage their data storage policies.
    Soumya Gangopadhyay, technology strategist at EY, points to a lack of financial transparency and poor forecasting as key reasons why cloud costs spiral out of control. "Certain issues arise when organizations don't track IT costs in a way that enables breaking out expenses to support analysis or forecasts," he says. Data egress fees, complex storage tiering, and sudden spikes in data processing all contribute to budget overruns.
    He cautions that without clear visibility into usage and cost structures, companies will struggle to predict expenses, leading to unforeseen financial burdens.
    Egress Fees, Over-Provisioning Drive Up Costs
    One of the biggest financial pitfalls in cloud storage is egress fees, the costs incurred when transferring data out of a cloud provider's ecosystem. These fees, often overlooked in budgeting, can add up quickly and disrupt IT operations.
    Will Milewski, senior vice president of cloud infrastructure and operations at Hyland, notes businesses frequently underestimate the impact of egress fees.
With regulatory shifts like the European Data Act prompting major providers to adjust these fees, organizations are still challenged by unanticipated usage that drives up costs, he says via email.He explains IT leaders can mitigate these impacts by consolidating data within a single ecosystem, employing intelligent tiering strategies, and utilizing data compression or deduplication techniques.Beyond egress fees, companies are also over-provisioning cloud resources, paying for storage they dont fully utilize.Many organizations, eager to embrace cloud agility, end up spending more than necessary due to a lack of integrated visibility across their data assets.Cost overruns often stem from over-provisioning, unpredictable data growth, and the complexity of managing diverse data workloads, Milewski says. By leveraging unified platforms, companies can streamline workflows, improve forecasting, and right-size storage needs.Related:The Challenge of Cloud Cost TransparencyWhile cloud providers offer cost management tools, many organizations find pricing models too complex to navigate effectively.Gangopadhyay explains some cloud providers obscure costs through complicated pricing structures, making it difficult for IT teams to plan accordingly. Not all providers offer robust tools for forecasting costs based on usage patterns, which is another factor organizations should consider when working with a cloud provider, he says.Milewski echoes this concern, pointing out that cloud providers are offering more AI-driven cost management tools, but expertise is required to use them effectively. Were seeing cloud providers introduce reserved pricing models, savings plans, and AI-driven cost dashboards, he says. 
However, many pricing structures remain complex, requiring organizations to build in-house expertise or partner with specialized vendors.Without dedicated cost management teams or external partners, businesses often struggle to fully optimize cloud spending.IT Leaders Take Control of Cloud CostsCIOs and IT leaders can execute several proactive measures as they look to regain control of their cloud budgets.Related:Gangopadhyay suggests implementing real-time monitoring tools, resource tagging taxonomies, and predictive analytics to improve cost forecasting. Organizations need to have a clear understanding or adherence to existing capabilities and performance -- without it, engineering workload performance can be a challenge, he says.By leveraging historical data and automating governance policies, businesses can eliminate waste and prevent unexpected cost spikes.Milewski advises companies to audit their storage policies and shift to a more strategic, tiered approach. Optimizing storage begins with aligning data policies to actual usage, he says. 
Prioritizing high-performance tiers for critical content while shifting less-accessed data to real solutions ensures cost efficiency without compromising performance or compliance.He also highlights automation and AI-driven insights as key tools for identifying redundancies and reducing expenses.Another crucial step is building a chargeback model that aligns IT costs with business strategy.Gangopadhyay says he believes organizations should implement chargeback mechanisms that assign storage costs to individual business units, making cloud expenses more transparent.Developing an enterprise chargeback strategy ensures that cloud spending is directly tied to business objectives, he says.By making business units accountable for their storage usage, companies can drive more responsible cloud consumption.The Future of Cloud Cost ManagementAs cloud storage pricing evolves, IT leaders must stay ahead of emerging trends to keep costs under control.Gangopadhyay says he expects increased competition among cloud providers, which could lead to more dynamic pricing models. We can expect to see more providers adopting real-time usage-based pricing and offering incentives for eco-friendly storage options, he says.Companies that embrace flexible budgeting practices and sustainable cloud solutions will be better positioned to navigate shifting cost structures.Milewski predicts that AI and automation will play a bigger role in optimizing cloud spending.The cloud storage landscape is evolving toward more dynamic, consumption-based pricing models, he says. 
Businesses will need to embrace FinOps practices, leveraging advanced analytics and automated tools, to adapt to these trends.FinOps, or cloud financial management, is becoming increasingly critical for organizations aiming to turn unpredictable expenses into predictable, manageable investments.Gangopadhyay stresses the key to reducing waste is aligning cloud costs with business goals.Reducing cloud expenses comes down to aligning business goals with business costs, he says. Organizations can better identify and eliminate unnecessary or redundant data by implementing automated policies, conducting regular audits, and establishing clear retention guidelines.Milewski underscores the importance of staying ahead of pricing trends and investing in cost optimization strategies.By leveraging automation, real-time monitoring, and AI-driven insights, companies can ensure that their cloud investments remain both strategic and cost-efficient.Businesses that combine modern infrastructure with intelligent cost management can empower themselves to navigate future challenges effectively, he says.About the AuthorNathan EddyFreelance WriterNathan Eddy is a freelance writer for InformationWeek. He has written for Popular Mechanics, Sales & Marketing Management Magazine, FierceMarkets, and CRN, among others. In 2012 he made his first documentary film, The Absent Column. He currently lives in Berlin.See more from Nathan EddyReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like
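The tag-and-chargeback approach Gangopadhyay describes can be sketched in a few lines of Python. This is a minimal illustration only: the record shapes, team names, and dollar amounts are invented, and a real implementation would read a cloud provider's cost-and-usage export rather than a hard-coded list.

```python
from collections import defaultdict

# Hypothetical billing records for illustration; in practice these would come
# from a provider's cost-and-usage report, with costs keyed by resource tags.
billing_records = [
    {"service": "object-storage", "cost_usd": 1200.0, "tags": {"team": "analytics"}},
    {"service": "egress",         "cost_usd":  340.0, "tags": {"team": "analytics"}},
    {"service": "object-storage", "cost_usd":  800.0, "tags": {"team": "marketing"}},
    {"service": "egress",         "cost_usd":   60.0, "tags": {}},  # untagged spend
]

def charge_back(records):
    """Roll spend up by the 'team' tag; untagged costs are surfaced under
    their own bucket so they can be investigated rather than hidden."""
    totals = defaultdict(float)
    for rec in records:
        owner = rec["tags"].get("team", "UNTAGGED")
        totals[owner] += rec["cost_usd"]
    return dict(totals)

print(charge_back(billing_records))
# {'analytics': 1540.0, 'marketing': 800.0, 'UNTAGGED': 60.0}
```

Surfacing an explicit UNTAGGED bucket is the point of the tagging taxonomy: it makes gaps in cost attribution visible, which is a prerequisite for the accountability the chargeback model is meant to create.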
  • Water can turn into a superacid that makes diamonds
    www.newscientist.com
    Superacids can turn carbon molecules into diamonds (Sefa Kart/Alamy)

    Water may transform into a superacidic fluid under extreme heat and pressure. These conditions are found only in Earth's interior, within icy planets like Uranus and Neptune, and possibly in controlled laboratory experiments. "Under immense pressures and temperatures, water exhibits a remarkable property: it becomes an exceptionally potent acid, also known as a superacid, which can be billions or even trillions of times stronger than sulphuric acid," says Flavio Siro Brigiano at Sorbonne University in France. This…
  • Greenland has gained over 1600 km of new coastline as glaciers retreat
    www.newscientist.com
    A warning sign in Greenland about tsunamis caused by icebergs (ODD ANDERSEN/AFP via Getty Images)

    Rising temperatures in the Arctic are causing glaciers to retreat onto land, exposing thousands of kilometres of coastline in Greenland and other areas, with potential geopolitical consequences. Jan Kavan at the University of South Bohemia in the Czech Republic and his colleagues used satellite imagery taken in 2000 and 2020 to track changes in marine-terminating glaciers across the northern hemisphere. They found that almost 2500 kilometres of new coastline had emerged over this period due to glaciers retreating onto land, with two-thirds of this…