• WWW.MARKTECHPOST.COM
    HBI V2: A Flexible AI Framework that Elevates Video-Language Learning with a Multivariate Co-Operative Game
Video-Language Representation Learning is a crucial subfield of multi-modal representation learning that focuses on the relationship between videos and their associated textual descriptions. Its applications span numerous areas, from question answering and text retrieval to summarization. In this regard, contrastive learning has emerged as a powerful technique that elevates video-language learning by enabling networks to learn discriminative representations. Here, global semantic interactions between predefined video-text pairs are utilized for learning.

One big issue with this method is that it undermines the model's quality on downstream tasks. These models typically use video-text semantics to perform coarse-grained feature alignment. Contrastive video models are, therefore, unable to align fine-grained annotations that capture the subtleties and interpretability of the video. The naïve approach to this fine-grained annotation problem would be to create a massive dataset of high-quality annotations, which is unfortunately unavailable, especially for vision-language models. This article discusses the latest research that solves the problem of fine-grained alignment through a game.

Peking University and Pengcheng Laboratory researchers introduced a Hierarchical Banzhaf Interaction approach to solve alignment issues in general video-language representation learning by modeling it as a multivariate cooperative game. The authors designed this game with video and text formulated as players. For this purpose, they grouped the collection of multiple representations into a coalition and used Banzhaf Interaction, a game-theoretic interaction index, to measure the degree of cooperation between coalition members.

The research team extends their conference paper on a learning framework with Hierarchical Banzhaf Interaction, where they leveraged cross-modality semantics measurement as the functional characteristic of players in the video-text cooperative game. In this paper, the authors propose HBI V2, which leverages single-modal and cross-modal representations to mitigate the biases in the Banzhaf Index and enhance video-language learning. In HBI V2, the authors reconstruct the representations for the game by integrating single- and cross-modal representations, which are dynamically weighted to ensure fine granularity from individual representations while preserving the cross-modal interactions.

Regarding impact, HBI V2 surpasses HBI with its capability to perform various downstream tasks, from text-video retrieval to VideoQA and video captioning. To achieve this, the authors modified their previous structure into a flexible encoder-decoder framework, where the decoder is adapted for specific tasks.

The HBI V2 framework is divided into three submodules: Representation Reconstruction, the HBI Module, and Task-Specific Prediction Heads. The first module facilitates the fusion of single- and cross-modal components. The research team used CLIP to generate both representations; for video input, frame sequences are encoded into embeddings with ViT. This component integration helped overcome the problems of dynamically encoding video while preserving inherent granularity and adaptability. For the HBI module, the authors modeled video and text as players in a multivariate cooperative game to handle the uncertainty during fine-grained interactions.
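To make the game-theoretic idea concrete, here is a small illustrative sketch of the pairwise Banzhaf Interaction index. This is not the authors' implementation: the player names, similarity scores, and the toy characteristic function (the best video-text match inside a coalition) are assumptions invented for the example, whereas HBI operates on learned CLIP embeddings within a hierarchy.

```python
from itertools import chain, combinations

# Hypothetical cross-modal similarities between video frames and text tokens
# (all names and numbers here are made up for illustration).
SIM = {
    ("frame_dog", "word_dog"): 0.9,
    ("frame_dog", "word_sky"): 0.1,
    ("frame_sky", "word_dog"): 0.2,
    ("frame_sky", "word_sky"): 0.8,
}
PLAYERS = ["frame_dog", "frame_sky", "word_dog", "word_sky"]

def payoff(coalition):
    """Assumed characteristic function: the best video-text match inside the coalition."""
    scores = [s for (f, w), s in SIM.items() if f in coalition and w in coalition]
    return max(scores, default=0.0)

def banzhaf_interaction(i, j, players=PLAYERS, v=payoff):
    """Banzhaf Interaction index for the pair {i, j}: the average marginal benefit
    of having i and j together rather than separately, over all coalitions S
    drawn from the remaining players."""
    others = [p for p in players if p not in (i, j)]
    subsets = chain.from_iterable(combinations(others, r) for r in range(len(others) + 1))
    terms = []
    for s in subsets:
        s = set(s)
        terms.append(v(s | {i, j}) - v(s | {i}) - v(s | {j}) + v(s))
    return sum(terms) / len(terms)  # len(terms) == 2 ** (len(players) - 2)

print(banzhaf_interaction("frame_dog", "word_dog"))  # high: a well-matched video-text pair
print(banzhaf_interaction("frame_dog", "word_sky"))  # low/negative: a mismatched pair
```

In this toy setting, a semantically matched frame-word pair receives a high interaction score and a mismatched pair a low or negative one, which is the kind of fine-grained signal the paper extracts without manual labels.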
The first two modules provide flexibility to the framework, enabling the third module to be tailored for a given task without requiring sophisticated multi-modal fusion or reasoning stages.

In the paper, HBI V2 was evaluated on various text-video retrieval, video QA, and video captioning datasets, with multiple suitable metrics for each. Surprisingly, the proposed method outperformed its predecessor and all other methods on all the downstream tasks. Additionally, the framework achieved notable advancements over HBI on the MSVD-QA and ActivityNet-QA datasets, which assessed its question-answering abilities. Regarding reproducibility and inference, the inference time was 1 second for the whole test set.

Conclusion: The proposed method uniquely and effectively utilized Banzhaf Interaction to provide fine-grained labels for video-text relationships without manual annotations. HBI V2 extended the preceding HBI to infuse the granularity of single-modal representations into cross-modal representations. The framework exhibited superiority and the flexibility to perform various downstream tasks.

Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.
  • TOWARDSAI.NET
    Going Beyond the 1000-Layer Convolution Network
Author(s): Bartosz Ludwiczuk. Originally published on Towards AI.

Contents: Introduction · Vanishing gradient issue · Mitigation of the vanishing gradient issue · Training a 1000-layer network · Training component analysis · Diving Deeper into Skip Connections · 10,000-layer network

Mean gradient for the 1st layer in all experiments. Image by author

Introduction
One of the largest Convolutional Networks, ConvNext-XXLarge[1] from OpenCLIP[2], boasts approximately 850 million parameters and 120 layers (counting all convolutional and linear layers). This is a dramatic increase compared to the 8 layers of AlexNet[3], but still fewer than the 1001-layer experiment introduced in the PreResNet[4] paper. Interestingly, about a decade ago, training networks with more than 100 layers was considered nearly impossible due to the vanishing gradient problem. However, advancements such as improved activation functions, normalization layers, and skip connections have significantly mitigated this issue (or so it seems). But is the problem truly solved?

In this blog post, I will explore:
- What components enable training neural networks with more than 1,000 layers?
- Is it possible to train a 10,000-layer Convolutional Neural Network successfully?

Vanishing gradient issue
Before diving into experiments, let's briefly revisit the vanishing gradient problem, a challenge that many sources have already explored in detail. The vanishing gradient problem occurs when the gradients in the early layers of a neural network become extremely small, effectively halting their ability to learn useful features. This issue arises from the chain rule used during backpropagation, where the gradient is propagated backward from the final layer to the first. If the gradient in any layer is close to zero, the gradients of preceding layers shrink exponentially. A major cause of this behavior is the saturation of activation functions.

To illustrate this, I trained a simple 5-layer network using the sigmoid activation function, which is particularly prone to saturation. You can find the code for this experiment on GitHub (a minimal sketch of the setup follows below). The goal was to observe how the gradient norms of the network's weights evolve over time.

Gradient Norms Per Layer (Vanishing Gradient Issue). FC5 is the top layer, FC1 is the first layer. Image by author

The plot above shows the gradient norms for each linear layer over several training iterations. FC5 represents the final layer, while FC1 represents the first.
- In the first training iteration, there's a huge difference in gradient norms between FC5 and FC4, with FC4's being approximately 10x smaller.
- By the time we reach FC1, the gradient is reduced by a factor of ~10,000 compared to FC5, leaving almost nothing of the original gradient to update the weights.
This is a textbook example of the vanishing gradient problem, primarily driven by activation function saturation.
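As a reference point, here is a minimal sketch of such an experiment. It is not the author's exact script (that is on the linked GitHub): the layer width, the dummy data, and a single backward pass are arbitrary choices to show how per-layer gradient norms can be logged.

```python
import torch
import torch.nn as nn

# Five linear layers (FC1-FC5) with sigmoid activations on dummy data,
# printing each layer's weight-gradient norm after a single backward pass.
torch.manual_seed(0)

blocks = []
for _ in range(4):                      # FC1-FC4, each followed by a sigmoid
    blocks += [nn.Linear(128, 128), nn.Sigmoid()]
model = nn.Sequential(*blocks, nn.Linear(128, 10))   # FC5, the output layer

x = torch.randn(64, 128)                # dummy batch
y = torch.randint(0, 10, (64,))         # dummy labels
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()

for name, p in model.named_parameters():
    if name.endswith("weight"):
        print(f"layer {name}: grad norm = {p.grad.norm().item():.2e}")
```

Running this, the gradient norms shrink layer by layer toward the input, mirroring the plot described above.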
Sigmoid activation function and its gradient, with pre-activation values and activation/gradient values overlaid. Image by author

Let's delve deeper into the root cause: the sigmoid activation function. To understand its impact, I analyzed the first layer's pre-activation values (the inputs to the sigmoid). The findings:
- Most pre-activation values lie in the flat regions of the sigmoid curve, resulting in activations close to 0 or 1.
- In these regions, the sigmoid gradient is nearly zero, as shown in the plot above.
- This means that any gradient passed backward through these layers is severely diminished, effectively disappearing by the time it reaches the first layers.

The maximum gradient of the sigmoid function is 0.25, achieved at the midpoint of the curve. Even under ideal conditions, with 5 layers the maximum gradient diminishes to 0.25^5 ≈ 1e-3. This reduction becomes catastrophic for networks with 1,000 layers, rendering the first layers' gradients negligible.

Skip connection. Source: Deep Residual Learning for Image Recognition, Kaiming He

Mitigation of the vanishing gradient issue
Several advancements have been instrumental in addressing the vanishing gradient problem, making it possible to train very deep neural networks. The key components are:

1. Activation functions (e.g., Tanh, ReLU, GeLU). Modern activation functions mitigate vanishing gradients by offering higher maximum gradient values and reducing the regions where the gradient is zero. ReLU (Rectified Linear Unit) has a maximum gradient of 1.0 and eliminates the saturation problem for positive inputs, ensuring gradients remain significant during backpropagation. Other functions, such as GeLU[5] and Swish[6], smooth out the gradient landscape, further improving training stability.

2. Normalization techniques (e.g., BatchNorm[7], LayerNorm[8]). Normalization layers adjust pre-activation values to have a mean close to zero and a consistent variance. This helps in two ways: it reduces the likelihood of pre-activation values entering the saturation regions of activation functions, where gradients are nearly zero, and it keeps activations well-distributed across layers, stabilizing training. BatchNorm[7] normalizes the input to each layer using batch statistics during training, while LayerNorm[8] normalizes across features for each sample, making it more effective in some scenarios.

3. Skip connections (residual connections). Skip connections, introduced in architectures like ResNet[9], allow input signals to bypass one or more intermediate layers by adding the input directly to the layer's output. This addresses the vanishing gradient problem by providing a direct pathway for gradients to flow back to earlier layers without being multiplied by small derivatives or passed through saturating activation functions, preserving gradients even in very deep networks. Because the skip path involves no multiplications or transformations, gradients remain intact, making skip connections a simple yet powerful tool for training ultra-deep networks (a minimal sketch follows below).

Skip connection equation. Image by author
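As an illustration of that last point, a residual block just adds its input back onto the branch output, y = x + F(x). The sketch below assumes a simple pre-norm MLP branch (not any particular paper's block design).

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Minimal residual block: y = x + F(x).
    The identity path gives gradients a route back to earlier layers that
    never passes through a saturating activation or a weight matrix."""
    def __init__(self, dim):
        super().__init__()
        self.f = nn.Sequential(
            nn.LayerNorm(dim),
            nn.Linear(dim, dim),
            nn.GELU(),
            nn.Linear(dim, dim),
        )

    def forward(self, x):
        return x + self.f(x)   # skip connection: input added to the branch output

x = torch.randn(8, 64)
print(ResidualBlock(64)(x).shape)  # torch.Size([8, 64])
```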
Training a 1000-layer network
For this experiment, all training was conducted on the CIFAR-10[10] dataset. The baseline architecture was ConvNext[1], chosen for its scalability and effectiveness in modern vision tasks. To define successful convergence, I used a validation accuracy of >50% (compared to the 10% accuracy of random guessing). Source code is on GitHub, and all runs are available on Wandb. The following parameters were used across all experiments:
- Batch size: 64
- Optimizer: AdamW[11]
- Learning rate scheduler: OneCycleLR

My primary objective was to replicate the findings of the PreResNet paper and investigate how adding more layers impacts training. Starting with a 26-layer network as the baseline, I gradually increased the number of layers, ultimately reaching 1,004 layers. Throughout the training process, I collected statistics on the mean absolute gradient of the first convolutional layer, which allowed me to evaluate how effectively gradients propagated back through the network as the depth increased.

Training 1k-layer experiments. Image by author
Gradient plot for all experiments. Despite the depth, gradients at the first layer are similar in each run. Image by author

Key Observations
- Despite increasing the depth to 1,000 layers, the networks successfully converged, consistently achieving the validation accuracy threshold (>50%).
- The mean absolute gradient of the first layer remained sufficiently large across all tested depths, indicating effective gradient propagation even in the deepest networks.
- The scores of ~94% are weak, as SOTA is ~99%. I couldn't get better scores, leaving room for further investigation.

Training component analysis
Before diving deeper into ultra-deep networks, it's crucial to identify which components most significantly impact the ability to train a 1000-layer network. The candidates are activation functions, normalization layers, and skip connections.

Training component analysis experiments. Image by author
Gradient plot for training component analysis experiments. Image by author

Skip Connections: The Clear Winner
Among all components, skip connections stand out as the most critical factor. Without skip connections, no other modification, whether advanced activation functions or normalization techniques, can sustain training for such deep networks. This confirms that skip connections are the cornerstone of vanishing-gradient mitigation.

Activation Functions: Sigmoid and Tanh Still Competitive
Surprisingly, Sigmoid and Tanh were competitive with modern alternatives like GeLU when accompanied by a normalization layer, and even without LayerNorm, Sigmoid achieved a score comparable to GELU without LayerNorm. The mean gradient is quite similar across all experiments, with Tanh without LayerNorm having the highest mean value but, at the same time, the lowest accuracy.

Mean Gradient Values
The mean gradient values are relatively consistent across experiments, but the gradient trajectories differ. In experiments with LayerNorm, gradients initially rise to approximately 0.5 early in training before steadily declining. In contrast, experiments without LayerNorm exhibit a nearly constant gradient throughout training. Importantly, the gradient remains present in all cases, with no evidence of vanishing gradients in the network's first layer.

Diving Deeper into Skip Connections
Skip connections can be implemented in various ways, the main difference being how the raw input and the transformed output are merged, often controlled by a learnable scaling factor α. In ConvNext, for instance, the LayerScale[12] trick is employed, where the transformed data is scaled by a small learnable α initialized to 1e-6. This has a profound implication: during the initial training stages, most information flows through the skip connections, as the contribution from the transformation branch (matrix multiplications and activation functions) is minimal. As a result, the vanishing gradient issue is effectively bypassed (a minimal sketch of such a block follows below).

Skip connection in ConvNext, with the α symbol. Image by author
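Here is a hedged sketch of a LayerScale-style block. It is an illustration under simplifying assumptions, not ConvNeXt's actual block (which uses depthwise convolutions): the branch output is multiplied by a learnable per-channel α initialized to a small value, so the block starts out close to the identity.

```python
import torch
import torch.nn as nn

class LayerScaleBlock(nn.Module):
    """Skip connection with LayerScale: y = x + alpha * F(x).
    alpha is a learnable per-channel scale; with a tiny initial value the
    block behaves almost like the identity early in training."""
    def __init__(self, dim, alpha_init=1e-6):
        super().__init__()
        self.f = nn.Sequential(
            nn.LayerNorm(dim),
            nn.Linear(dim, 4 * dim),
            nn.GELU(),
            nn.Linear(4 * dim, dim),
        )
        self.alpha = nn.Parameter(alpha_init * torch.ones(dim))

    def forward(self, x):
        return x + self.alpha * self.f(x)   # scaled branch added to the identity path

x = torch.randn(8, 96)
print(LayerScaleBlock(96)(x).shape)          # torch.Size([8, 96])
block_full = LayerScaleBlock(96, alpha_init=1.0)  # branch fully "on" from the start
```

Setting alpha_init to 1.0 turns the whole transformation branch on from the start, which is the kind of variant examined in the experiment below.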
Experiment: Varying LayerScale Initialization
To test whether the initialization of α plays a critical role, I experimented with different starting values for LayerScale. Below is a diagram of a typical skip connection and a table summarizing the results:

Skip connection scale analysis experiments. Image by author

The results show that even with α initialized to 1 (effectively turning on all transformation branches from the start), training a 1000-layer network remained stable. This suggests that while different versions of skip connections may vary slightly in their implementation, all are equally effective at mitigating the vanishing gradient problem.

>1000-layer network
Since we've established that skip connections are the key to training very deep networks, let's push the limits further by experimenting with even deeper architectures. Deeper networks require significantly more computational resources, so I decided to fit the largest possible network that can run on an RTX 4090 with 24 GB of memory.

Fitting the biggest possible network on 24 GB. Image by author

The 1607-layer ConvNext was the biggest one I could fit into GPU memory. There was still no issue with convergence, and the CIFAR-10 results are the same.

Summary
To sum up the key findings:
- The skip connection is the main vanishing-gradient mitigation tool.
- Tanh/Sigmoid are competitive with GELU when used with skip connections and LayerNorm; despite their flat gradient regions, Tanh/Sigmoid work well when accompanied by skip connections and LayerNorm.
- With skip connections, you can try any depth you want; only resources constrain you, no matter which activation function you choose.

If anybody disagrees with that thesis during a recruitment process, send them the link to this blog post, as my experiments show clear evidence!

[1] A ConvNet for the 2020s, Zhuang Liu, CVPR 2022
[3] ImageNet Classification with Deep Convolutional Neural Networks, Alex Krizhevsky, NIPS 2012
[4] Identity Mappings in Deep Residual Networks, Kaiming He, ECCV 2016
[5] Gaussian Error Linear Units, Dan Hendrycks, 2016
[6] Searching for Activation Functions, Prajit Ramachandran, ICLR 2018
[7] Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, Sergey Ioffe, ICML 2015
[8] Layer Normalization, Jimmy Lei Ba, 2016
[9] Deep Residual Learning for Image Recognition, Kaiming He, CVPR 2016
[10] Learning Multiple Layers of Features from Tiny Images, Alex Krizhevsky, 2009
[11] Decoupled Weight Decay Regularization, Ilya Loshchilov, ICLR 2019
[12] Going Deeper with Image Transformers, Hugo Touvron, ICCV 2021

Published via Towards AI
  • WWW.IGN.COM
    Nvidia GeForce RTX 5080: Specs, Release Date and What We Know So Far
After months of agonizing anticipation, Nvidia has finally announced the RTX 5080, along with the rest of the Blackwell lineup, including the RTX 5090, RTX 5070 Ti and RTX 5070, at CES 2025. We'll finally be able to get our hands on the next-generation graphics card on January 30. Until then, Nvidia has revealed the full specs of the card, so we can get a rough idea of what to expect when it makes its way into a gaming PC near you.

Nvidia RTX 5080 release date
The Nvidia GeForce RTX 5080 launches January 30, 2025, along with its bigger sibling, the RTX 5090. Nvidia has also announced the RTX 5070 and RTX 5070 Ti; those don't have a definite release date yet, though we can expect them by March. As for the laptop version of the RTX 5080, Nvidia claims availability will 'start in March', though that is going to largely depend on the laptop manufacturers. It could be April before we see the likes of Alienware, MSI and Asus work the RTX 5080 into their next-generation laptops.

Nvidia RTX 5080 price
When Nvidia unveiled the RTX 5080, it revealed a starting price of $999, with third-party cards likely being much more expensive, depending on how fancy their coolers and features are. While I don't know how likely it'll be to get an RTX 5080 for $999 when it hits the street, it is a significantly lower launch price than the RTX 4080, which launched for $1,199 back in 2022. That's surprising when you consider the RTX 5090 saw a price jump from $1,599 to $1,999, also as a starting price. As for the lower-tier cards, the RTX 5070 Ti will start at $749, with the RTX 5070 starting at $549. Getting a gaming laptop with an RTX 5080 is going to be quite a bit more expensive, of course, as you're buying an entire system instead of a single component. During the keynote at CES, Nvidia claimed systems will start at $2,199, with more premium systems likely getting a substantial price bump. With these gaming laptops, though, keep in mind that they'll be much less performant than the equivalent desktop GPU. My general rule of thumb, without seeing testing, is that the laptop GPU is the equivalent of two tiers down. So, for instance, the RTX 5090 mobile will likely perform at the level of the desktop RTX 5070, with the RTX 5080 likely matching a desktop RTX 5060, even if that card hasn't been announced yet.

Nvidia RTX 5080 specs
The Nvidia GeForce RTX 5080 is built on the Blackwell architecture that Nvidia has been using to power supercomputers for the past year or so. While I'm not lucky enough to test a data center GPU in Cyberpunk, Nvidia is making some lofty claims about the performance of this architecture, especially when it comes to AI performance, which is important for upscaling in modern PC games. This graphics card sports 10,752 CUDA cores across 84 Streaming Multiprocessors (SMs). That's a raw increase over the RTX 4080, which only sported 9,728 shaders. Assuming each Blackwell-based CUDA core has a significant IPC improvement over its last-gen counterpart, this increase in cores could mean significantly better performance. Of course, each SM has more than just CUDA cores. Nvidia hasn't released the chip layout, but assuming Blackwell has a similar layout to Ada Lovelace, each SM should have 4 Tensor Cores, which would make for a total of 336 Tensor Cores. Each SM also features an RT Core, which powers ray tracing. Nvidia is claiming a theoretical 1,801 TOPS of AI performance through the Tensor Cores and 171 teraflops of ray tracing performance through the RT Cores.
Finally, the RTX 5080 sports 16GB of GDDR7 memory on a 256-bit bus. Because the RTX 5000 series is the first line of graphics cards to ever use GDDR7, I have no idea what impact this will have on performance, but it should be much faster than the GDDR6X on the RTX 4080, though only time will tell.

Nvidia RTX 5080 performance
When Jensen Huang took the stage at CES 2025 with his flashy new jacket, he made some lofty claims about RTX 5090 performance, and even claimed that the RTX 5070 would match the RTX 4090. He supported these claims with benchmarks using the new DLSS 4, which coincidentally won't run on RTX 4000 cards, so you should take them with a grain of salt. The truth of the matter is that I have no idea how fast these graphics cards are, and I won't have a clear picture until I get them in the lab to actually test them in a controlled setting. Nvidia also made really lofty claims of gen-on-gen performance when it launched the RTX 4080, and that didn't turn out so well for Team Green. Luckily, with the RTX 5080 launching on January 30, we won't have to wait long to see what they have in store.

Jackie Thomas is the Hardware and Buying Guides Editor at IGN and the PC components queen. You can follow her @Jackiecobra
  • WWW.IGN.COM
    Hands-On with the Lenovo Legion Go S: CES 2025
When Lenovo announced the Legion Go S earlier today at CES, I thought it was just a lightweight version of the existing behemoth of a gaming handheld, and in many ways, the Windows 11 version is exactly that. However, the Lenovo Legion Go S is also available with SteamOS, which makes it $100 cheaper and so much easier to use. It gives us a glimpse of SteamOS's future and how it could become a serious threat to Windows, especially for handheld gaming PCs and gaming laptops.

Lenovo Legion Go S Hands-on Photos

Design
From the images I saw in the press release, I thought the Lenovo Legion Go S would be much smaller than the original Legion Go. I was wrong. The Lenovo Legion Go S feels about the same in my hands as the Asus ROG Ally X, which is currently the best handheld gaming PC on the market right now. Even though it sports a big screen, it still feels comfortable, especially without the knobs and dials of the original device. Instead, the sides of the Lenovo Legion Go S are smooth and rounded, contouring nicely in the hand, and the hatched texture on the grips of the device will probably help prevent accidental drops. The rear side of those grips hides the only "extra" buttons on the device: two paddle-like buttons, one on either side. That's a stark contrast from the Lenovo Legion Go, which had a ton of extra buttons and dials, as the removable controllers were supposed to be used as a stand-in for a mouse. Luckily, the Lenovo Legion Go S retains the touchpad on the front of the device, even if it shrinks it down considerably. On the Windows 11 version of the device, it allows you to navigate the OS easily, though it was disabled on the SteamOS version that I played around with. A Valve representative told me that a fix is in the works, and the little trackpad should be functional when the handhelds make it to market later this month. Also on the front of the device, of course, are the face buttons present on any handheld gaming PC. These all feel nice and tactile, and the analog sticks also have RGB lighting surrounding them, another thing Valve had to build support for in SteamOS for the Go S. But the menu buttons are surprisingly the star of the show. Like with any other handheld out there, there are four menu buttons in total, two on each side of the display. The top button on each side functions as the start button on the right and the 'select' button on the left. Beneath those are the menu buttons that call up either Steam or a quick settings panel. Unlike other handhelds, though, it was incredibly smooth and responsive, with the menus coming up immediately, where something like the ROG Ally might make me wait a second to bring up Armoury Crate, if it even opens Armoury Crate to begin with.

Lenovo Legion Go S Images

On the top of the device, you'll find an outtake vent that spits out hot air, stretching between the two triggers. Luckily, the vent doesn't take up the entire width of the device, with half of that stretch being dedicated to the power button, headphone jack and two USB-C ports. The Lenovo Legion Go S display is an 8" 1200p LCD panel with a 120Hz refresh rate, and it is gorgeous. It's big enough that you'll clearly see anything you're playing and bright enough to use, at least in the brightly lit demo room at CES 2025.
It marks probably the biggest improvement over the Steam Deck, as Valve's handheld is still limited to an 800p display.

Performance
Both the Windows 11 and SteamOS versions of the device are powered by either the recently announced Z2 Go or the current-generation Z1 Extreme. Obviously, I'll need to test it through a suite of games to get a clear picture of its gaming performance, but the games I did play on it had a high frame rate (admittedly, Lenovo didn't exactly stock the thing with the most demanding games). Beyond the APU at the core of the device, the Lenovo Legion Go S also sports 'up to' 32GB of LPDDR5X RAM and 'up to' a 1TB SSD, though I'm not sure how much memory or storage was on the device I actually used at the event. Again, another reason to wait for full reviews before you commit to this handheld. While I don't yet have a clear picture of the real-world performance of the Lenovo Legion Go S, I'm optimistic, especially given the affordable $599 price.

Price and Availability
As far as the Lenovo Legion Go S release date goes, there are currently two launch windows: the high-end spec with the Z1 Extreme will be available later in January running Windows 11 for $729, and the version with the Z2 Go will be available in May, costing $599 for the Windows 11 version and $499 for the SteamOS version.

Jackie Thomas is the Hardware and Buying Guides Editor at IGN and the PC components queen. You can follow her @Jackiecobra
  • WWW.IGN.COM
    SteamOS Will Soon Be Available on Non-Steam Deck Handhelds
Valve will begin allowing third-party handhelds to license SteamOS, starting with Lenovo's new Legion Go S.

At CES 2025, Lenovo announced that it will ship a version of its upcoming Legion Go S handheld, called the "Powered by SteamOS" edition, that will run, you guessed it, SteamOS. This is the same OS that powers Valve's Steam Deck, which is pretty good if you're primarily someone who owns all their PC games on Steam. As part of the launch, Valve published a blog post confirming that alongside the Legion Go S, Valve is working on "a beta of SteamOS which should improve the experience on other handhelds," which users can download and test themselves. That means SteamOS should be available on non-Steam Deck handheld gaming PCs as well.

The benefit of SteamOS, especially for those with massive Steam libraries, is that of the handheld PC interfaces, it is very smooth and easy to navigate. While not mandatory for handheld PC gaming, it has its fans, and soon even non-Steam Deck owners can install it on their own handhelds.

IGN is at CES 2025, where we already previewed the Legion Go S, a lighter version of Lenovo's 2023 handheld, the Legion Go. It comes in a lighter form factor, but with some improvements under the hood. It is not, however, a next-gen handheld. That's the Lenovo Legion Go 2, which will utilize the AMD Z2 Extreme APU and is currently in the prototype phase.

Be sure to stick around for all our updates from CES, and let us know if you're excited to install SteamOS on your non-Steam Deck handheld.

Matt Kim is IGN's Senior Features Editor.
  • 9TO5MAC.COM
    iOS 18.3 beta 2 fixes key Calculator issue introduced last fall
Apple has just released iOS 18.3 beta 2. While we haven't seen a ton of changes in iOS 18.3 so far, the latest beta does bring one fix for the Calculator app that has been asked for often since iOS 18's redesign.

Repeating operations are now possible again following their removal in iOS 18

When Apple released iOS and iPadOS 18 in September, the updates came with big and small changes to a wide variety of system apps. Calculator was among them. Apple brought Calculator to iPad for the first time, but it also took the opportunity to make some changes to the app on iPhone. Not all of those changes have been well received, though. One negative change in particular was the removal of support for repeating operations. Previously, when making a calculation, you could hit the equals button repeatedly to perform further calculations. For example, if you entered 5 x 8 and hit equals, you'd see the answer, 40. But you could then hit equals again to have 40 multiplied by 8 again. That 320 answer could then be multiplied by 8 again with another hit of the equals button. iOS 18 removed that behavior, but now it's been restored in iOS 18.3 beta 2. You can hit the equals button over and over to perform new operations to your heart's content.

Were you missing this Calculator feature? What other iOS 18 changes need to be addressed? Let us know in the comments.
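For the curious, the repeat-equals behavior described above boils down to re-applying the last operator and operand on each press. Here is a toy sketch of that logic (obviously not Apple's code; the function name and structure are made up for illustration):

```python
import operator

# Toy model of the Calculator's repeat-equals behavior: each extra "=" press
# re-applies the last operator and operand to the running result.
def repeat_equals(start, op, operand, presses):
    result, history = start, []
    for _ in range(presses):
        result = op(result, operand)
        history.append(result)
    return history

print(repeat_equals(5, operator.mul, 8, presses=3))  # [40, 320, 2560]
```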
  • WWW.CNET.COM
    Check Out These Mind-Blowing Concept Products From CES 2025
You can't buy these cool concepts yet, but they might eventually land in a store near you.

TCL AI Me, pronounced "Amy," is a concept for a robot companion. Bridget Carey

Never fall in love with a concept product. Whether it's a futuristic car, a sleek TV or a Jetsons-level toy, it's a long shot as to whether the company will ever deliver. CES 2025 is loaded with will-they, won't-they technologies, from the brightest smartwatch our experts have ever seen to an AI-powered robot companion that looks a little like Baby Yoda in his hover-pram. Read on to find out which CES concepts we're dreaming about. CNET is covering CES 2025 in Las Vegas -- not just the concept products shown here, but tech items that are either already in stores or on their way. Here's a look at all the new products that have grabbed our attention, and here's a guide to TVs we saw at CES. Don't miss the $7,000 coffee-making robot, the printer-sized gadget that can boost your phone's battery life in seconds and the battery that's like a Tesla Powerwall for your fridge. Here come those CES concepts -- it may or may not be the last time you ever see them.
  • WWW.CNET.COM
    CES 2025: Amazing Futuristic Tech Concepts to Look Forward To
    The world's biggest consumer tech show lets us imagine a future where these zany dreams come to pass.
  • WWW.CNET.COM
    Have a Capital One Credit Card? The Discover Merger Could Change How You Pay
Capital One announced plans last year to acquire Discover in 2025 for $35.3 billion. The upcoming acquisition isn't final but could spell changes for Capital One cardholders currently on Visa or Mastercard networks. "Capital One has said that it will transition its cards from the Visa and Mastercard payment networks to the Discover card network," said Jason Steele, a CNET Money expert review board member and credit card expert. If you're a Capital One cardholder, this could limit where you can use your credit card.

How a Capital One payment network change could impact you
Discover is one of the four major credit card networks, along with Visa, Mastercard and American Express. Right now, Capital One offers both Visa and Mastercard credit cards but would shift to Discover if the acquisition goes through. Why does this matter? Discover's network is smaller than Visa's and Mastercard's and isn't accepted everywhere in the US or abroad. "While many countries where Discover isn't widely accepted are places that aren't popular tourist destinations, some are," Steele said. For example, Discover isn't accepted in Panama, Kenya or Serbia, according to its website. In countries where Discover is accepted, the rules can vary by store. Steele also added that Discover isn't currently accepted at all major US retailers, such as Costco.

How to prepare for the Capital One and Discover merger
If you depend on your Capital One credit card as your primary payment method, you may want to look for a backup Visa or Mastercard option, especially when traveling abroad. This will ensure you're able to cover a payment if a merchant doesn't accept cards on the Discover network. Although this acquisition isn't a guarantee, experts expect it will pass. If the merger goes through, keep an eye out for updates from either Discover or Capital One about any changes.
  • WWW.YAHOO.COM
    Facebook lifts restrictions on calling women 'property' and transgender people 'freaks'