TechSpot
Tech Enthusiasts - Power Users - IT Professionals - Gamers
Recent updates
  • US cyber defense agency urges developers to eliminate buffer overflow vulnerabilities
    www.techspot.com
    Bottom line: The US Cybersecurity and Infrastructure Security Agency is once again reminding IT manufacturers and developers that buffer overflow vulnerabilities must be eradicated from software. In short, companies need to adopt a "secure by design" policy, and fast.

    CISA has issued a new alert about buffer overflow vulnerabilities, urging the software industry to adopt proper programming practices to eliminate an entire class of dangerous security flaws. Buffer overflow exploits frequently lead to system compromise, CISA warns, posing significant threats to system reliability, data integrity, and overall cybersecurity.

    A buffer overflow occurs when a threat actor can access or write data outside a program's allocated memory space, CISA explained. If hackers manipulate memory beyond a buffer's allocated limits, they can cause data corruption, expose sensitive information, crash systems, or even execute malicious code remotely.

    CISA has warned about buffer overflow vulnerabilities before and is now reiterating its message. The agency highlights real-world examples of these flaws, including vulnerabilities in Windows operating systems (CVE-2025-21333), the Linux kernel (CVE-2022-0185), VPN products (CVE-2023-6549), and various other software environments where executable code is present.

    Software companies can combat the buffer overflow threat by adopting a proper "secure by design" approach when writing their code. In software engineering, "secure by design" means that products and features are built with security as a foundational principle rather than added as an afterthought. However, CISA noted that only a few companies have implemented this approach so far.

    The agency outlined several "secure by design" practices that technical leads should adopt within their organizations. These include using memory-safe programming languages such as Rust or Go, configuring compilers to detect buffer overflow bugs before deployment, and conducting regular product testing.

    CISA, along with other government agencies including the FBI and the NSA, is offering additional resources and reports to help companies mitigate buffer overflow vulnerabilities and other critical security threats. The agency also highlighted three broad "secure by design" principles developed in collaboration with 17 global cybersecurity organizations. These principles emphasize full accountability in the software development process, a "radical" commitment to transparency, and organizational structures designed to prioritize security.
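The contrast CISA draws between memory-unsafe and memory-safe behavior can be sketched in a few lines. This is an illustrative toy, not code from CISA's alert: the unchecked loop mimics what C's `strcpy` does, while the runtime bounds check that stops it stands in for what memory-safe languages such as Rust or Go enforce.

```python
# Illustrative sketch: an unchecked copy in C silently writes past the end of
# a fixed-size buffer and corrupts adjacent memory; a memory-safe runtime
# rejects the same out-of-bounds write instead.

def copy_into_buffer(buffer: bytearray, data: bytes) -> None:
    """Byte-for-byte copy with no length check, mimicking C's strcpy()."""
    for i, value in enumerate(data):
        buffer[i] = value  # bounds-checked: raises IndexError, never overflows

buf = bytearray(8)               # fixed 8-byte buffer
copy_into_buffer(buf, b"short")  # 5 bytes: fits

try:
    copy_into_buffer(buf, b"this input is far too long")
except IndexError:
    print("out-of-bounds write rejected")  # the memory-safe outcome
```

In C, the second call would keep writing past the buffer's end; here the first out-of-bounds index stops it, which is the class of failure CISA wants eliminated by construction.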
  • Breakthrough 3D NAND flash etching technique could turbocharge SSD production
    www.techspot.com
    Freezing edge technology: A new plasma-based etching process could lead to denser data storage in phones, cameras, and computers. Researchers have developed a hydrogen fluoride plasma technique that doubles the etching rate in the manufacturing process of 3D NAND flash memory chips.

    Standard NAND flash storage is used in microSD cards, USB drives, and the solid-state drives in computers and phones. To fit more gigabytes into smaller spaces, manufacturers have begun stacking memory cells vertically in a process called 3D NAND. Advancements in 3D NAND have pushed chip designs beyond 200 layers, with companies like Micron, SK Hynix, and Samsung already eyeing 400-layer technology to increase storage density. However, higher layer counts also bring greater manufacturing complexity. One particularly demanding process is etching, which requires meticulously carving precise holes, layer by layer, through alternating silicon oxide and silicon nitride layers.

    Researchers from Lam Research, the University of Colorado Boulder, and the Princeton Plasma Physics Laboratory (PPPL) have developed a new technique to streamline the process. It uses cryogenic (low-temperature) hydrogen fluoride plasma to etch the holes. In experiments, the etch rate more than doubled, increasing from 310 nanometers per minute with the old method to 640 nm/min with their approach. They also found that the etched holes were cleaner.

    Seeing these benefits, the researchers experimented with adding a few other ingredients to the hydrogen fluoride plasma recipe. Phosphorus trifluoride acted as a nitrous boost for silicon dioxide etching, quadrupling that rate. They also tested ammonium fluorosilicate. The team detailed its findings in a study published in the Journal of Vacuum Science & Technology.

    While some challenges remain, the new technique could overcome a significant manufacturing hurdle. Igor Kaganovich, a principal research physicist at PPPL, pointed out that increasing memory density will be crucial as data demands grow with AI adoption.

    It's too early to say whether this will result in cheaper or denser NAND chips for consumers. The technique still needs to be proven commercially viable and scaled for mass production. Even if manufacturers adopt the process, there's no guarantee that any cost savings will trickle down to consumers.
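The quoted rates translate directly into throughput. A quick back-of-the-envelope check: the 310 and 640 nm/min figures come from the article, while the 10-micrometer hole depth below is an illustrative assumption, not a number from the study.

```python
# Back-of-the-envelope arithmetic on the etch rates quoted above.
OLD_RATE = 310   # nm/min, conventional etch process (from the article)
NEW_RATE = 640   # nm/min, cryogenic hydrogen fluoride plasma (from the article)

depth_nm = 10_000  # hypothetical 10 µm memory-channel hole depth (assumption)

print(f"speed-up: {NEW_RATE / OLD_RATE:.2f}x")  # ~2.06x
print(f"etch time per hole depth: "
      f"{depth_nm / OLD_RATE:.1f} min -> {depth_nm / NEW_RATE:.1f} min")
```

For a stack of that depth, halving the etch time per hole compounds across the millions of channel holes etched per wafer, which is why the rate increase matters for production throughput.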
  • ESA to assess Asteroid 2024 YR4's threat level using James Webb Telescope
    www.techspot.com
    In brief: The James Webb Space Telescope has been "recruited" to take a closer look at asteroid 2024 YR4. The space rock, composed of rock, dust, and possibly other materials, now has a 1-in-48 chance of impacting Earth in December 2032, but we still have time to get ready and brace for impact.

    Discovered on December 27, 2024, asteroid 2024 YR4 is currently rated as a three on the Torino scale. Space agencies around the world are closely monitoring the situation, with the European Space Agency set to use the James Webb Space Telescope to provide a more precise risk assessment. Researchers at the ESA are updating the most relevant data about asteroid 2024 YR4 on a daily basis. Today's assessment confirms that the asteroid has a diameter between 40 and 90 meters and a two percent probability of impacting Earth on December 22, 2032.

    Currently, astronomers studying asteroid 2024 YR4 are limited to using instruments that detect visible light reflected from the Sun. As a general rule, the brighter the asteroid, the larger it is. However, things get complicated if the asteroid has a highly reflective surface: it could either be 40 meters across and very reflective, or 90 meters across and much less reflective. A precise estimate of the asteroid's size will be crucial for properly assessing the threat, as a 90-meter, high-speed body could cause significantly more damage than a 40-meter one.

    Webb will be particularly useful for studying asteroid 2024 YR4. The orbiting observatory operates in the infrared portion of the electromagnetic spectrum, which allows for more accurate estimates of the asteroid's size based on the heat it emits. ESA scientists recently published a paper highlighting the telescope's ability to detect very small bodies (less than 10 meters across) within the asteroid belt, which lies between the orbits of Mars and Jupiter.

    Astronomers will rely on two specific JWST instruments: the Mid-Infrared Instrument (MIRI) and the Near-Infrared Camera (NIRCam). By combining data from the two, ESA scientists hope to obtain more precise measurements of the asteroid's size and position. NIRCam will be particularly useful for tracking the asteroid's position when it is out of reach of Earth-based telescopes.

    The European Space Agency plans to conduct three separate observation campaigns using the JWST. The first will take place in March, when asteroid 2024 YR4 will be at its brightest and within the telescope's range. The second round is scheduled for May, to track changes in the asteroid's temperature. The final round of observations will occur in 2028, aimed at studying the asteroid's orbit around the Sun.
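The 40-versus-90-meter ambiguity falls out of the standard relation between an asteroid's diameter D (in km), its absolute magnitude H, and its geometric albedo p: D = (1329 / sqrt(p)) * 10^(-H/5). The sketch below plugs in an H of about 23.9 for 2024 YR4; treat that value and the two albedo choices as illustrative assumptions rather than figures from the article.

```python
import math

def diameter_m(h_mag: float, albedo: float) -> float:
    """Asteroid diameter in meters implied by absolute magnitude and albedo,
    using the standard relation D_km = 1329 / sqrt(p) * 10**(-H/5)."""
    return 1329.0 / math.sqrt(albedo) * 10 ** (-h_mag / 5) * 1000.0

H = 23.9  # assumed absolute magnitude for 2024 YR4 (illustrative)
print(f"bright surface (p=0.25): {diameter_m(H, 0.25):.0f} m")  # ~44 m
print(f"dark surface   (p=0.05): {diameter_m(H, 0.05):.0f} m")  # ~99 m
```

The same measured brightness is consistent with either a small shiny body or a large dark one, which is exactly the degeneracy Webb's infrared heat measurement can break.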
  • PlayStation 5 sales top 75 million, keeping pace with PS4 and leaving Xbox in the dust
    www.techspot.com
    Editor's take: Sony tightened its grip on this generation's console war, with PlayStation 5 sales soaring past 75 million. Meanwhile, Xbox Series X|S sales remain somewhat stagnant at 28.3 million units. Substantial gaming revenue helped prop up weaker performances in Sony's film and financial services divisions, keeping the company solid.

    Sony's third-quarter earnings for fiscal year 2024 look great, largely thanks to the PlayStation 5's continued success. The PS5 had its best-ever holiday period in 2024, shipping 9.5 million units, only a hair behind the PlayStation 4's best quarter in FY2017. With total shipments topping 75 million, the PS5 is closing in on the 76.5 million units the PS4 had reached at the same stage of its lifecycle. Sony has shipped 15.7 million PS5 consoles so far this fiscal year and aims to hit 18 million by the end of March. This surge in hardware sales has driven the company's growth across the board.

    In fact, Sony posted a solid 18-percent jump in revenue, hitting ¥4.41 trillion ($28.97 billion), with operating income creeping up by a percentage point to ¥469.3 billion ($3.08 billion). As expected, the gaming division is leading the charge, with PlayStation 5 sales continuing to push the momentum, along with boosts from software and PlayStation Network subscriptions. The positive results gave company shares an early-morning boost, rising from $21.97 to $23.91, an 8.8-percent spike.

    Even though the hardware market has been sluggish, the PlayStation 5 is still going strong. Demand is steady, and Sony has blown past expectations with third-party software sales. The PlayStation Network also beat expectations, hitting 129 million monthly active users, up 10 percent from last year. PlayStation Plus growth helped keep recurring revenue from services solid. The shift to live services and digital games is also helping Sony maintain its lead in the gaming sector.

    However, it's not all about gaming. Sony Music is doing well, with streaming revenues climbing and some major albums making waves. Conversely, Sony Pictures took a hit: the Hollywood strikes in 2023 caused production delays and fewer big releases in FY2024, so revenue dropped. Thankfully, streaming and licensing deals helped ease the pain.

    Sony's Imaging & Sensing Solutions segment is holding steady. With smartphone cameras getting more advanced and AI-powered sensors becoming the norm, Sony remains a major player in smartphone supply chains. Unfortunately, its Financial Services division didn't do as well, with a dip in operating income thanks to some of the recent market turbulence.

    Looking ahead, Sony is optimistic. It has raised its forecast for operating profit to ¥1.34 trillion ($8.7 billion) for the fiscal year ending in March 2025. Sony is driving momentum forward with the PlayStation ecosystem in full swing and a lineup of games set to drop soon. Furthermore, the company appears ready to keep pushing into live-service games, cloud gaming, and acquisitions to stay ahead of the curve. If everything goes according to plan, PlayStation will keep running the show through at least this generation.
  • Europe enters the AI race with $207 billion InvestAI initiative
    www.techspot.com
    Editor's take: The new US administration recently announced an unprecedented investment to build the world's largest AI infrastructure project. Meanwhile, China shocked Wall Street and Fortune 500 companies by unveiling the controversial DeepSeek chatbot. Now, Europe is joining the race to burst the AI bubble with its aptly named InvestAI plan.

    Europe has its own "Stargate" program aimed at developing significantly more powerful AI capabilities in the coming years. At the AI Action Summit in Paris, European Commission President Ursula von der Leyen officially unveiled the InvestAI initiative. European authorities plan to allocate, or rather "mobilize," up to €200 billion ($207 billion) in an unprecedented funding effort to develop EU-focused AI technologies and machine learning models.

    Like many global leaders today, von der Leyen expressed enthusiasm for AI and its potential to revolutionize nearly every sector. She emphasized that AI-driven services will enhance healthcare, accelerate scientific research and innovation, and strengthen Europe's global competitiveness. According to von der Leyen, Europe will contribute to this technological frontier with an approach centered on openness, collaboration, and a deep pool of research talent.

    Von der Leyen also highlighted that InvestAI will leverage the same public-private partnership model that created CERN, the birthplace of the World Wide Web. Her optimism is shared by European Investment Bank President Nadia Calviño, who stated that AI will play a crucial role in driving innovation and productivity across Europe.

    The European Commission plans to build several new "AI gigafactories" across Europe: massive data centers dedicated to AI training and inference. Each facility will house approximately 100,000 "latest-generation" AI accelerators, nearly four times the number of AI chips used in the AI factories currently under construction. Previously, European authorities announced a $10 billion plan to build seven AI factories, with five additional plants set to be unveiled soon.

    InvestAI will employ a multi-layered funding model, with contributions from institutional partners, member states, and existing EU funding programs. Additionally, Brussels is supporting AI-driven innovation across sectors such as robotics, healthcare, biotech, and climate technology through the GenAI4EU initiative.

    Europe's InvestAI plan follows Donald Trump's announcement of the $500 billion "Stargate" initiative, aimed at building next-generation AI infrastructure in the US. Meanwhile, China is advancing its own AI ambitions with the DeepSeek model, which some speculate could drastically reduce AI development costs, or perhaps not. Von der Leyen emphasized that the race to develop bigger and more powerful AI systems is "far from over," and the EU is determined to accelerate its progress to remain competitive on the global stage.
  • Honda and Nissan scrap merger plans as talks break down
    www.techspot.com
    In a nutshell: Honda and Nissan have agreed to end talks on a potential merger that would have created the world's third-largest automotive group behind Volkswagen and Toyota, in that order. Unsurprisingly, the two sides couldn't come to terms on the structure of the combined company.

    Honda and Nissan took seats at the negotiating table in December 2024 to try to hammer out the details of what would have been a roughly $60 billion deal. Nissan's fortunes took a turn for the worse in early 2018, and many believed a merger between the two Japanese auto giants would have greatly improved their chances of mounting an offensive in the budding EV market.

    Several strategies were floated during negotiations. At one point, Honda proposed changing the structure of the deal, going from a joint holding company in which it would appoint the majority of directors and executives to a plan in which Nissan would be a subsidiary of parent company Honda.

    Reuters noted several additional factors that could have contributed to the inability to get a deal done, including Nissan's pride and its denial of its true position in the market. Nissan also has ties with Mitsubishi and Renault, which could have further complicated matters. Honda's subsidiary plan also reportedly rubbed some Nissan executives the wrong way, sources said.

    Ultimately, both sides agreed that in order to prioritize speedy decision-making in the era of electrification, it would be best to cease discussions and terminate the memorandum of understanding signed back in December.

    Shares in Nissan are up just over four percent on the day as of writing, while Honda stock is up close to two percent. Nissan, meanwhile, announced immediate measures to try to turn things around through a restructuring aimed at reducing costs by roughly 400 billion yen in fiscal year 2026. Honda recently opened pre-orders for its Afeela EV in collaboration with Sony and announced plans to bring back the beloved Prelude as a hybrid.

    Image credit: TopSphere Media, John Cameron
  • www.techspot.com
    In a nutshell: A recent blog post by software engineer Paul Butler has shed light on a novel technique for concealing data within Unicode characters, specifically emojis. The post explains the concept and its potential for misuse, and provides a tool to experiment with the method.

    The concept revolves around Unicode's system of representing text as a sequence of codepoints, with each codepoint being a number assigned meaning by the Unicode Consortium. While most users are familiar with the one-to-one mapping between codepoints and visible characters in Latin-based scripts, the situation becomes more intricate in other writing systems, where multiple codepoints may represent a single on-screen character.

    The key to this data-encoding method lies in Unicode's "variation selectors." These 256 special codepoints, labeled VS-1 through VS-256, have no visible representation but can modify the presentation of the preceding character. Most Unicode characters have no associated variations, but the Unicode standard mandates that these selectors be preserved during text transformations, even if their meaning is unknown to the processing software.

    This preservation characteristic opens the door to a clever encoding scheme. Since there are 256 variation selectors, a single selector can represent one byte of data, making it possible to "hide" a byte after any Unicode codepoint. Taking the concept further, concatenating multiple variation selectors can represent any arbitrary byte string, effectively encoding unlimited data within a single visible character.

    While this discovery presents fascinating possibilities, it raises serious concerns about misuse. Hackers could exploit the method to bypass human content filters. Since the encoded data is invisible once rendered, moderators wouldn't detect its presence, allowing malicious actors to slip harmful or prohibited content past moderation systems.

    The technique also has the potential to watermark information. Encoding data in variation selectors allows the originator to mark otherwise identical messages for different recipients. If a message leaked, the sender could trace the text back to a specific recipient, raising serious privacy and whistleblower-protection concerns.

    Butler also explored the impact of this encoding method on large language models (LLMs). Initial findings suggest that while tokenizers generally preserve variation selectors as tokens, most models seem reluctant to decode them internally. However, when paired with a code interpreter, some advanced models have demonstrated the capability to solve these hidden-data puzzles.

    Finally, Butler created an encoder/decoder that lets users hide arbitrary data within Unicode characters, particularly emojis. Users can input text, which the tool encodes into any Unicode character, including emojis. The resulting character appears normal to the eye but contains hidden data that anyone can extract with Butler's tool. He posted the encoder online for anyone who'd like to experiment with it.
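The scheme described above can be sketched in a few lines. The mapping below follows where Unicode actually places its variation selectors (VS-1 through VS-16 at U+FE00..U+FE0F, VS-17 through VS-256 at U+E0100..U+E01EF), giving exactly 256 selectors, one per byte value; whether Butler's own tool uses precisely this mapping is an assumption.

```python
# Hide arbitrary bytes after a visible character using Unicode variation
# selectors: 256 invisible codepoints, one per possible byte value.

def encode(base: str, data: bytes) -> str:
    """Append one invisible variation selector per payload byte."""
    out = base
    for b in data:
        # VS-1..VS-16 encode bytes 0..15; VS-17..VS-256 encode bytes 16..255.
        out += chr(0xFE00 + b) if b < 16 else chr(0xE0100 + (b - 16))
    return out

def decode(text: str) -> bytes:
    """Recover the hidden bytes from any variation selectors in the text."""
    payload = []
    for ch in text:
        cp = ord(ch)
        if 0xFE00 <= cp <= 0xFE0F:          # VS-1..VS-16
            payload.append(cp - 0xFE00)
        elif 0xE0100 <= cp <= 0xE01EF:      # VS-17..VS-256
            payload.append(cp - 0xE0100 + 16)
    return bytes(payload)

stego = encode("😀", b"hidden message")
print(stego)          # renders as a plain emoji; the payload is invisible
print(decode(stego))  # b'hidden message'
```

Because the selectors survive copy-paste and most text transformations, the emoji carries its payload wherever the text goes, which is exactly what makes the technique useful for both watermarking and filter evasion.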
  • AI adoption is increasing, with 3 in 5 Americans saying it improves their lives
    www.techspot.com
    In a nutshell: Artificial intelligence, and not just the generative kind, has never been more prevalent than it is today. Many people's views of the technology still range from skepticism to outright hostility, yet a recent survey shows that 3 in 5 US participants believe AI has improved their lives. It also revealed the states that rely most and least on AI.

    The Listening App, which specializes in using AI to turn text into speech, surveyed Americans in various large cities across the country to learn more about their usage of AI. Questions covered how frequently respondents use AI, which tools they use, what they use them for, and more. The answers were then used to create a reliance score from 0 to 100, with 100 representing the populations most reliant on AI tools.

    Some of the main takeaways include 60% of participants using AI tools or apps at least once a week. One in two have used AI to support their work, and nearly two thirds say their use of AI tools has increased in the past year. Probably the two biggest findings are that 1 in 6 people said they have become dependent on AI in some way, while 3 in 5 said it has improved the quality of their daily life.

    With a score of 99.4, Oregon is the state most reliant on AI, followed by Florida (98.6) and Arizona (94.6). The state least reliant on AI was Missouri (70.8), followed by Mississippi (73.5) and Rhode Island (75.1).

    While the survey covers all types of AI, ChatGPT proved the most widely used tool, with 77.9% saying they had used it. Google Translate was second with 44.8%, followed by Gemini (33.2%), Canva (28.5%), Grammarly (25.3%), and Copilot (22.2%). As for their purposes, most people (62.7%) use AI for writing and editing. Online searching was a close second at 61.4%, followed by summarizing text (42.7%), brainstorming (39%), and generative art (32%).

    There are some important caveats to remember here: the survey was carried out by an AI company, and it doesn't reveal how many people took part. Still, the technology's role in society, for better and for worse, cannot be overstated.
  • Breakthrough brings fiber optics to quantum computing, improving efficiency and reducing heat generation
    www.techspot.com
    In context: Quantum computers are all about qubits, the basic units that operate according to the principles of quantum mechanics instead of the zeros and ones of today's computers. They promise incredible calculation speeds for certain problems, but they're extremely finicky. The slightest bit of heat or electromagnetic disturbance can disrupt their delicate quantum states.

    Quantum computers run at temperatures just a hair above absolute zero, and keeping them humming along at these temps requires massive, multi-million-dollar cooling systems known as dilution refrigerators. Researchers at the Institute of Science and Technology Austria (ISTA) have made a breakthrough that could significantly reduce the cost of these computers by removing one of the main sources of heat.

    Electrical signals in these computers travel through wires, which generate heat due to resistance. With millions of signals constantly pinging the qubits, that heat builds up fast, forcing bigger and pricier cooling rigs. The researchers replaced these electrical connections with fiber optic cables, which transmit signals using light instead of electricity. Fiber optics are effectively heat-free and have other advantages, such as higher bandwidth and less electromagnetic interference.

    However, there's a catch: qubits can't directly process optical signals. So the ISTA team used a clever electro-optical transducer to convert the optical signals into microwaves that the qubits can understand, and vice versa.

    Georg Arnold, co-lead author of the study published in Nature Physics, said that the new approach might allow them to increase the number of usable qubits so they become useful for real computation. He also stated that it sets the stage for networking multiple quantum computers over fiber optic links at room temperature. The technology removes a lot of performance-limiting electronics, too.

    That said, the new method is still just a prototype with lots of room for improvement. But it represents a critical first step toward quantum systems that don't require super-cooling every component, which could make them vastly more practical and affordable to build and operate at serious scales. Of course, a truly useful large-scale quantum computer is probably still a few decades away, but breakthroughs like these bring us a little closer.

    Masthead credit: ISTA
  • www.techspot.com
    WTF?! Scarlett Johansson, who has previously spoken out against the misuse of AI, is now calling for the US government to pass legislation that protects people against the practice. It comes after the Black Widow actress and several other celebrities appeared in an AI-generated video that went viral.

    The video in question was created by an Instagram user who calls themselves a generative-AI expert in their bio. It is a response to Kanye West's Super Bowl ad, in which the rapper bought ad space during the game to drive traffic to his website. After the ad aired, West turned the site into a storefront containing a single item: a white T-shirt with a swastika. Shopify removed the storefront soon after it went live.

    The response video, posted by Ori Bejerano (@oribejerano_ai), features AI-generated versions of Johansson, Drake, Natalie Portman, Jerry Seinfeld, Steven Spielberg, Sam Altman, Mark Zuckerberg, Woody Allen, and several other Jewish celebrities. They are all wearing white T-shirts showing a cartoon hand with a Star of David in the middle, raising the middle finger. The word "Kanye" is written underneath.

    Like most modern generative AI videos, this one has convinced many people that the real celebrities are taking part. It has the usual signs of AI work, such as blurriness, occasional weird hands, and uncanny valley faces, but not everyone will notice them. The video ends with "Enough is Enough" and "Join the Fight Against Antisemitism."

    Johansson said in her statement that while she has no tolerance for antisemitism or hate speech of any kind, she does "firmly believe that the potential for hate speech multiplied by AI is a far greater threat than any one person who takes accountability for it. We must call out the misuse of AI, no matter its messaging, or we risk losing a hold on reality."

    "I have unfortunately been a very public victim of AI, but the truth is that the threat of AI affects each and every one of us," she added.

    Back in 2019, Johansson slammed deepfaked porn videos that superimposed her face onto adult actresses' bodies. The Avengers star appeared in many of these videos, including one that had been viewed over 1.5 million times. "Nothing can stop someone from cutting and pasting my image or anyone else's onto a different body and making it look as eerily realistic as desired," she said. "The fact is that trying to protect yourself from the Internet and its depravity is basically a lost cause [...] The Internet is a vast wormhole of darkness that eats itself."

    Johansson also had a run-in with OpenAI last year, when the GPT-4o model featured a voice assistant, Sky, that sounded a lot like her. Johansson said OpenAI had approached her nine months earlier to voice Sky, but she declined. She added that she had been "forced to hire legal counsel" as a result of the similarities.

    "There is a 1,000-foot wave coming regarding AI that several progressive countries, not including the United States, have responded to in a responsible manner. It is terrifying that the U.S. government is paralyzed when it comes to passing legislation that protects all of its citizens against the imminent dangers of AI," Johansson said in her recent statement. "I urge the U.S. government to make the passing of legislation limiting AI use a top priority; it is a bipartisan issue that enormously affects the immediate future of humanity at large."
  • Student turns a PDF into a functional Linux emulator
    www.techspot.com
    Recap: Early last month, someone used the PDF format's JavaScript support to run Tetris inside what should normally be a static text document. Predictably, within days, a high school student upgraded the hack to run Doom within a PDF file. The same developer has now enhanced the code to run the Linux operating system.

    Barely a month after unveiling a port of Doom running inside a PDF, high school student and programmer "Ading2210" has successfully emulated Linux within the popular file format. Although performance is limited, the project redefines what's possible with PDF JavaScript tools. Users can try it in Chromium-based browsers like Chrome, Edge, and Opera, and the source code is available on the developer's GitHub page.

    LinuxPDF runs in a RISC-V emulator based on TinyEMU. Its inner workings closely resemble those of Ading2210's DoomPDF. For example, the inputs repeat the trick pioneered by the earlier Tetris PDF hack, reusing the Doom port's code. Users can click on virtual keys below the main screen, but most will likely prefer direct keyboard controls, which work by interpreting inputs typed into a text field.

    Although the PDF format was primarily designed to display text and images, it can also run JavaScript code. Adobe Acrobat includes the entire JavaScript specification, enabling features like 3D rendering, monitor detection, and HTTP requests. PDFs running in browsers use a more limited version, but it's good enough to run games and operating systems. Ading2210 discovered that an old version of Emscripten that targets asm.js instead of WebAssembly can compile C code to run within the file format.

    Like DoomPDF, the Linux emulation suffers from slow performance. Booting the kernel takes up to a full minute, about 100 times longer than on a traditional Linux system. According to Ading2210, this unfortunately cannot be fixed, because Chromium uses a version of V8 that doesn't support the JIT compiler.

    The file system is 32-bit by default. However, users can build a 64-bit version from the source code by cloning the repository within a real Linux system, editing the "BITS" line, and downloading Emscripten version 1.39.20. Sadly, running the 64-bit version doubles the performance deficit.

    Users interested in a more practical Linux application for low-end hardware can try Ading2210's ChromeOS RMA Shim Bootloader. The script collection allows a full Debian distro to run on a Chromebook without modifying the firmware. The project also supports enrolled enterprise devices.
  • www.techspot.com
    A hot potato: Bobby Kotick, the former Activision Blizzard CEO who gamers crowned the most hated figure in the industry, has given a lengthy interview that's unlikely to improve his public image. Kotick calls the many harassment lawsuits against his ex-company "fake" and planned by a union to increase its membership. He also says the acquisition of Project Gotham Racing studio Bizarre Creations was a bad decision, labeled one CEO the worst in the industry, and slammed the Warcraft movie. Kotick made his comments during an interview on Kleiner Perkins' Grit podcast. Activision Blizzard faced several lawsuits and investigations after the California Department of Fair Employment and Housing (DFEH) sued the company over allegations of a toxic workplace culture, widespread sexual harassment, discrimination against women, and an environment described as having a "frat boy" culture.A Wall Street Journal report claimed Kotick was aware of the allegations "for years" but failed to do anything or even tell the board. In response, Activision Blizzard staff launched a petition demanding he step down.When asked about the lawsuits and petition, Kotick said, "That was fake.""I can tell you exactly what happened," Kotick continued. "The Communication Workers of America [CWA] union started looking at technology. They kept losing because they represented the News Guild, Comcast, and they realized they were losing members at a really dramatic rate, so they gotta figure out: how do they get new union members? So they first targeted a bunch of different businesses - Google, some other tech companies, Tesla and SpaceX, and us.""It's the power of unions," Kotick said. "I didn't really understand this until we went through this process. 
They were able to get a government agency, the EEOC [Equal Employment Opportunity Commission] and a state employment agency called the Department of Fair Employment and Housing [DFEH], to file fake lawsuits against us and Riot Games making allegations about the workplace that didn't... weren't true, but they were able to do this."

Activision paid $54 million in 2023 to settle a lawsuit brought by the California Civil Rights Department (CRD) over accusations of widespread gender and pay inequality. The sexual harassment and discrimination suit was settled for $18 million in 2022. Meanwhile, Riot Games paid $100 million in 2022 to settle a similar lawsuit.

Activision also had to pay the Securities and Exchange Commission (SEC) $35 million in 2023 for failing to disclose workplace harassment issues to investors and violating whistleblower protection rules.

"They're [the CWA union] so clever," Kotick said. "They realized that would be a thing that they then could come into a company - because we pay well, we have great benefits, great working environment - and they could say, 'hey, the culture is bad,' 'people are harassed,' 'they're retaliated against,' or 'there's discrimination.'"

"They came up with this plan, hired a PR firm, and they started attacking our company. They got these two agencies to file these lawsuits to claim there was some sexual harassment... We didn't have any of that. Ultimately they had to admit that this was not truthful and withdraw the complaints."

Kotick claims he fired people "on the spot" if he was made aware of inappropriate conduct in the workplace.

The ABetterABK workers group, which supported many Activision Blizzard employees during the lawsuits, responded to Kotick's comments.
It stated, "The executives of our company did not protect us and often made the situation worse or directly perpetuated the harm."

"The trauma, discrimination, and abuse that our coworkers and former coworkers endured is not fake or a 'plan to drive union membership'," the group added. "Our unions were born from the very real and harmful way executives reacted when made aware of these situations."

Elsewhere, Kotick said ex-Electronic Arts and Unity CEO John Riccitiello was the worst CEO in the video game industry, an opinion he seems to base on EA's financial performance during Riccitiello's tenure from April 2007 to March 2013.

Kotick also said that Activision's decision to buy Bizarre Creations, maker of Project Gotham Racing, for $67.4 million in 2007 was a bad one. The studio released Geometry Wars: Retro Evolved 2, Blur, and James Bond 007: Blood Stone after being acquired. Activision announced in 2010 that it was closing Bizarre Creations.

Kotick not only failed to remember the name of the studio "that did the driving game for Xbox," he also got its location wrong, saying it was in Manchester instead of Liverpool. At least he got the right country.

It also appears that Kotick is not a fan of the 2016 Warcraft adaptation, calling it one of the worst movies he's ever seen. He said it was a distraction that impacted the development of the WoW game and one of the reasons veteran designer Chris Metzen left the company in 2016.

"Our expansions were late. You know, patches weren't getting done on time. And the movie was terr... it was one of the worst movies I've ever seen."

Warcraft made just $47 million in the US but managed to generate $439 million worldwide, mostly thanks to its popularity in China, making it the highest-grossing film based on a video game at the time. That still wasn't enough to break even, as its production, marketing, and distribution costs reached around $450 million to $500 million. Reviews were mixed, but it was certainly better than many other video game adaptations, especially those made by Uwe Boll.
  • How CPUs are Designed and Built: Fundamentals of Computer Architecture
    www.techspot.com
We all think of the CPU as the "brains" of a computer, but what does that actually mean? What is going on inside with the billions of transistors that make your computer work? In this four-part series, we'll be focusing on computer hardware design, covering the ins and outs of what makes a computer function.

The series will cover computer architecture, processor circuit design, VLSI (very-large-scale integration), chip fabrication, and future trends in computing. If you've always been interested in the details of how processors work on the inside, stick around: this is what you need to know to get started.

Part 2: CPU Design Process (schematics, transistors, logic gates, clocking)
Part 3: Laying Out and Physically Building the Chip (VLSI and silicon fabrication)
Part 4: Current Trends and Future Hot Topics in Computer Architecture (Sea of Accelerators, 3D integration, FPGAs, Near Memory Computing)

What Does a CPU Actually Do?

Let's start at a very high level with what a processor does and how the building blocks come together in a functioning design. This includes processor cores, the memory hierarchy, branch prediction, and more. First, we need a basic definition of what a CPU does.

The simplest explanation is that a CPU follows a set of instructions to perform some operation on a set of inputs. For example, this could be reading a value from memory, adding it to another value, and finally storing the result back in memory at a different location. It could also be something more complex, like dividing two numbers if the result of the previous calculation was greater than zero.

When you want to run a program like an operating system or a game, the program itself is a series of instructions for the CPU to execute. These instructions are loaded from memory, and on a simple processor, they are executed one by one until the program is finished.
While software developers write their programs in high-level languages like C++ or Python, the processor can't understand that. It only understands 1s and 0s, so we need a way to represent code in this format.

The Basics of CPU Instructions

Programs are compiled into a set of low-level instructions called assembly language as part of an Instruction Set Architecture (ISA). This is the set of instructions that the CPU is built to understand and execute. Some of the most common ISAs are x86, MIPS, ARM, RISC-V, and PowerPC. Just like the syntax for writing a function in C++ is different from a function that does the same thing in Python, each ISA has its own syntax.

These ISAs can be broken up into two main categories: fixed-length and variable-length. The RISC-V ISA uses fixed-length instructions, which means a certain predefined number of bits in each instruction determines what type of instruction it is. This is different from x86, which uses variable-length instructions. In x86, instructions can be encoded in different ways and with different numbers of bits for different parts. Because of this complexity, the instruction decoder in x86 CPUs is typically the most complex part of the entire design.

Fixed-length instructions allow for easier decoding due to their regular structure but limit the total number of instructions an ISA can support. While the common versions of the RISC-V architecture have about 100 instructions and are open-source, x86 is proprietary, and nobody really knows how many instructions exist. People generally believe there are a few thousand x86 instructions, but the exact number isn't public. Despite differences among the ISAs, they all carry essentially the same core functionality.

Example of some of the RISC-V instructions. The opcode on the right is 7 bits and determines the type of instruction. Each instruction also contains bits for which registers to use and which functions to perform.
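Fixed-length decoding is easy to see in code. Here is a minimal Python sketch (an illustration, not part of the article) that pulls the fixed fields out of a 32-bit RISC-V R-type instruction word; the example value 0x002081B3 is the standard encoding of `add x3, x1, x2`:

```python
def decode_rtype(instr: int) -> dict:
    """Split a 32-bit RISC-V R-type instruction into its fixed fields."""
    return {
        "opcode": instr & 0x7F,          # bits 6:0  - selects the instruction type
        "rd":     (instr >> 7) & 0x1F,   # bits 11:7 - destination register
        "funct3": (instr >> 12) & 0x7,   # bits 14:12 - sub-operation selector
        "rs1":    (instr >> 15) & 0x1F,  # bits 19:15 - first source register
        "rs2":    (instr >> 20) & 0x1F,  # bits 24:20 - second source register
        "funct7": (instr >> 25) & 0x7F,  # bits 31:25 - further sub-operation bits
    }

fields = decode_rtype(0x002081B3)  # encoding of "add x3, x1, x2"
print(fields)
```

Because these bit offsets never change, a hardware decoder is essentially a handful of parallel wire taps. With x86's variable-length encoding, the decoder must first work out how long each instruction even is, which is part of why that decoder is so complex.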
This is how assembly instructions are broken down into binary for a CPU to understand. Now we are ready to turn our computer on and start running stuff. Execution of an instruction actually has several basic parts that are broken down through the many stages of a processor.

Fetch, Decode, Execute: The CPU Execution Cycle

The first step is to fetch the instruction from memory into the CPU to begin execution. In the second step, the instruction is decoded so the CPU can figure out what type of instruction it is. There are many types, including arithmetic instructions, branch instructions, and memory instructions. Once the CPU knows what type of instruction it is executing, the operands for the instruction are collected from memory or internal registers in the CPU. If you want to add number A to number B, you can't do the addition until you actually know the values of A and B. Most modern processors are 64-bit, which means that the size of each data value is 64 bits.

64-bit refers to the width of a CPU register, data path, and/or memory address. For everyday users, that means how much information a computer can handle at a time, and it is best understood against its smaller architectural cousin, 32-bit. The 64-bit architecture can handle twice as much information at a time (64 bits versus 32).

After the CPU has the operands for the instruction, it moves to the execute stage, where the operation is done on the input. This could be adding the numbers, performing a logical manipulation on the numbers, or just passing the numbers through without modifying them. After the result is calculated, memory may need to be accessed to store the result, or the CPU could just keep the value in one of its internal registers.
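The fetch, decode, execute, and write-back steps can be condensed into a toy interpreter loop. This sketch models a hypothetical accumulator machine, not any real ISA, but the handling of each instruction mirrors the stages just described:

```python
# A toy accumulator machine: each instruction is an (opcode, operand) pair.
def run(program, memory):
    pc, acc = 0, 0                  # program counter and accumulator register
    while pc < len(program):
        op, arg = program[pc]       # fetch the next instruction (and trivially decode it)
        if op == "LOAD":            # collect an operand from memory
            acc = memory[arg]
        elif op == "ADD":           # execute: arithmetic on the operands
            acc += memory[arg]
        elif op == "STORE":         # write the result back to memory
            memory[arg] = acc
        pc += 1                     # update state and move on to the next instruction
    return memory

mem = {0: 5, 1: 7, 2: 0}
run([("LOAD", 0), ("ADD", 1), ("STORE", 2)], mem)  # computes mem[0] + mem[1]
print(mem[2])
```

A real CPU does all of this in hardware, with each step handled by a dedicated pipeline stage rather than a sequential loop, but the logical flow per instruction is the same.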
After the result is stored, the CPU will update the state of various elements and move on to the next instruction.

This description is, of course, a huge simplification, and most modern processors will break these few stages up into 20 or more smaller stages to improve efficiency. That means that although the processor will start and finish several instructions each cycle, it may take 20 or more cycles for any one instruction to complete from start to finish. This model is typically called a pipeline: like a pipe, it takes a while for liquid to flow all the way through, but once it's full, you get a constant output.

Example of a 4-stage pipeline. The colored boxes represent instructions independent of each other. Image credit: Wikipedia

Out-of-Order Execution and Superscalar Architecture

The whole cycle that an instruction goes through is a very tightly choreographed process, but not all instructions may finish at the same time. For example, addition is very fast, while division or loading from memory may take hundreds of cycles. Rather than stalling the entire processor while one slow instruction finishes, most modern processors execute out-of-order.

That means they will determine which instruction would be the most beneficial to execute at a given time and buffer other instructions that aren't ready. If the current instruction isn't ready yet, the processor may jump forward in the code to see if anything else is ready.

In addition to out-of-order execution, typical modern processors employ what is called a superscalar architecture. This means that at any one time, the processor is executing many instructions at once in each stage of the pipeline. It may also be waiting on hundreds more to begin their execution.
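The payoff of pipelining described above is easy to quantify under idealized assumptions (no stalls, no mispredictions): once the pipeline is full, one instruction completes every cycle. A quick sketch:

```python
def total_cycles(n_instructions: int, stages: int) -> int:
    """Cycles for an ideal pipeline: fill it once, then one completion per cycle."""
    return stages + (n_instructions - 1)

# Without pipelining, each instruction occupies the whole datapath in turn:
unpipelined = 1000 * 4               # 1000 four-cycle instructions back to back
pipelined = total_cycles(1000, 4)    # same work with the stages overlapped
print(unpipelined, pipelined)
```

With 1000 instructions on a 4-stage pipeline, the idealized model drops from 4000 cycles to 1003, approaching one instruction per cycle. Real pipelines fall short of this because of the dependencies, cache misses, and branch mispredictions discussed next.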
In order to execute many instructions at once, processors will have several copies of each pipeline stage inside. If a processor sees that two instructions are ready to be executed and there is no dependency between them, rather than wait for them to finish separately, it will execute them both at the same time. One common implementation of this is called Simultaneous Multithreading (SMT), also known as Hyper-Threading. Intel and AMD processors usually support two-way SMT, while IBM has developed chips that support up to eight-way SMT.

To accomplish this carefully choreographed execution, a processor has many extra elements in addition to the basic core. There are hundreds of individual modules in a processor that each serve a specific purpose, but we'll just go over the basics. The two biggest and most beneficial are the caches and the branch predictor. Additional structures that we won't cover include things like reorder buffers, register alias tables, and reservation stations.

Caches: Speeding Up Memory Access

The purpose of caches can often be confusing since they store data just like RAM or an SSD. What sets caches apart, though, is their access latency and speed. Even though RAM is extremely fast, it is orders of magnitude too slow for a CPU. It may take hundreds of cycles for RAM to respond with data, and the processor would be stuck with nothing to do. If the data isn't in RAM, it can take tens of thousands of cycles for data on an SSD to be accessed. Without caches, our processors would grind to a halt.

Processors typically have three levels of cache that form what is known as a memory hierarchy. The L1 cache is the smallest and fastest, the L2 is in the middle, and L3 is the largest and slowest of the caches. Above the caches in the hierarchy are small registers that store a single data value during computation. These registers are the fastest storage devices in your system by orders of magnitude.
When a compiler transforms a high-level program into assembly language, it determines the best way to utilize these registers.

When the CPU requests data from memory, it first checks to see if that data is already stored in the L1 cache. If it is, the data can be quickly accessed in just a few cycles. If it is not present, the CPU will check the L2 and subsequently search the L3 cache. The caches are implemented in a way that they are generally transparent to the core. The core will just ask for some data at a specified memory address, and whatever level in the hierarchy has it will respond. As we move to subsequent stages in the memory hierarchy, the size and latency typically increase by orders of magnitude. At the end, if the CPU can't find the data it is looking for in any of the caches, only then will it go to the main memory (RAM).

On a typical processor, each core will have two L1 caches: one for data and one for instructions. The L1 caches are typically around 100 kilobytes total, and size may vary depending on the chip and generation. There is also typically an L2 cache for each core, although it may be shared between two cores in some architectures. The L2 caches are usually a few hundred kilobytes. Finally, there is a single L3 cache that is shared between all the cores and is on the order of tens of megabytes.

When a processor is executing code, the instructions and data values that it uses most often will get cached. This significantly speeds up execution since the processor does not have to constantly go to main memory for the data it needs.
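That top-down lookup can be sketched as a toy model. The sizes, addresses, and latencies below are illustrative assumptions, not any specific chip's numbers, and for simplicity the cost of checking the levels that miss is ignored:

```python
# Toy memory hierarchy: each level is (name, latency_in_cycles, addresses_held).
# A real cache stores lines with tags; here we only model "which level has the data".
HIERARCHY = [
    ("L1", 4, {0x1000}),
    ("L2", 12, {0x1000, 0x2000}),
    ("L3", 40, {0x1000, 0x2000, 0x3000}),
    ("RAM", 200, None),  # main memory backs every address
]

def access(addr: int) -> tuple[str, int]:
    """Walk the hierarchy top-down; the first level holding the address answers."""
    for name, latency, contents in HIERARCHY:
        if contents is None or addr in contents:
            return name, latency
    raise AssertionError("unreachable: RAM always hits")

print(access(0x1000))  # served from L1 in a few cycles
print(access(0x9999))  # found nowhere in the caches, so RAM answers
```

The core asking for data never needs to know which level answered; it just sees the request take 4 cycles or 200. That transparency is what lets the hierarchy grow deeper (as with stacked caches below) without changing the software.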
We will talk more about how these memory systems are actually implemented in the second and third installments of this series.

Also of note, while the three-level cache hierarchy (L1, L2, L3) remains standard, modern CPUs (such as AMD's Ryzen chips with 3D V-Cache) have started incorporating additional stacked cache layers, which tend to boost performance in certain scenarios.

Branch Prediction and Speculative Execution

Besides caches, one of the other key building blocks of a modern processor is an accurate branch predictor. Branch instructions are similar to "if" statements for a processor. One set of instructions will execute if the condition is true, and another will execute if the condition is false. For example, you may want to compare two numbers, and if they are equal, execute one function, and if they are different, execute another function. These branch instructions are extremely common and can make up roughly 20% of all instructions in a program.

On the surface, these branch instructions may not seem like an issue, but they can actually be very challenging for a processor to get right. Since at any one time the CPU may be in the process of executing ten or twenty instructions at once, it is very important to know which instructions to execute. It may take 5 cycles to determine if the current instruction is a branch and another 10 cycles to determine if the condition is true. In that time, the processor may have started executing dozens of additional instructions without even knowing if those were the correct instructions to execute.

To address this issue, all modern high-performance processors employ a technique called speculation. This means the processor keeps track of branch instructions and predicts whether a branch will be taken or not. If the prediction is correct, the processor has already started executing subsequent instructions, resulting in a performance gain.
If the prediction is incorrect, the processor halts execution, discards all incorrectly executed instructions, and restarts from the correct point.

These branch predictors are among the earliest forms of machine learning, as they adapt to branch behavior over time. If a predictor makes too many incorrect guesses, it adjusts to improve accuracy. Decades of research into branch prediction techniques have led to accuracies exceeding 90% in modern processors.

While speculation significantly improves performance by allowing the processor to execute ready instructions instead of waiting on stalled ones, it also introduces security vulnerabilities. The now-infamous Spectre attack exploits speculative execution bugs in branch prediction. Attackers can use specially crafted code to trick the processor into speculatively executing instructions that leak sensitive memory data. As a result, some aspects of speculation had to be redesigned to prevent data leaks, leading to a slight drop in performance.

The architecture of modern processors has advanced dramatically over the past few decades. Innovations and clever design have resulted in more performance and better utilization of the underlying hardware. However, CPU manufacturers are highly secretive about the specific technologies inside their processors, so it's impossible to know exactly what goes on inside. That being said, the fundamental principles of how processors work remain consistent across all designs.
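A classic building block for such adaptive predictors is the two-bit saturating counter: it takes two wrong guesses in a row to flip the prediction, so a single loop exit doesn't retrain it. Here is a sketch of that textbook scheme (real predictors are far more elaborate, combining many counters with branch history):

```python
class TwoBitPredictor:
    """Textbook 2-bit saturating counter: states 0-1 predict not-taken, 2-3 predict taken."""
    def __init__(self):
        self.state = 2  # start at "weakly taken"

    def predict(self) -> bool:
        return self.state >= 2

    def update(self, taken: bool) -> None:
        # Move one step toward the observed outcome, saturating at 0 and 3.
        self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)

# A loop branch: taken 8 times, not taken once at the loop exit, then taken 8 more.
history = [True] * 8 + [False] + [True] * 8
predictor = TwoBitPredictor()
correct = 0
for taken in history:
    correct += (predictor.predict() == taken)
    predictor.update(taken)
print(f"{correct}/{len(history)} predicted correctly")
```

On this pattern the counter mispredicts only the single loop exit; a one-bit scheme would mispredict twice per loop, once at the exit and again when the loop is re-entered.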
Intel may add their secret sauce to boost cache hit rates or AMD may add an advanced branch predictor, but they both accomplish the same task.

This overview and first part of the series covers most of the basics of how processors work. In the second part, we'll discuss how the components that go into a CPU are designed, covering logic gates, clocking, power management, circuit schematics, and more.
  • Diablo creator says current ARPGs focus too much on quick leveling, cheapening the experience
    www.techspot.com
Editor's take: Action RPGs are a staple in my game collection. Surprisingly, I didn't become a fan of the genre until I played Revenant, a 1999 ARPG released two years after Diablo. Since then, I've played every ARPG I could get my hands on, including the entire Diablo series. One of the best Diablo games, if not one of the greatest ARPGs ever, is Diablo 2. It nails many aspects, but progression stands out the most. The game walks a fine line between excessive grinding and leveling so fast that you max out a character in a day.

Diablo creator David Brevik shared a similar sentiment in a recent interview with Video Gamer. He noted that Diablo 2 remains a "great" looter nearly 25 years later, largely because of its pacing.

"The pacing on Diablo 2, I think, is great," Brevik said.

He believes many modern ARPGs prioritize rapid progression over natural pacing, a trend that has become common in the industry but ultimately devalues the experience.

"I think that RPGs, in general, have started to lean into this: kill swathes of enemies all over the place extremely quickly," said Brevik. "Your build is killing all sorts of stuff so you could get more drops, you can level up, and the screen is littered with stuff you don't care about."

The approach that Brevik describes is a major feature in Diablo 3, and Diablo 4 doubles down on the concept. Blizzard intentionally designed both games to rush players toward Paragon levels, allowing characters to reach the maximum level in about a day. However, reaching levels beyond that requires fighting larger mobs, as Paragon leveling becomes a slog. This design pushes players to purchase the Battle Pass. While not Diablo 4's only flaw, it ranks high among the complaints from the franchise's creator.

"I just don't find killing screen-fulls of things instantly and mowing stuff down and walking around the level and killing everything, very enticing.
When you're shortening that journey and making it kind of ridiculous, you've cheapened the entire experience, in my opinion," Brevik opined. "I just don't feel like that is a cool experience. I find it kind of silly."

He believes MMOs are just as guilty of this. There's heavy pressure to rush through the early levels, partly due to the rise of live service models. Games like Destiny 2 and Diablo 4 push players to blitz through the campaign to access seasonal content and the rewards it offers. There's no time to stop and enjoy the journey because the season "ends soon." This sense of urgency and rushed pace is exactly where publishers want players, as seasonal content fuels microtransactions.

Brevik, who now heads up indie publisher Skystone Games, despises this design philosophy and steers clear of it. He favors game designs like Diablo 2, Torchlight, and The Witcher 3, which slow the pacing and allow players to savor the adventure.

"[The fun] actually isn't getting to the end; it's the journey," he said. "When you're shortening that journey and making it kind of ridiculous, you've cheapened the entire experience, in my opinion."

I couldn't agree more. With the increasing emphasis on live service models and the flood of multiplayer games, finding an ARPG that nails the pacing is becoming more challenging. The single-player experience, however, remains the best for maintaining solid pacing.
  • Ryzen 9800X3D burns itself and motherboard while user was watching TV show
    www.techspot.com
WTF?! A second Ryzen 7 9800X3D failure has surfaced, with the motherboard sustaining severe thermal damage after about two weeks of use. Oddly, the user didn't overclock the chip or encounter any installation issues - just a sudden, random failure while watching videos. The victim, a Redditor named "t0pli," is a PC builder with two decades of experience. In his post, he explained that he built a brand-new system about 20 days ago using the 9800X3D CPU and an ASRock X870 motherboard. It ran smoothly without overclocking or high temperatures. Then, the system shut down out of nowhere while t0pli was watching shows. Upon inspection, the 9800X3D chip and ASRock motherboard showed severe thermal damage (masthead).

The crazy part is that t0pli says he didn't use any overclocking tricks or push the hardware excessively. Besides enabling AMD's EXPO memory profiles, the rig was idling with stock settings. HWMonitor confirmed that temperatures appeared normal before the failure, too.

This incident isn't the first time a user has reported a failed 9800X3D. In November, another user had their $479 chip unexpectedly burn up, taking the motherboard with it. That instance was attributed to user error, as the builder admitted to likely installing the CPU improperly, causing a short.

However, in t0pli's case, the cause is unclear. Before running the system, he updated it to the latest available BIOS, so it appears to be just rotten luck. Worse still, t0pli bought the motherboard and CPU from different retailers, which could complicate warranty coverage.

So far, AMD hasn't officially commented on these chip failures. However, while a spontaneous processor burn is concerning, the issues remain isolated, considering the thousands of chips AMD has sold. As highlighted in our recent review, the 9800X3D remains the most powerful gaming CPU, offering unmatched performance for high-end rigs.

Of course, AMD isn't the only chipmaker facing flagship processor issues.
Last year, Intel experienced similar problems with its Raptor Lake CPUs, which were susceptible to permanent damage. Numerous reports revealed that the processors were receiving excessive voltage.
  • Earth's inner core has shifted shape over two decades, scientists discover
    www.techspot.com
Planetary Potato: Earth has a complex internal structure consisting of a solid iron-nickel core at the center and a liquid outer core. The interaction between these two elements can significantly impact the planet's solid outer crust, and unusual events have been occurring within it for years. Earth's inner core has changed shape over the past 20 years. A team of scientists studied the core's behavior to understand why its rotation slowed down relative to Earth's in 2010, and discovered new evidence of the mysterious, shapeshifting nature of the planet's center.

Earth's inner core is thought to be a solid sphere spinning independently from the liquid outer core and the rest of the planet. This rotation generates the magnetic field that protects Earth from solar winds and harmful solar radiation, acting as a shield for life on the crust. Without this spinning core, we wouldn't be here today.

The researchers analyzed a series of seismic waves produced by 121 repeating earthquakes recorded in the same region of North America between 1991 and 2023. Earthquake shockwaves are essentially the only way we can study what's happening deep beneath the planet's surface, as no one has yet been able to dig 4,000 miles to directly probe the hellish conditions of the core.

By studying the repeated earthquakes, Professor John Vidale and his colleagues confirmed that the core actually slowed down around 2010. They also made a second, serendipitous discovery: Earth's core was changing shape in different locations. This phenomenon occurred at the boundary between the inner core and the liquid outer core, where the solid core approaches its melting point.

The material flowing between the inner and outer cores could disrupt Earth's gravity field, causing the apparent deformation of the solid core. Hrvoje Tkalcic, a professor at the Australian National University, noted that this is an intriguing concept deserving further research in the future.
Understanding what is happening in our planet's hidden, chaotic interior will undoubtedly affect the future of our world in unknown and unpredictable ways. If Earth's core eventually stopped, its magnetic field would cease to exist, and life would likely go extinct. While this apocalyptic event could happen billions of years from now, Earth will likely be consumed by the Sun as it expands into a red giant before that occurs.
  • Scientists sound alarm on rising odds of space junk striking airplanes
    www.techspot.com
Death from above: A new study warns that the risk of airplanes being struck by falling space debris is increasing. While the chances remain low, and no such incident has occurred yet, the potential consequences could be catastrophic. Researchers at the University of British Columbia analyzed global air traffic patterns against the projected re-entry paths of uncontrolled space debris.

Near major airport hubs, they estimate a 0.8 percent annual probability of a re-entry event posing a threat. While that may seem low, in heavily trafficked airspaces like the Northeastern US or Northern Europe, the risk jumps to over 26 percent per year.

This growing hazard stems from the increasing volume of objects launched into orbit, ranging from traditional satellites to massive constellations like Starlink and discarded rocket stages. As those numbers multiply, so do the chances of an aerial collision as the clutter eventually rains back down.

Scientists have long warned about the risks posed by satellite constellations. Beyond creating streaks that interfere with astronomical observations, these satellites can disrupt radio signals and may even contribute to ozone depletion when they burn up upon re-entry.

While we can sometimes predict re-entry events, the margin for error remains slim. Experts caution that even a 1-gram fragment striking a plane's windshield or engine could cause severe damage. Because these predictions are so imprecise, air traffic controllers often shut down large sections of airspace as a precaution, leading to widespread flight disruptions.

The researchers emphasize the need for stricter measures to ensure satellites and rockets undergo controlled re-entry, ideally disintegrating over remote ocean regions. Currently, more than 2,300 large rocket bodies remain in orbit, most destined for an uncontrolled and unpredictable descent in the coming decades.
Without improved deorbiting practices, airspace closures will likely become more frequent.The full study can be found in the journal Scientific Reports.
  • YouTube is the new television: More people now watch on TV than mobile
    www.techspot.com
The big picture: As YouTube celebrates its 20th anniversary, CEO Neal Mohan has made a bold declaration: YouTube is not just a video platform, it's the new television. In his annual letter to the YouTube community, Mohan emphasized the platform's growing dominance in the living room, with TV screens now surpassing mobile devices as the primary way US audiences consume YouTube content.

"TV screens have officially overtaken mobile as the 'primary device for YouTube viewing in the US,'" Mohan said, marking a major shift in viewing habits. This transition indicates that "YouTube is the new television," but one that looks drastically different from its predecessor. The "new" television is interactive, incorporating shorts, podcasts, and live streams alongside traditional content like sports, sitcoms, and talk shows.

YouTube's growing presence in the TV market is undeniable. The platform has consistently topped Nielsen's monthly Gauge report, outpacing streaming giant Netflix in total viewership. This success is driven by several factors, including YouTube's ongoing investment in its YouTube TV virtual multichannel video programming distributor (vMVPD), which has already surpassed eight million subscribers.

Furthermore, YouTube has been optimizing its TV app experience, introducing features designed to enhance big-screen viewing. "We're bringing the best of YouTube to TVs, including a second-screen experience that lets you use your phone to interact with the video you're watching on TV, for example, to leave a comment or make a purchase," Mohan noted.

YouTube is also experimenting with "Watch With," a feature that allows creators to provide live commentary and reactions to games and events.
Initial trials were conducted with the NFL.

The platform's growth on connected TVs has also attracted new advertisers, introducing ad formats optimized for the big screen, such as QR codes and pause ads.

Mohan also spoke about YouTube's role as a cultural epicenter, emphasizing the platform's significance during major events. He noted that, on Election Day last year, over 45 million Americans turned to YouTube for election-related content. Landmark videos, like Joe Rogan's interview with President Trump and political sketches from Saturday Night Live, further solidify YouTube's position as a source of information and entertainment.

"From elections to the Olympics to Coachella to the Super Bowl and the Cricket World Cup, the world's biggest moments play out on YouTube," Mohan said.

In addition to video content, YouTube has also emerged as a leading platform for podcasts. According to Edison Podcast Metrics, YouTube is now the most-used service for listening to podcasts in the US, surpassing Spotify and Apple Podcasts. "We've long invested in the podcast experience and creators have found that video makes this format even more compelling," Mohan explained, adding that YouTube plans to roll out more tools to support podcasters and improve monetization opportunities for creators.

Looking ahead, Mohan underscored three "big bets for 2025," one of which is to bring more AI tools to creators. While generative AI models have garnered significant attention, Mohan said creators are finding simpler, more practical AI tools to be more beneficial for their everyday workflows.

"As impressive as the generative models are, creators tell us they're most excited about the ways AI can help with their bread-and-butter production," Mohan said.

YouTube is investing in AI-powered tools to assist creators with tasks like generating video ideas, titles, and thumbnails.
The platform is also using AI to help creators reach new audiences through auto-dubbing, which translates videos into multiple languages. According to YouTube, more than 40 percent of the total watch time for videos with dubbed audio comes from viewers choosing to listen in a dubbed language.Mohan also pointed to the rise of creators as "the startups of Hollywood," citing examples of YouTubers who have built professional-grade media operations. For instance, Dude Perfect recently opened a $5 million headquarters in Texas, and Alan Chikin Chow, creator of "Alan's World," opened a 10,000-square-foot studio in Burbank, California.
  • www.techspot.com
Rumor mill: Nvidia hasn't been more specific about the RTX 5070 and RTX 5070 Ti launch dates beyond saying the cards will arrive sometime in February. While the Ti version is still on track to land in the next few weeks, the vanilla card has reportedly been delayed until early March, and for several possible reasons.

The claim comes from prolific leaker MEGAsizeGPU, who writes that the RTX 5070 "will be delayed. Instead of February, it will be on the shelf in early March."

Leakers' claims should always be taken with a grain of salt, but there are plenty of reasons to believe this one. Firstly, the RTX 5070 Ti review and launch embargoes are set for February 19 and 20, respectively, but Nvidia has released no embargo details for the RTX 5070.

The other issue is the supply nightmare that has plagued the RTX 5080 and RTX 5090 launch. The cards were sold out everywhere on launch day, with most retailers allocated units in low double-digit or even single-digit figures, leading many to label it a paper launch. If Nvidia does delay the RTX 5070 until March, one would hope the company will be able to build up more stock to supply to retailers.

There's also AMD to consider. Team Red has confirmed that the mainstream RX 9070 series will also launch in early March. While Nvidia may be hoping to steal headlines from AMD by releasing the RTX 5070 around the same time as the RDNA 4 cards, there's always the slight chance it could adjust the pricing of the Blackwell GPU based on what AMD does, thereby making it more competitive.

It was recently reported that AMD may price its RDNA 4 cards to undercut Nvidia's mid-range equivalents: a $599 MSRP for the RX 9070 XT, $150 less than the $749 RTX 5070 Ti, while the RX 9070 will likely be cheaper than the $549 RTX 5070. Nvidia has already announced the price of the RTX 5070, so a sudden change seems unlikely. However, the company did "unlaunch" the $899 RTX 4080 12GB in October 2022, later relaunching it as the RTX 4070 Ti for $100 less.

In other AMD news, the company is rumored to be developing a 32GB version of the upcoming RX 9070 XT, though it is said to be designed with AI applications in mind.
  • Big monitor brands are stockpiling displays as a buffer against Trump's tariffs
The big picture: The ongoing tit-for-tat tariff battle between the US and China could lead to a five percent increase in monitor prices for American buyers. That might not seem like much, but it marks a sharp reversal from the steady price declines of recent years, driven by intense competition in the display market.

According to Asian supply chain sources cited by DigiTimes Asia, rising tariffs on Chinese goods imported into the US are set to drive up monitor prices. In response, major brands are stockpiling inventory in an effort to keep price hikes as low as possible.

This situation stems from the Trump administration's aggressive trade policies, which triggered retaliatory tariffs from China. These tariffs have put additional pressure on monitor manufacturers, whose margins were already razor-thin. As a result, major brands like Dell, HP, and Samsung are being forced to adopt more conservative shipment targets as rising costs disrupt their pricing strategies.

However, the industry giants have a contingency plan: stockpiling. DigiTimes reports that these companies are rushing to import extra inventory, potentially around 2-3 million units, to buffer against the expected price increases.

Moreover, second-tier brands that were already struggling to compete with the pricing and logistical power of industry giants could be facing even tougher challenges ahead. Historically, these smaller players have had little negotiating leverage when securing production capacity and maintaining profit margins. Now, with supply chains disrupted by the trade war and first-tier brands aggressively stockpiling, they are likely to face even higher costs that will be much harder for them to absorb.

A five percent increase may be the best-case scenario at this point. When the tariffs were first announced, the Consumer Technology Association warned that a worst-case escalation could send prices skyrocketing by 60-100 percent for some product categories. Fortunately, we haven't reached that level yet.

However, experts caution that tariffs will lead to price hikes across the board. GPU costs have already been affected, and some manufacturers, such as ASRock, are even considering shifting production from China to Taiwan. Meanwhile, TSMC is reportedly planning to raise the prices of its most advanced semiconductor wafers by up to 15 percent this year.
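The percentage scenarios above are easy to put in concrete terms. A minimal sketch, assuming a hypothetical $300 monitor (the base price is illustrative and not taken from the report):

```python
def price_after_increase(base_price: float, pct: float) -> float:
    """Return a price after a percentage increase."""
    return base_price * (1 + pct / 100)

base = 300.00  # hypothetical pre-tariff monitor price (an assumption)

# DigiTimes' roughly five percent scenario
print(f"5% scenario: ${price_after_increase(base, 5):.2f}")

# The CTA's worst-case 60-100 percent escalation range
print(f"Worst case:  ${price_after_increase(base, 60):.2f}"
      f" - ${price_after_increase(base, 100):.2f}")
```

On that assumed price, the five percent scenario adds $15, while the CTA's worst case would push the same monitor to between $480 and $600.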
  • Adobe expands generative AI with Firefly video, launching today
The big picture: One of the most impressive applications of generative AI is its ability to create videos from nothing more than a simple text description. Type a few words into one of the many video generation tools now available, including Adobe's Firefly model, and out pops what can be an amazingly lifelike clip. It's a great example of how powerful this technology has become, as well as how quickly it's advancing.

At the same time, many generative video tools also highlight questions and challenges around content ownership and copyright. Some of these have become a key part of larger discussions on the development and evolution of AI-powered tools. We've already started to see this play out with text generation tools that were trained on material scraped from across the internet, much of it original commercial content. There are serious questions about whether and how content creators should be compensated for their work when it's incorporated into a large language model. Given the enormous amount of effort (and cost) that goes into creating videos, the voices of concern are bound to grow even louder as generative video usage becomes more widespread.

In the graphics world, Adobe recognized these issues early on and made copyright protection a core part of its initial Firefly image generation tools. The company chose to use only content it had licensed and offered compensation to creators when integrating it into the model training process. Of course, it helped that Adobe had an enormous trove of content and direct connections to creators via its long-running Adobe Stock business, which offers millions of still images and videos for sale.

Nevertheless, Adobe chose to follow these principles in leveraging that data and built a set of generative content tools that not only met general rules of fairness, but also provided a guarantee of commercial safety. In other words, anyone who used the Firefly tools was assured they would not face legal or financial challenges for using copyrighted content. Given that an important percentage of Adobe's customers are involved in creating commercial content, that has proven to be a significant advantage.

Not surprisingly, as the company makes its latest Firefly video model publicly available today (in beta), it is following the same commercially safe principles and offering the same guarantees. In addition, Adobe is integrating support for Content Credentials with its AI-generated video, allowing people to reliably verify that it was created with AI, an increasingly critical capability in a world seemingly overrun with deepfakes.

Adobe is launching access to the new model today (it was first unveiled last fall) via both a new web application and through a "Generative Extend" feature in Adobe Premiere Pro. The company is also debuting two new Firefly plans and previewing one more. Firefly Standard is priced at $9.99/month, offering 2,000 audio/video credits per month, which allows users to create up to 20 five-second 1080p videos. Firefly Pro increases the limit to 7,000 credits and up to 70 five-second videos for $29.99/month. Firefly Premium, arriving later this year for $199.99/month, is designed for creative professionals who, according to Adobe, "expect to generate new video content on a daily basis."

Like other offerings, the Firefly video model supports both text-to-video and image-to-video generation, keyframes at the beginning and end of a clip, and the ability to translate and accurately lip-sync audio across 20 different languages. A key differentiator for Adobe users will be the seamless integration with other apps across the Adobe suite. For example, users can easily create workflows that move from a still image in Photoshop or a vector illustration in Illustrator into the Firefly video model and integrate the output straight into Premiere. Adobe has also added a new Scene-to-Image tool, which can be used to create 3D elements for video, whether in an animated or photorealistic style.

It's clear that Adobe is focusing on the kinds of tools and capabilities that regular users of its products will appreciate. While many people have been experimenting with other generative video tools for fun, Adobe appears to be focused on delivering practical capabilities that make video creation and editing easier. The new Generative Extend feature in Premiere Pro is a great example of this. While it might only be needed to extend an existing scene by half a second or so, that can make a huge difference for professional editors trying to match existing music, audio, and video elements. Similarly, an early preview of the new Firefly web app user interface highlights key creative choices for aspects like camera angles and movement, helping the model generate more engaging and cinematic outputs.

While it's fair to say that Adobe is playing a bit of catch-up in the rapidly evolving field of generative AI video (4K support, for instance, won't arrive until later this year, according to Adobe), it's also clear that the company is applying its own unique approach to the challenge. For the creative professionals who rely on Adobe for their work, that's an important step.

Bob O'Donnell is the founder and chief analyst of TECHnalysis Research, LLC, a technology consulting firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter @bobodtech
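The two launch plans imply a consistent credit cost per clip. A quick back-of-the-envelope sketch using only the figures Adobe has stated (the per-clip rate is inferred from those numbers, not an official Adobe rate):

```python
# Plan figures as stated at launch: monthly price in dollars, monthly
# audio/video credits, and the cap on five-second 1080p clips.
plans = {
    "Firefly Standard": (9.99, 2000, 20),
    "Firefly Pro": (29.99, 7000, 70),
}

for name, (price, credits, clips) in plans.items():
    # Both plans work out to 100 credits per clip, i.e. roughly
    # $0.43-$0.50 of subscription cost per generated clip.
    print(f"{name}: {credits / clips:.0f} credits/clip, "
          f"${price / clips:.2f}/clip")
```

In other words, the Pro tier buys 3.5x the output for 3x the price, a modest volume discount rather than a different cost model.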
  • www.techspot.com
What just happened? Rarlab, the company behind the famed WinRAR file compression software, has teamed up with French clothing brand Tern to release a WinRAR-themed bag designed to resemble the app's iconic logo, which depicts three stacked books in magenta, green, and blue.

The WinRAR X account announced the new merch with a hilarious post acknowledging that most people never pay to use the software, despite it costing $29 per license. "What better way to support the software you've NEVER paid for than by buying a WinRAR bag?" the post asks.

WinRAR offers a 40-day free trial before becoming paid software, but the developers do not strictly enforce the limited trial period, allowing users to access it indefinitely without paying. The unpaid version of WinRAR offers full functionality and unrestricted access even after the trial ends, so most users simply close the pop-up nag at every startup and take advantage of what is effectively an unlimited trial period.

The WinRAR-themed bag is made of vegan leather and has an adjustable clip-on strap that allows it to be slung across the shoulder. It has a single, rigid compartment, like a loot box, and uses a magnetic closure system. It measures 21.4cm x 14cm x 7cm and is priced at €115 (around $119). The company plans to start shipping in April.

To showcase the spaciousness of the new bag, the team at Tern shoved as many as 805 single-sleeved Yu-Gi-Oh cards inside, and Rarlab has since confirmed that it can accommodate three cans of Diet Coke. It doesn't offer much space if storage capacity is your primary concern, but it's certainly good enough to hold the essentials, like your phone, keys, and maybe even a purse or wallet.

The quirky new bag is not the first piece of licensed WinRAR merch to be sold on the official Tern website. The two companies have an existing partnership to market a series of WinRAR-themed clothing and accessories, including jackets, hoodies, pants, t-shirts, and caps. Prices start at €48 (around $50) for the graphic tees and go up to a cool €225 (around $233) for the varsity jacket.
  • www.techspot.com
What just happened? A divide emerged between the United States and Europe regarding the regulation of AI at the AI Action Summit held in Paris this week. While approximately 60 countries, including China, India, and Germany, signed a declaration to ensure AI is "safe, secure, and trustworthy," the US and the UK notably withheld their support.

Vice President JD Vance cautioned against "overly precautionary" regulations on AI, emphasizing the US commitment to maintaining its dominance in the technology. "The Trump administration will ensure that the most powerful AI systems are built in the US, with American-designed and manufactured chips," Vance said before the assembled crowd of world leaders and tech executives. "America wants to partner with all of you...but to create that kind of trust, we need international regulatory regimes that foster the creation of AI technology rather than strangle it," he added.

The summit declaration calls for "ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all." Although the commitments are non-binding, the US and UK had previously signed similar declarations at earlier AI summits. This shift signals a potentially more competitive approach to AI development under the new US administration. Vance's speech was "a 180-degree turnaround from what we saw with the Biden administration," Keegan McBride, a lecturer at the Oxford Internet Institute, told the Financial Times.

The UK government released a brief statement saying it had not been able to sign the agreement due to concerns about national security and global governance. "We felt the declaration didn't provide enough practical clarity on global governance, nor sufficiently address harder questions around national security and the challenge AI poses to it," a government spokesperson told the BBC. Meanwhile, Downing Street insists its decision was not based on the US shift. "This is about our own national interest, ensuring the balance between opportunity and security," the spokesperson said.

The US stance comes amid increasing competition with China in AI development, including chip manufacturing, foundational models, AI chatbots, and the energy required for supercomputers. The recent emergence of the cut-price AI model from the Chinese research lab DeepSeek, for example, caught Silicon Valley groups off guard.

As for Europe, it is actively seeking to establish a stronger foothold in the AI industry, aiming to reduce reliance on the US and China. French President Emmanuel Macron hosted the two-day summit, where European leaders and companies unveiled approximately 200 billion euros in planned investments in data centers and computing clusters to support the region's AI endeavors. "We need these rules for AI to move forward," he said.

Vance also cautioned countries against entering AI deals with "authoritarian regimes," a veiled reference to China. He warned that "partnering with them means chaining your nation to an authoritarian master that seeks to infiltrate, dig in and seize your information infrastructure," citing CCTV and 5G as examples of cheap tech that was "heavily subsidized and exported by authoritarian regimes." Concerns were also raised by the US that Current AI, a foundation launched by France during the summit, could be used to funnel money to French-speaking countries.

Frederike Kaltheuner, senior EU and global governance lead at the AI Now Institute, noted that following the launch of the powerful open models from DeepSeek, Europeans felt they had a chance to compete in AI. McBride said of Vance's speech: "[It] was like, 'Yeah, that's cute. But guess what? You know you're actually not the ones who are making the calls here. It's us.'"
  • Ukrainian drone unit wants to recruit gamers but warns it's "not like Call of Duty"
The big picture: The conflict between Russia and Ukraine has escalated into a full-blown technological arms race. Both sides are pouring resources into developing cutting-edge military drones and counter-drone systems. But in this high-stakes battle, Ukraine has a new weapon: gamers.

Rally drivers are often naturally great at racing simulation games like Dirt, and vice versa, so you'd think the same would apply to warfare. Indeed, members of Ukraine's elite Typhoon drone unit told Business Insider that they see gamers as potential recruits. However, they also acknowledged the challenges.

Piloting a first-person-view (FPV) drone might seem straightforward on the surface. The headsets are like VR goggles, and the controllers are similar to those of a gaming console. There's even a video game called "Death From Above" that simulates the experience of a Ukrainian drone operator raining hellfire down on Russian forces. However, in the virtual world, you can just hit reset when things go sideways. In real drone warfare, the consequences are deadly. "People think flying a military drone is like playing 'Call of Duty,' until they realize there's no restart option," one Typhoon operator told the publication.

The Typhoon unit explains that preparing for a real drone mission is an intricate process of analyzing equipment, anticipated obstacles like jamming, and real-time intelligence, and of coordinating with commanders. Every flight involves multiple evasive maneuvers and constant adjustments for enemy countermeasures and threats.

As for the Typhoon unit itself, it was officially formed just last year and plays a vital role in Ukraine's National Guard. One of its responsibilities is developing and applying specialized UAV expertise on the battlefield. It does this by combining engineers who can rapidly configure drones with pilots capable of executing complex missions in the chaos of combat.

Despite the challenges mentioned above, the unit sees gamers as invaluable recruits for their lightning-fast reflexes and comfort with virtual environments. "Gamers make great drone pilots because they are used to fast-moving situations on the screen, just like in real drone operations," Michael, the commander of the unit, stated. "They already have experience making quick decisions, reacting fast, and controlling complex systems, which are all important skills in combat."

Ukraine's president, Volodymyr Zelenskyy, has been pushing for increased domestic drone production and deployment. Companies and volunteer groups have responded by kicking into high gear, mass-producing relatively cheap drones to fill the gap when Western-supplied artillery runs low. With more drones comes the need for more pilots, and that gap could potentially be filled by recruiting skilled gamers with their finely tuned reflexes.

Image credit: Typhoon
  • AMD could be developing a 32GB RDNA 4 GPU - enterprise-only or RTX 5090 rival?
Rumor mill: Rumors have surfaced claiming that AMD is developing an RDNA 4 GPU with up to 32GB of VRAM. It sounds like an exciting prospect, potentially leading to a card that could compete with the monstrous RTX 5090. Unfortunately for gamers, it would likely be an enterprise product designed for data centers and professional applications.

This rumor, like many others, comes from Chiphell, the Chinese tech-focused forum, so take it with a grain of salt. It claims that AMD will launch a high-end RDNA 4 GPU sometime in the first half of 2025. The amount of VRAM it will feature has yet to be decided, but it could be as high as 32GB, the same amount as Nvidia's RTX 5090.

While a rival to the RTX 5090 from Team Red sounds like something the industry and gamers would welcome, it seems almost certain that the rumored GPU would be an enterprise product, a category that typically requires extra memory. The Radeon Pro W7900, for example, comes with 48GB of GDDR6, while the Instinct MI300X has 192GB of HBM3.

The other thing to remember is that AMD said last year that it would not be competing with Nvidia's current generation of flagship gaming cards. It confirmed that releasing Radeon RX 9000 series cards to match Nvidia's top RTX 5000 GPUs won't be a priority. Instead, Team Red will focus on its mid-range and lower-end products to increase the company's overall market share.

AMD unveiled the Radeon RX 9070 XT and Radeon RX 9070 GPUs at CES last month. The company did not reveal many details beyond stating that the RDNA 4 architecture is built on TSMC's 4nm node and features optimized compute units, improved ray tracing per CU, "supercharged" AI compute, and better media encoding quality.

There were reports only a few hours ago claiming that AMD is set to seriously undercut Nvidia's mid-range cards with its Radeon RX 9000 series. The RX 9070 XT could arrive with a $599 MSRP, $150 less than the $749 RTX 5070 Ti, while the RX 9070 is expected to be priced below the $549 RTX 5070.
  • AMD may price the Radeon RX 9070 series to undercut Nvidia's mid-range GPUs
Something to look forward to: AMD's naming scheme for the upcoming Radeon RX 9070 series, along with the company's various comments and statements, would appear to confirm that these graphics cards will be squarely aimed at the mid-range market. However, as we await final specifications and pricing, new rumors suggest that Team Red may be ready to aggressively undercut Nvidia's competing GeForce RTX 50 offerings.

According to IT Home, AMD aims to set highly competitive prices for the upcoming Radeon RX 9070 and 9070 XT, potentially accelerating the retirement of the RX 7800 XT. Depending on performance benchmarks, a sub-$600 price tag could signal serious competition for Nvidia's upcoming RTX 5070 and 5070 Ti.

Currently, AMD's best GPU in this price range is the Radeon RX 7800 XT, which launched in 2023 at $499. Reports indicate that the company initially planned to cease production in the third quarter of this year but may have moved that timeline up to January in order to shift focus to the RX 9000 series more quickly.

The Radeon RX 9070 XT is expected to be based on the Navi 48 GPU, rumored to feature 4,096 cores, a 2.97GHz boost clock, 16GB of GDDR6 VRAM on a 256-bit bus, and 640GB/s of memory bandwidth. The standard RX 9070 is also anticipated to include 16GB of VRAM.

Reports suggest that the RX 9070 XT could debut at a $599 MSRP, undercutting Nvidia's $749 RTX 5070 Ti by $150. If these rumors hold, the RX 9070 will likely be slightly cheaper than the $549 RTX 5070.

However, AMD's competitiveness will entirely depend on real-world performance results, and those are currently only available in vague snapshots. IGN reported impressive rasterization performance from the RX 9070 in Call of Duty: Black Ops 6, while Hardware Unboxed highlighted substantial improvements in upscaling image quality. Comprehensive testing is still needed to determine whether the RTX 50 series' transition to GDDR7 VRAM, which offers increased memory speed and bandwidth, will provide a meaningful advantage over the Radeon cards' GDDR6. AMD's claims of dramatic ray tracing performance improvements also remain unverified.

Historically, AMD's GPUs have lagged behind Nvidia's in hardware-accelerated ray tracing. However, numerous recent titles, such as God of War Ragnarök, Stalker 2, and Kingdom Come: Deliverance II, run well without it. That said, ray tracing-heavy games are becoming more prevalent, including Indiana Jones and the Great Circle, Star Wars Outlaws, Assassin's Creed Shadows, and Doom: The Dark Ages.

Nvidia is expected to launch the RTX 5070 Ti on February 20, while AMD has confirmed plans to release the RX 9070 series in early March. Additionally, rumors suggest that the RTX 5070, 5060 Ti, and 5060 could follow soon after.
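The rumored memory figures are at least internally consistent: peak bandwidth follows directly from bus width and per-pin data rate. A quick sketch (the 20 Gbps GDDR6 speed is inferred from the rumored 256-bit/640GB/s combination, and the 28 Gbps GDDR7 comparison is an assumption for illustration, not a confirmed RTX 50 spec):

```python
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes times per-pin rate."""
    return bus_width_bits / 8 * data_rate_gbps

# Rumored RX 9070 XT: a 256-bit bus at 640 GB/s implies 20 Gbps GDDR6
print(peak_bandwidth_gb_s(256, 20))  # 640.0

# For comparison, assumed 28 Gbps GDDR7 on the same bus width
print(peak_bandwidth_gb_s(256, 28))  # 896.0
```

On the same bus width, that assumed GDDR7 speed would be a 40 percent bandwidth advantage, which is why real-world testing matters more than the raw VRAM capacity both cards share.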
  • OpenAI custom chip project aims to challenge Nvidia's dominance
In context: Big tech companies and AI startups still largely rely on Nvidia's chips to train and operate the most advanced AI models. However, that could change fast. OpenAI is spearheading a massive industry-wide effort to bring cheaper custom AI accelerators to market. If successful, this push could weaken Nvidia's dominance in the AI hardware space, pushing the company into a tougher market.

OpenAI is nearing the launch of its first custom-designed AI chip. Reuters expects the company to send the chip design to TSMC in the coming months for validation before mass production begins in 2026. The chip has reached the tape-out stage, but OpenAI will likely need a significantly larger workforce to achieve full self-reliance in the AI accelerator market.

The custom chip was designed by a "small" in-house team led by Richard Ho, who left Google to join OpenAI over a year ago. The 40-person team collaborated with Broadcom, a company with a well-known track record for creating custom ASIC solutions. The two companies began negotiating a chip-focused partnership in 2024, with the ultimate goal of building new AI chips.

Industry sources said OpenAI's design can both train and run AI models, but the company will initially use it in limited quantities, and for AI inferencing tasks only. TSMC will manufacture the final chip on its 3nm technology node, and OpenAI expects it to include a certain amount of high-bandwidth memory, like any other major AI (or GPU) silicon design.

Despite playing a minor role in the company's infrastructure for the next few months, OpenAI's chip could become a significant disruptive force in the near future. The new design will need to pass the tape-out stage with flying colors first, and Ho's team will need to fix any hardware bugs discovered during the initial manufacturing tests.

Many tech companies are actively working to replace Nvidia products with their own custom solutions for AI acceleration, but the GPU maker still holds around 80 percent of the market. Microsoft, Google, Meta, and other Big Tech giants are employing hundreds of engineers to solve the silicon problem, with OpenAI coming in last in both timing and workforce size.

Simply put, OpenAI will need much more than the small in-house team currently working on its AI chip prototype. Internally, the chip project is seen as a crucial tool for future strategic moves in the growing AI sector. While still waiting for design validation from TSMC, OpenAI engineers are already planning more advanced iterations for broader adoption.
  • Apple fixes another actively exploited zero-day vulnerability on iPhones and iPads
In a nutshell: Since last fall, Apple has released multiple critical security updates for its devices. The latest update addresses targeted attacks that can disable a security feature Apple first introduced for iPhones and iPads several years ago. The patch is also available for Mac, Apple Watch, and Apple Vision Pro. After updating, users should check whether Apple Intelligence is enabled.

Users who haven't updated their iPhone or iPad firmware since late January should do so now. The iOS and iPadOS 18.3.1 update fixes an actively exploited zero-day vulnerability. The security update is also available for iPadOS 17.7.5, watchOS, macOS, and visionOS. The patch supports devices going back as far as the iPhone XS, iPad Pro 12.9-inch (3rd generation), iPad Pro 11-inch (1st generation), iPad Air (3rd generation), iPad (7th generation), and iPad mini (5th generation).

According to Apple's security support page, the flaw (CVE-2025-24200) enabled a sophisticated physical attack targeting specific individuals that could disable USB Restricted Mode. The company credits Bill Marczak of the Citizen Lab at the University of Toronto's Munk School for the discovery.

Apple introduced USB Restricted Mode in 2018 to protect against device cracking and other malicious hardware. It disables USB data transfers to iPhones and iPads if the devices haven't been unlocked in a week, allowing connections only for charging. A similar function, called "inactivity reboot," debuted with iOS 18.1 late last year. It causes devices to reboot after three days of inactivity, preventing thieves and law enforcement from cracking them. Apple also recently removed dozens of iOS apps found to contain malware that could read screenshots to steal cryptocurrency wallet info.

There is one possible minor hitch with the update. Some users reported that macOS Sequoia version 15.3.1 re-enabled Apple Intelligence. Those affected saw the welcome screen after rebooting their devices. Users who disabled Apple Intelligence, Apple's built-in answer to ChatGPT, should check that the feature stayed disabled after installing the updates by navigating to Settings > Apple Intelligence & Siri.

Apple Intelligence became opt-out with the OS security updates released in late January, including iOS and iPadOS 18.3, drawing complaints from users wary of GenAI. Cupertino's take on the technology allows users to receive summarized notifications, automatically rewrite text, and generate images. However, Apple disabled news summaries after criticism from the BBC over hallucinations.
  • Microsoft proposes new Office and Teams pricing to avoid massive EU fine
In a nutshell: Recent antitrust laws approved in the EU give the European Commission significant firepower against monopoly-loving corporations. It now has its sights set on Microsoft, which is hoping to resolve the legal dispute with a new pricing policy.

Microsoft is looking into changing how it sells Office and Teams in a single package to appease European authorities and avoid a hefty antitrust fine. Three sources familiar with the matter confirmed Microsoft's diplomatic attempt to Reuters. The insiders said the company is trying to end a years-long investigation into its alleged anticompetitive practices with Office and Teams bundles.

The European Commission investigation started five years ago after Slack filed an antitrust complaint against Microsoft. The complaint claimed that Redmond was reverting to its past monopolistic behavior by boosting Teams adoption through Office integration. Teams and other collaboration and video conferencing services saw a significant surge in demand during the COVID-19 pandemic, and Microsoft was allegedly reaping all the benefits thanks to its popular productivity suite.

Brussels received an additional antitrust complaint in 2023, when German videoconferencing company Alfaview asked EU antitrust authorities to stop Microsoft from bundling Office and Teams. Europe imposed a €2.2 billion fine against Microsoft a couple of decades ago, and new penalties can now go up to 10 percent of a company's global yearly revenue.

Microsoft started to sell an "unbundled" version of Teams without Office in 2023. Sources say Redmond is willing to go even further, offering a wider price difference between an Office and Teams bundle and the two tools sold independently. Microsoft added Teams to Office 365 in 2017, and it eventually replaced Skype for Business for 365 users' videoconferencing and collaboration needs.

Reuters notes that the European Commission has asked some companies for feedback regarding Microsoft's offering. They have until this week to reply. After that, the EU could perform a "formal" market test using the new prices before evaluating Redmond's proposal. An EC insider stated that the Commission would like to move on and employ its staffing and resources against different "enemies." By accepting Microsoft's offer, the EU could focus on its latest antitrust investigations against Apple and Google.
    Reviewers Liked:
    - Seamless iOS integration
    - Adds ANC, transparency modes, spatial audio
    - Slimmer ear hook, smaller case
    - Accurate heart rate monitoring
    - Works well with iOS and Android OS
    - H2 chip adds smarts
    - Physical buttons
    - Excellent battery life
    - Comfortable, secure fit

    Reviewers Didn't Like:
    - Unimpressive audio performance
    - Case is still pretty massive
    - ANC is not quite as good as competitors'
    - Heart-rate support is limited at launch on iOS
    - Heart rate is not that useful if you're an Apple Watch user
    - High price

    Competitors and Related Products: Our editors hand-pick related products using a variety of criteria: direct competitors targeting the same market segment, or devices that are similar in size, performance, or feature sets.

    Expert reviews and ratings

    60 – The fitness-focused Beats Powerbeats Pro 2 can monitor your heart rate but otherwise don't live up to similarly priced wireless earbuds when it comes to noise cancellation and sound quality. By PCMag on February 11, 2025

    90 – The result of all this tech is a compelling and versatile package that should be a top contender for budding athletes and beyond. With tons of features, punchy sound, and serious comfort in a sport-friendly design, the Powerbeats Pro 2 are an impressive second coming and the best Beats buds you can buy right now. By Wired on February 11, 2025

    80 – The Powerbeats Pro 2 make a strong case against yearly upgrades. I'm not saying Beats should wait another six years before another refresh, but thanks to that wait, there's a clear difference from the original Powerbeats. The addition of ANC is a good enough reason to cop a pair. By The Verge on February 11, 2025

    80 – The Powerbeats Pro 2 are comfortable, secure, and full of features that make them perfect for working out, or even everyday use. By DigitalTrends on February 11, 2025

    80 – The Powerbeats Pro 2 put fitness first, with a secure, comfortable fit and the addition of a heart rate monitor. There are some small hiccups in the sound department, but very solid battery life and a unique look and style bring them firmly back on track to make some compelling fitness buds that can do almost everything. By Tom's Guide on February 11, 2025

    79 – Apple's first earbuds with heart-rate tracking aren't AirPods, but they offer a lot of the same smarts via a major design overhaul from Beats. By Engadget on February 11, 2025

    88 – While I personally had some issues with the included ear tips, I do think the Powerbeats Pro 2 are improved in every way from their predecessor, and people who liked the originals will be impressed with this next-gen version. By CNET on February 11, 2025

    90 – A much-improved pair of fitness earbuds perfectly tuned for the Apple ecosystem. The addition of ANC and a heart rate monitor makes the Powerbeats Pro 2 the smart choice for the athletic iPhone user. By Gizmodo on February 11, 2025

    80 – The Powerbeats Pro 2 offer a compelling mix of fitness tracking and premium audio. They excel in comfort, design, and battery life, making them one of the best sports headphones available. However, while heart rate tracking is undoubtedly accurate, we encountered consistent pairing issues, especially on iOS. If Beats can finesse the heart rate features, the Powerbeats Pro 2 could become a top alternative to smartwatches and chest straps. In their current state, though, they're only a must-consider for those craving top audio features in a secure fit. By Wareable on February 11, 2025

    90 – If you're an iPhone owner and in the market for a new pair of earbuds that are focused on fitness, get these. If you already have a pair you love and were considering switching to these for the heart rate monitors, you may want to wait until there's more support for Powerbeats Pro 2 heart rate monitoring, unless the apps I mentioned earlier are your go-to fitness apps. As it is, just one fix will open up the ecosystem: support from watchOS. By Pocket-Lint on February 11, 2025