• This AI Paper Introduces MathCoder-VL and FigCodifier: Advancing Multimodal Mathematical Reasoning with Vision-to-Code Alignment

    Multimodal mathematical reasoning enables machines to solve problems involving textual information and visual components like diagrams and figures. This requires combining language understanding and visual interpretation to make sense of complex mathematical contexts. Such capabilities are vital in education, automated tutoring, and document analysis, where problems are often presented with a blend of text and images.
    A major obstacle in this area is the lack of high-quality, precise alignment between math images and their textual or symbolic representations. Most datasets used to train large multimodal models are derived from image captions in natural settings, which often miss the detailed elements essential for mathematical accuracy. This creates problems for models that rely on these data sources, making them unreliable when dealing with geometry, figures, or technical diagrams. A model’s performance in mathematical reasoning depends heavily on its ability to correctly interpret and link these visual details with mathematical expressions or instructions.

    Earlier approaches tried to address this either by enhancing visual encoders or by manually crafting datasets. These methods, however, tend to produce low image diversity because they rely on hand-coded or template-based generation, which limits their applicability. Efforts such as Math-LLaVA and MAVIS built synthetic datasets from templates or predefined categories, but they could not dynamically generate a wide variety of math-related visuals. This shortfall restricts what models can learn and leaves them struggling with more complex or less structured mathematical problems.
    Researchers from the Multimedia Laboratory at The Chinese University of Hong Kong and CPII under InnoHK introduced a novel approach called MathCoder-VL, which combines a vision-to-code model named FigCodifier with a synthetic data engine. Using a model-in-the-loop strategy, they iteratively constructed ImgCode-8.6M, the largest image-code dataset to date. They further developed MM-MathInstruct-3M, a multimodal instruction dataset enriched with newly synthesized images. The MathCoder-VL model is trained in two stages: mid-training on ImgCode-8.6M to improve visual-text alignment, and fine-tuning on MM-MathInstruct-3M to strengthen reasoning abilities.
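    The model-in-the-loop strategy described above can be sketched as a simple loop: the current model converts raw figures to code, only pairs that validate are kept, and the grown dataset improves the model for the next round. The toy below is purely illustrative; `toy_translate`, the integer "images," and the skill counter all stand in for the real vision-to-code model, figure corpus, and retraining step.

```python
# Toy, self-contained sketch of a model-in-the-loop data-expansion loop.
# All names here are illustrative stand-ins, not the paper's actual components.

def toy_translate(image, skill):
    # Pretend vision-to-code step: succeeds only on images no "harder" than skill.
    return f"draw({image})" if image <= skill else None

def expand_dataset(seed_pairs, unpaired, skill=1, rounds=3):
    """Each round: convert raw figures to code, keep only valid pairs,
    then 'retrain' (here: raise skill as the dataset grows)."""
    dataset = list(seed_pairs)
    for _ in range(rounds):
        for image in list(unpaired):
            code = toy_translate(image, skill)
            if code is not None:          # keep only pairs that validate
                dataset.append((image, code))
                unpaired.remove(image)
        skill += 1                        # stand-in for retraining on the new data
    return dataset

pairs = expand_dataset(seed_pairs=[(0, "draw(0)")], unpaired=[1, 2, 3, 5])
print(len(pairs))  # prints 4: the seed pair plus one new pair per round
```

The point of the loop is that each round's model unlocks images the previous round could not convert, which is how an initial 119K-pair seed can grow by orders of magnitude.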

    The FigCodifier model works by translating mathematical figures into code that can recreate those figures exactly. This code-image pairing ensures strict alignment and accuracy, unlike caption-based datasets. The process begins with 119K image-code pairs from DaTikZ and expands through iterative training using images collected from textbooks, K12 datasets, and arXiv papers. The final dataset includes 8.6 million code-image pairs and covers various mathematical topics. FigCodifier also supports Python-based rendering, which adds variety to image generation. The system filters low-quality data by checking code validity and removing redundant or unhelpful visuals, resulting in 4.3M high-quality TikZ and 4.3M Python-based pairs.
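    The validity filtering described above can be illustrated with a minimal sketch. Here Python's built-in `compile()` and exact-string deduplication stand in for the paper's actual TikZ/Python rendering checks and redundancy removal; the function name and sample data are hypothetical.

```python
# Minimal sketch of code-validity filtering for image-code pairs.
# A parse check approximates "does this code render?", and exact-match
# dedup approximates removing redundant visuals.

def filter_pairs(pairs):
    """Keep only pairs whose code actually parses, dropping exact duplicates."""
    seen = set()
    kept = []
    for image_id, code in pairs:
        try:
            compile(code, "<figure>", "exec")   # does the code even parse?
        except SyntaxError:
            continue                            # invalid code: discard the pair
        if code in seen:                        # redundant visual: discard
            continue
        seen.add(code)
        kept.append((image_id, code))
    return kept

raw = [
    ("img1", "import matplotlib.pyplot as plt\nplt.plot([0, 1])"),
    ("img2", "plt.plot([0, 1]"),                                    # unbalanced paren
    ("img3", "import matplotlib.pyplot as plt\nplt.plot([0, 1])"),  # duplicate of img1
]
print(len(filter_pairs(raw)))  # prints 1
```

A production pipeline would go further (actually rendering the code and comparing the output image), but the keep/discard structure is the same.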
    Performance evaluations show that MathCoder-VL outperforms multiple open-source models. The 8B version achieved 73.6% accuracy on the MathVista Geometry Problem Solving subset, surpassing GPT-4o and Claude 3.5 Sonnet by 8.9 and 9.2 percentage points, respectively. It also scored 26.1% on MATH-Vision and 46.5% on MathVerse. On the Chinese-language GAOKAO-MM benchmark, it achieved 51.2%. On the We-Math benchmark, it solved two-step problems at 58.6%, edging out GPT-4o’s 58.1%, and reached 52.1% on three-step problems, well ahead of GPT-4o’s 43.6%. Compared to its base model, InternVL2-8B, it showed gains of 6.1 points on MATH-Vision and 11.6 points on MathVista.
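    As a quick sanity check, the We-Math margins quoted above can be recomputed directly from the reported scores:

```python
# Head-to-head We-Math margins, recomputed from the scores reported above
# (MathCoder-VL-8B vs. GPT-4o, in percentage points).
we_math = {
    "two_step": (58.6, 58.1),    # (MathCoder-VL, GPT-4o)
    "three_step": (52.1, 43.6),
}
margins = {task: round(ours - theirs, 1) for task, (ours, theirs) in we_math.items()}
print(margins)  # {'two_step': 0.5, 'three_step': 8.5}
```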

    This work clearly defines the problem of insufficient visual-textual alignment in multimodal math reasoning and provides a scalable and innovative solution. The introduction of FigCodifier and synthetic datasets allows models to learn from accurate, diverse visuals paired with exact code, significantly boosting their reasoning abilities. MathCoder-VL represents a practical advancement in this field, demonstrating how thoughtful model design and high-quality data can overcome longstanding limitations in mathematical AI.

    Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.
    Nikhil is an intern consultant at Marktechpost. He is pursuing an integrated dual degree in Materials at the Indian Institute of Technology, Kharagpur. Nikhil is an AI/ML enthusiast who is always researching applications in fields like biomaterials and biomedical science.
    WWW.MARKTECHPOST.COM
  • Razer Thins Out the Blade 14 and Fattens the Price Tag

    The newly announced Razer Blade 14 isn’t quite stiletto-thin, but it’s becoming far more knife-like over time. Compared to past iterations, the shell is now as tall as 10 pennies stacked on top of each other, which means the new blade might be as thin as your wallet after buying one. On top of being the thinnest Blade 14 Razer has ever made, it’s also the most expensive, starting at $2,300 for a version with Nvidia’s new GeForce RTX 5060 laptop GPU. Razer’s always tried to offer quality for its high price, but with tariffs in effect, the new Blade 14 is pushing what consumers can expect from gaming laptops. The Blade 14’s base price is $100 more than you would have paid for the 2024 Blade 14 with an RTX 4060. If you upgrade the new Blade to the version with the RTX 5070, Razer told us you could spend $2,700 for the sake of a laptop that’s “not a big brick,” as the company put it. Razer is always an enticing buy because of its generally strong build quality, but even a frisbee-light frame doesn’t take the sting out of today’s tariff-inflated product prices. Razer’s new ultra-thin design houses an AMD Ryzen AI 9 365 and up to 64GB of 8,000MHz LPDDR5X RAM. It’s the first time Razer is pairing a Copilot+ CPU made for lightweight laptops with an Nvidia GPU in a thoroughbred gaming machine, which means it’s compatible with Windows 11 AI features like Recall (which you should probably remember to turn off during setup). The laptop sports a bevy of ports, including HDMI and a microSD card slot. As per usual, Razer promotes its hardy aluminum with an anodized black finish that will manage to stave off bumps or blemishes.

    We don’t doubt that all that combined will offer enough juice to showcase the Blade 14’s 3K, 120Hz OLED display. We do wonder what kind of CPU performance it might provide compared to a similarly sized laptop like the Asus TUF Gaming A14 and its top-end AMD Ryzen AI 9 HX 370. Asus’ older model also costs hundreds of dollars less than Razer’s latest. Razer promised you’ll get 2 to 3 hours while gaming on the 72WHr battery. That doesn’t sound like much, but it’s technically better than what you already get on the Razer Blade 16 from this year. We’ve yet to find a gaming laptop with a battery life that will keep up for extended periods. The Blade 14 powers the RTX 5070 up to the max 115W TGP, which may give you enough juice for most modern games at the max 2,880 x 1,800 resolution. The RTX 5060 laptop GPU is still so new, we don’t yet know how it performs compared to Nvidia’s mid-range graphics options. No matter which GPU you choose, the machine will still support a six-speaker system through upward-firing speakers. That may offer better sound quality than you may be used to on such small systems, especially with support for THX Spatial Audio. There has been a rash of relatively light gaming laptops from 2024 stretching into this year. Razer seemingly knew it needed to step up its game with the 2025 edition of the Blade 14. At 0.62-inch thickness and weighing in at 3.59 pounds, it’s 11% thinner and lighter than the 2024 edition. It takes the same thermal hood design from this year’s rendition of the Blade 16. That laptop also went on a diet for the sake of customers who want a less hefty device to fit a little bit better in their backpacks. We found it also tended to get rather hot when playing intensive games, so we hope that’s less of a problem with a smaller battery and less demanding GPU.

    The one thing that hasn’t been brought over from the Blade 16 is the improved keyboard. It’s a Razer device, so of course it’s packed to the gills with gamer lights, including per-key RGB. Those keys still only have 1mm of key travel compared to the deeper, more impactful 1.5mm on the redesigned 16-incher. There are no color options save for black and white, as much as we might beg Razer to bring back the “coral” pink color from the 2019 Razer Blade Stealth. There’s nothing wrong with a thin system, but perhaps a pink blade would help take away the sting of price hikes.
    GIZMODO.COM