• TOWARDSDATASCIENCE.COM
    Reducing Time to Value for Data Science Projects: Part 1
Introduction

The experimentation and development phase of a data science project is where data scientists are meant to shine. Trying out different data treatments, feature combinations, model choices, and so on all factors into arriving at a final setup that will form the proposed solution to your business needs. The technical capability required to carry out these experiments and critically evaluate them is what data scientists were trained for. The business relies on data scientists to deliver solutions ready to be productionised as quickly as possible; the time taken for this is known as time to value. Despite all this, I have found from personal experience that the experimentation phase can become a large time sink and can threaten to completely derail a project before it has barely begun. Over-reliance on Jupyter Notebooks, experiment parallelization by manual effort, and poor implementation of software best practices: these are just a few reasons why experimentation and the iteration of ideas end up taking significantly longer than they should, delaying the point at which value starts being delivered to the business. This article begins a series where I want to introduce some principles that have helped me be more structured and focussed in my approach to running experiments. The result is that I have been able to streamline my ability to execute large-scale parallel experimentation, freeing up my time to focus on other areas such as liaising with stakeholders, working with data engineering to source new data feeds, or working on the next steps for productionisation. This has allowed me to reduce the time to value of my projects, ensuring I deliver to the business as quickly as possible.

We Need To Talk About Notebooks

Jupyter Notebooks, love them or hate them, are firmly entrenched in the mindset of every data scientist. Their ability to interactively run code, create visualisations, and intersperse code with Markdown makes them an invaluable resource. When moving onto a new project or faced with a new dataset, the first steps are almost always to spin up a notebook, load in the data, and start exploring.

Using a notebook in a clean and clear manner. Image created by author.

While they bring great value, I see notebooks misused and mistreated, forced to perform actions they are not suited to. Out-of-sync codeblock executions, functions defined within blocks, and credentials or API keys hardcoded as variables are just some of the bad behaviours that using a notebook can amplify.

Example of bad notebook habits. Image created by author.

In particular, leaving functions defined within notebooks comes with a host of problems. They cannot be tested easily to ensure correctness and that best practices have been applied. They can also only be used within the notebook itself, so there is a lack of cross-functionality. Breaking free of this coding silo is critical to running experiments efficiently at scale.

Local vs Global Functionality

Some data scientists are aware of these bad habits and instead employ better practices when developing code, namely:

- Develop within a notebook
- Extract the functionality out into a source directory
- Import the function for use within the notebook

(A minimal sketch of this pattern follows below.) This approach is a significant improvement over leaving functions defined within a notebook, but there is still something lacking. Throughout your career you will work across multiple projects and write lots of code.
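To make that three-step workflow concrete, here is a minimal sketch of the extract-and-import pattern; the module, function, and directory names are my own illustration rather than anything from the original article:

# src/preprocessing.py -- lives in the project's source directory, not in the notebook
import pandas as pd

def drop_sparse_columns(df: pd.DataFrame, threshold: float = 0.5) -> pd.DataFrame:
    """Remove columns whose fraction of missing values exceeds the threshold."""
    missing_fraction = df.isna().mean()
    return df.loc[:, missing_fraction <= threshold]

# notebook cell -- the notebook now only imports and calls the function
from src.preprocessing import drop_sparse_columns
clean_df = drop_sparse_columns(raw_df, threshold=0.3)  # raw_df loaded in an earlier cell

Because the function lives in the source directory rather than a notebook cell, it can be unit tested and imported by any other notebook or script in the project.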
You may want to re-use code you have written in a previous project; I find this is quite commonplace, as there tends to be a lot of overlap between pieces of work. The approach to sharing code that I most often see is copy+pasting it wholesale from one repository to another. This creates a headache from a maintainability perspective: if issues are found in one copy of these functions, then significant effort is required to find all the other existing copies and ensure the fixes are applied. There is a secondary problem when your function is too specific for the job at hand, so the copy+paste also requires small modifications to change its utility. This leads to multiple functions that share 90% identical code with only slight tweaks.

Similar functions bloat your script for little gain. Image created by author.

This philosophy of creating code in the moment of requirement and then abstracting it out into a local directory also creates a longevity problem. It becomes increasingly common for scripts to become bloated with functionality that has little to no cohesion or relation to the rest.

Storing all functionality in a single script is not sustainable. Image created by author.

Taking time to think about how and where code should be stored can lead to future success. Looking beyond your current project, start considering what can be done with your code now to make it future-proof. To this end, I suggest creating an external repository to host any code you develop, with the aim of having deployable building blocks that can be chained together to efficiently answer business needs.

Focus On Building Components, Not Just Functionality

What do I mean by building blocks? Consider, for example, the task of carrying out various data preparation techniques before feeding the data into a model. You need to consider aspects like dealing with missing data, numerical scaling, categorical encoding, class balancing (if looking at classification), and so on. If we focus in on dealing with missing data, we have multiple methods available:

- Remove records with missing data
- Remove features with missing data (possibly above a certain threshold)
- Simple imputation methods (e.g. zero, mean)
- Advanced imputation methods (e.g. MICE)

If you are running experiments and want to try out all these methods, how do you go about it? Manually editing codeblocks between experiments to switch out implementations is straightforward but becomes a management nightmare: how do you remember which code setup you had for each experiment if you are constantly overwriting it? A better approach is to write conditional statements to easily switch between them, but having these defined within the notebook still brings issues around re-usability. The implementation I recommend is to abstract all this functionality into a wrapper function with an argument that lets you choose which treatment you want to carry out (see the sketch below). In this scenario no code needs to be changed between experiments, and your function is general and can be applied elsewhere.

Three methods of switching between different data treatments. Image created by author.

This process of abstracting implementation details will help to streamline your data science workflow. Instead of rebuilding similar functionality or copy+pasting pre-existing code, having a code repository with generalised components allows it to be re-used trivially.
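As a minimal sketch of that wrapper-function idea, here is one way it might look; the function name, method labels, and defaults are illustrative assumptions, not taken from the article:

# src/missing_data.py
import pandas as pd

def handle_missing_data(df: pd.DataFrame, method: str = "drop_rows", **kwargs) -> pd.DataFrame:
    """Apply one of several missing-data treatments, selected by the `method` argument."""
    if method == "drop_rows":
        # Remove records with any missing values
        return df.dropna()
    if method == "drop_columns":
        # Remove features whose missing fraction exceeds a threshold
        threshold = kwargs.get("threshold", 0.5)
        return df.loc[:, df.isna().mean() <= threshold]
    if method == "impute_zero":
        # Simple imputation with a constant
        return df.fillna(0)
    if method == "impute_mean":
        # Simple imputation: fill numeric columns with their column mean
        return df.fillna(df.mean(numeric_only=True))
    raise ValueError(f"Unknown missing-data method: {method}")

Switching between experiments then becomes a one-argument change, for example handle_missing_data(df, method="impute_mean"), rather than an edit to the notebook itself, and an advanced option such as MICE could be added as another branch without touching any calling code.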
This can be done for many different steps in your data transformation process, which can then be chained together to form a single cohesive pipeline:

Different data transformations can be added to create a cohesive pipeline. Image created by author.

This can be extended to cover not just different data transformations, but each step in the model creation process. The change in mindset from building functions to accomplish the task at hand to designing a re-usable, multi-purpose code asset is not an easy one. It requires more initial planning about implementation details and expected user interaction, and it is not as immediately useful as having code accessible to you within your project. The benefit is that you only need to write up the functionality once, and then it is available across any project you may work on.

Design Considerations

When structuring this external code repository, there are many design decisions to think about. The final configuration will reflect your needs and requirements, but some considerations are:

- Where will different components be stored in your repository?
- How will functionality be stored within these components?
- How will functionality be executed?
- How will different functionality be configured when using the components?

This checklist is not meant to be exhaustive, but it serves as a starter for your journey in designing your repository. One setup that has worked for me is the following (a rough code sketch of this idea appears at the end of this section):

- Have a separate directory per component. Image created by author.
- Have a class that contains all the functionality a component needs. Image created by author.
- Have a single execution method that carries out the steps. Image created by author.

Note that choosing which functionality you want your class to carry out is controlled by a configuration file. This will be explored in a later article.

Accessing the methods from this repository is straightforward; you can:

- Clone the contents, either to a separate repository or as a sub-repository of your project
- Turn this centralised repository into an installable package

Easily import and call execution methods. Image created by author.

A Centralised, Neutral Repository Allows More Powerful Tools To Be Built Collaboratively

Having a toolbox of common data science steps sounds like a good idea, but why the need for a separate repository? This has been partially answered above: decoupling implementation details from business application encourages us to write more flexible code that can be redeployed in a variety of different scenarios. Where I see a real strength in this approach is when you consider not just yourself, but your teammates and colleagues within your organisation. Imagine the volume of code generated by all the data scientists at your company. How much of it do you think would be truly unique to their projects? Certainly some of it, but not all of it. The volume of re-implemented code may go unnoticed, but it quickly adds up and becomes a silent drain on resources. Now consider the alternative, where common data science tools live in a central location. Having functionality that covers steps like data quality, feature selection, hyperparameter tuning, etc. immediately available to be used off the shelf will greatly speed up the rate at which experimentation can begin. Using the same code also opens up the opportunity to create more reliable and general-purpose tools.
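As a rough sketch of how such a component might look, assuming the wrapper function from the earlier sketch exists and a plain dictionary stands in for the configuration file (all names here are illustrative):

# components/missing_data/component.py
import pandas as pd
from src.missing_data import handle_missing_data  # hypothetical helper from the earlier sketch

class MissingDataComponent:
    """One directory per component; the class holds everything the step needs."""

    def __init__(self, config: dict):
        # In practice these values would be read from a configuration file
        self.method = config.get("method", "drop_rows")
        self.params = config.get("params", {})

    def run(self, df: pd.DataFrame) -> pd.DataFrame:
        """Single execution method that carries out the step."""
        return handle_missing_data(df, method=self.method, **self.params)

def run_pipeline(df: pd.DataFrame, components: list) -> pd.DataFrame:
    """Chain components together into a single cohesive pipeline."""
    for component in components:
        df = component.run(df)
    return df

A project would then only need to instantiate the components it wants, for example run_pipeline(df, [MissingDataComponent({"method": "impute_mean"})]), and the same building blocks can be cloned or installed as a package into any other project.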
More users increase the probability that any issues or bugs will be detected, and deploying code across multiple projects forces it to become more generalised. A single repository only requires one suite of tests to be created, and care can be taken to ensure they are comprehensive, with sufficient coverage. As a user of such a tool, there may be cases where the functionality you require is not present in the codebase, or you have a particular technique you like to use that is not implemented. While you could choose not to use this centralised code repository, why not contribute to it? Working together as a team, or even as a whole company, to actively contribute to and build up a centralised repository opens up a whole host of possibilities. By leveraging the strengths of each data scientist as they contribute the techniques they routinely use, we get an internal open-source scenario that fosters collaboration among colleagues, with the end goal of speeding up the data science experimentation process.

Conclusion

This article has kicked off a series where I address common data science mistakes I have seen that greatly inhibit the project experimentation process. The consequence is that the time taken to deliver value is greatly increased or, in extreme cases, no value is delivered at all because the project fails. Here I focussed on ways of writing and storing code that is modular and decoupled from any particular project. These components can be re-used across multiple projects, allowing solutions to be developed faster and with greater confidence in the results. Such a code repository can be open-sourced to all members of an organisation, allowing powerful, flexible, and robust tools to be built. The post Reducing Time to Value for Data Science Projects: Part 1 appeared first on Towards Data Science.
  • MASHABLE.COM
    Google and Duolingo think AI can change the way we learn languages. Are they right?
This bird is now powered by AI. Credit: Cheng Xin/Getty Images AI continues to expand its reach into our lives, and language learning is next on the list. This week brought big developments from both Google and Duolingo on that front. On Google's end, the search giant launched new Gemini-powered AI tools for users to learn foreign languages. Dubbed Little Language Lessons, the experimental feature offers three interactive lessons that "personalize language learning." For instance, "Tiny Lesson" can help you learn phrases for specific situations (such as losing your passport), while "Slang Hang" helps users learn local slang for less-stuffy conversation. Finally, "Word Cam" lets Gemini look at objects in your photos and label them in the language you're learning. Duolingo, on the other hand, is going full speed ahead with generative AI. The company announced this week that it would stop relying on human contractors for "work that AI can handle," while also committing to using AI in hiring and performance reviews. On top of that, Duolingo announced on Wednesday that it used generative AI to come up with 148 new language learning courses, doubling its total course offerings. The large language models behind Google Gemini and other popular AI tools have proved particularly adept at translation, and Duolingo certainly believes the technology has big potential for language learning. Of course, you don't have to look far to find people on social media responding negatively to the news. On X, some users are asking their peers to delete the app for going all-in on AI language learning. Learning another language is an inherently social activity, something a person usually does if they want to interact with other humans on a deeper level. Practically speaking, language learning usually requires human-to-human engagement. For its part, Google says it's not trying to replace human instruction. A Google blog post reads, "These experiments aren't about replacing traditional study, but about complementing it: helping people build habits, stay engaged, and weave learning into their everyday lives."
  • ME.PCMAG.COM
    Asus Vivobook Go 15
Pros:
- Inexpensive for a full Windows 11 PC
- Decent chassis build quality for the price
- Long battery life

Cons:
- Dim, grim screen
- Outdated processor
- Keyboard is uncomfortable and not backlit
- Supports Wi-Fi 5, not Wi-Fi 6, 6E, or 7
- Poor speakers

Asus Vivobook Go 15 (E1504FA-AS54) Specs
- Boot Drive Capacity (as Tested): 512GB
- Boot Drive Type: SSD
- Class: Budget
- Dimensions (HWD): 0.70 by 14.19 by 9.15 inches
- Graphics Processor: AMD Radeon 610M Graphics
- Native Display Resolution: 1920 by 1080
- Operating System: Windows 11 Home
- Panel Technology: TN
- Processor: AMD Ryzen 5 7520U
- Processor Speed: 2.8GHz
- RAM (as Tested): 8GB
- Screen Refresh Rate: 60Hz
- Screen Size: 15.6 inches
- Tested Battery Life (Hours:Minutes): 13:27
- Variable Refresh Support: None
- Weight: 3.59 lbs
- Wireless Networking: 802.11ac; Bluetooth 5.1

The ideal budget laptop blends everyday usability, reliable performance, and a touch of creature comfort. Essentially, it should be functional without leading to buyer's remorse for not spending more. The Asus Vivobook Go 15 (in the model E1504FA-AS54 we tested) looks promising on paper, with a full-featured Windows OS—not the limited Windows in S Mode—and a super-low starting price of $299 for a base model (and $382 in our test configuration). Unfortunately, outdated hardware and significant usability issues make it difficult to recommend even for basic use. For better value, consider any pick in our budget laptop roundup, or explore Chromebook options if you can forgo Windows.

Design: Budget With a Touch of Class

The Vivobook Go 15's silvery exterior lends it a stylish appearance that belies its economical price. The sturdy plastic exhibits minimal flex, even when I unwisely lifted it by a corner. Although the lid displays a bit more flex, it remains within acceptable limits. The somewhat ostentatious Vivobook badge on the lid doesn't detract from its overall classy look.

(Credit: Joseph Maldonado)

Measuring 0.7 by 14.2 by 9.2 inches, the Vivobook Go 15 shares similar dimensions with the Lenovo IdeaPad Slim 3i 15 and is fairly light for a 15.6-inch laptop at 3.6 pounds. Although its screen bezels could be slimmer, the laptop is sleek enough for its price.

(Credit: Joseph Maldonado)

The port selection includes two USB Type-A ports, one USB Type-C port, an HDMI monitor output, and a 3.5mm audio jack. Notably, one of the Type-A ports is the antique USB 2.0 and the HDMI port is limited to version 1.4, resulting in a flickery 30 instead of 60 frames per second for 4K output. The power adapter relies on a traditional barrel-style connector instead of USB-C.

(Credit: Joseph Maldonado)

My $382 review model combines a six-core AMD Ryzen 5 7520U processor, AMD Radeon 610M integrated graphics, 8GB of RAM, and a 512GB solid-state drive. Despite its contemporary name, the processor is a relic, built on AMD's "Zen 2" architecture that debuted in 2019. This model's performance is also hindered by its 8GB of non-upgradable, soldered memory. The inclusion of Wi-Fi 5 and Bluetooth 4.2 further underscores the rather retro nature of this notebook.

(Credit: Joseph Maldonado)

Asus backs the Vivobook Go 15 with a one-year warranty. The laptop comes with minimal preinstalled software. The MyAsus app features system updates, diagnostics, and customer support access.
Device settings include a battery-care mode that limits the charge to 80% and a blue-light filter for the display. Additionally, there's a whisper mode that reduces performance to lower fan noise, although I didn't find the normal fan noise to be intrusive. The laptop rarely became more than lukewarm to the touch.Features: Affordability Comes With SacrificesIf you're comparing the Vivobook Go 15 to a more premium (or even slightly less thrifty) laptop, two things will be obvious. First, the keyboard experience is subpar—it doesn't have backlighting and offers insufficient cushioning, leading to a harsh and tiring typing experience. Despite this, I managed to reach near my top speed in the MonkeyType online typing test, achieving 117 words per minute with 99% accuracy. Additionally, the keyboard has layout inefficiencies, offering no dedicated Home, End, Page Up, and Page Down keys and using a nonstandard three-column number pad. At least it provides a Function Lock feature to make the F1 through F12 keys primary.(Credit: Joseph Maldonado)The other significant issue is the screen. On paper, it doesn't seem too bad, with 1,920-by-1,080-pixel resolution instead of chintzy 1,366-by-768 and an antiglare surface. However, one glance at the washed-out picture is enough to cause instant regret. Apparently, old TN-style panels haven't been completely phased out; you must view this display head-on, or the colors invert and the screen becomes unusable, which is a shame since the lid can be opened 180 degrees. Colors appear dull and unnatural—red, for instance, looks more like orange—while low peak brightness only compounds the poor quality.(Credit: Joseph Maldonado)One minor upside is the webcam above the display, which provides decent picture quality despite its 720p resolution and includes a sliding privacy shutter. It doesn't support Windows Hello face recognition, but biometric features aren't expected at this price. The touchpad is average-sized and unremarkable. The speakers are subpar; when I played Kelly Clarkson's "Catch My Breath," the bass notes were flat, and the vocals sounded tinny.(Credit: Joseph Maldonado)While the Vivobook felt responsive enough when typing this review, watching YouTube videos, and web surfing, the next section will reveal that its performance is only adequate for such undemanding activities. The 8GB of RAM, which as I noted isn't upgradable, limits its multitasking capabilities. It's fine for emailing and basic browsing, but don't leave dozens of tabs open.Performance Tests: This Might Take a WhileTo recap, I reviewed the $382 Asus Vivobook Go 15 model E1504FA-AS54, which features an AMD Ryzen 5 7520U processor (six cores, up to 4.3GHz boost), Radeon 610M integrated graphics, 8GB of RAM, and a 512GB SSD. Asus also offers an even more affordable model, the E1504FA-AS33, which comes with a Ryzen 3 7320U chip and 128GB of storage, priced at $299.Recommended by Our EditorsWe haven't tested many ultra-budget laptops like this one recently. However, several of our comparison systems are similarly priced in their baseline configurations. These include the Dell Inspiron 14 2-in-1 (model 7445; starts at $499 as of this writing), the Lenovo IdeaPad Slim 3i 15 (starts at $429) and IdeaPad Flex 5 14 Gen 7 (starts at $634). Note the models we tested are not the baseline versions. 
I included the slightly pricier Acer Aspire Vero to fill out the charts, which was $750 as tested.Productivity and Content Creation TestsOur primary overall benchmark, UL's PCMark 10, puts a system through its paces in productivity apps ranging from web browsing to word processing and spreadsheet work. Its Full System Drive subtest measures a PC's storage throughput. Two more tests are CPU-centric or processor-intensive: Primate Labs' Geekbench 6.3 Pro simulates popular apps ranging from PDF rendering and speech recognition to machine learning, and we see how long it takes the video transcoder HandBrake 1.8 to convert a 12-minute clip from 4K to 1080p resolution. We normally run a third processing test, Maxon's image-rendering Cinebench 2024, and an automated Adobe Photoshop workflow, PugetBench for Creators. But the Vivobook Go 15 couldn't complete those benchmarks due to its lack of RAM.The Asus and IdeaPad Flex 5 delivered similar performance in PCMark 10, barely reaching the 4,000 points that we consider decent for everyday use with apps like Microsoft Word and PowerPoint. This is partly because their 8GB of memory is today's barest minimum for multitasking; you want 16GB if possible for a Windows PC nowadays. Additionally, the weakness of the Asus' outdated Ryzen 5 CPU is highlighted in Geekbench, where it fell well behind even the Core i3-powered IdeaPad Flex 5.Gaming and Graphics TestsWe challenge each reviewed system’s graphics with a quartet of animations or gaming simulations from UL's 3DMark test suite. Wild Life (1440p) and Wild Life Extreme (4K) use the Vulkan graphics API to measure GPU speeds. Steel Nomad's regular (4K) and Light (1440p) subtests focus on APIs more commonly used for game development, like Metal and DirectX 12, to assess gaming geometry and particle effects. A fifth test, Solar Bay, emphasizes ray-tracing performance using Vulkan or Metal APIs at 1440p resolution.The Vivobook's aging CPU and integrated Radeon graphics didn't help it in our 3D tests, where it performed well behind the other units. Due to its minimal memory, it couldn't complete the 3DMark Steel Nomad test. As mentioned earlier, this laptop is strictly suited for basic tasks—nothing more.Battery Life and Display TestsWe test each laptop's battery life by playing a locally stored 720p video file (the open-source Blender movie Tears of Steel) with display brightness at 50% and audio volume at 100%. We ensure the battery is fully charged, with Wi-Fi and keyboard backlighting turned off, before the test.To gauge display performance, we also use a Datacolor SpyderX Elite monitor calibration sensor and its Windows software to measure a laptop screen's color saturation—what percentage of the sRGB, Adobe RGB, and DCI-P3 color gamuts or palettes the display can show—and its 50% and peak brightness in nits (candelas per square meter).The Asus performed quite well in battery life, having enough power to last a typical workday. However, its screen quality is down there with other barest-budget laptops, covering only 67% of the sRGB color gamut and maxing out at just 270 nits of brightness. The IdeaPad Slim 3i has similar color coverage but is much brighter. Additionally, the Asus suffers from poor viewing angles due to its TN screen, an issue not captured in our measurements.
  • WWW.MOENGAGE.COM
    Model Context Protocols (MCPs): The Next Big Shift for Lifecycle Marketers?
    Reading Time: 5 minutes AI assistants are getting better at helping marketers work smarter. But there’s still one big challenge: tools and data don’t always talk to each other.  Lifecycle marketers know this struggle all too well. They manage campaigns across multiple platforms and often depend on engineering or BI teams just to get basic tasks done. However, Model Context Protocols (MCPs) could be a major breakthrough. This new open standard makes it possible for AI tools to connect directly with systems like CRMs, ESPs, analytics tools, and more, without needing custom integrations every time. To better understand what MCPs are and how they could reshape lifecycle marketing, we spoke with Aboli Gangreddiwar, a marketing leader who has been exploring MCPs and recently published a deep dive on Lifecycle Luminaries.  In our conversation, we explored how MCPs could potentially streamline marketing workflows, unlock deeper personalization, and even reduce dependency on engineering support. Let’s jump in.   What Are Model Context Protocols and Why Should Marketers Care Think of MCPs like a universal translator for your marketing stack. Just like HTTP allows any browser to access any website, MCPs allow AI agents (like ChatGPT or Claude) to interact with your marketing tools using a shared standard. “I think of it kind of like a USB-C for your marketing stack,” Aboli explained. “If you’re using Databricks as your CDP and Hubspot as your ESP, MCP could potentially let them talk to each other in a more seamless way. You wouldn’t need a BI ticket every time you want a new data field.” You still need to connect each tool to a Model Context Protocol, but once that’s done, any AI agent that supports the protocol can securely access and act on the tool’s data or capabilities, without needing a custom integration for each new agent or use case. This concept is exciting for lifecycle marketing because this function of marketing is very complex. It uses personalized, multi-channel campaigns to engage customers throughout their entire journey by leveraging data and technology. With lifecycle marketing, businesses can deliver targeted messages at the right time and through the right channel, maximizing customer lifetime value and achieving sustainable competitive advantage.  But that’s easier said than done, especially when your systems don’t play nicely together. Some of the most common challenges marketers face today include: Disconnected data across CRMs, ESPs, CDPs, and analytics tools Slow campaign execution due to manual workflows and tool-hopping Dependence on engineering for things like data access, new integrations, or field availability Time-consuming analysis of A/B tests and campaign performance “These pain points are why the potential of MCPs feels so exciting,” Aboli said. “If we could use a single interface to access campaign data, build emails, QA them, and even send them, it would fundamentally change the way we work.”   What Model Context Protocols Could Unlock The real power of MCPs is in how they can bring together all the disconnected parts of a marketer’s workflow. With the right setup, AI agents could pull data from your CDP, generate content in your ESP, QA the emails in Litmus, and trigger the send, all from a single prompt. “We’ve always had these disconnected steps in lifecycle marketing,” Aboli shared. “Building emails, testing them, pulling performance data. 
MCPs give you a vision where those tools can operate together more fluidly through natural-language interfaces.” Here are some of the most promising use cases marketers are starting to explore: Smarter segmentation using real-time data from multiple sources More personalized content based on live behavior and approved messaging Automated test analysis, including stat sig calculations and performance summaries Cross-channel orchestration, where AI picks the right message, timing, and channel based on user behavior Aboli pointed out that reporting alone is a huge opportunity. “Imagine your AI agent connects directly to Tableau, pulls campaign performance, calculates statistical significance, and summarizes the results for you. That would save hours of work every week.”   How to Get Started with MCPs as a Marketer: 4 Steps Even though most teams aren’t using MCPs at scale just yet, there are plenty of ways to start preparing and experimenting. Aboli shared a few low-lift ideas for teams who want to get ahead of the curve. 1. Start with documentation “Whatever lives in a marketer’s head needs to be written down,” she said. That includes campaign rules, funnel stages, brand guidelines, and compliance requirements. AI needs this context to be useful. 2. Try custom GPTs for everyday tasks You don’t need deep technical skills to build a custom GPT. Aboli recommends starting with high-volume tasks like email copywriting or proofing. “You can upload your templates, brand voice, common legal disclaimers, even your QA checklist. It won’t get you to 100%, but it can get you to 70 or 80.” 3. Test Zapier’s MCP integration Zapier recently launched MCP support in beta, giving marketers access to thousands of tools through a single interface. “If you’re already using tools like Databricks or BigQuery, you can explore simple automations without involving too much engineering resources,” Aboli said. 4. Start mapping your first agentic use case Not sure where to begin? Look at what’s slowing you down. “One area I see over and over is data activation,” Aboli said. “Marketers want to personalize campaigns but often don’t have access to the data they need. If MCP can help solve that, it’s a great place to start.”   How MCPs Could Reshape the Future of Lifecycle Marketing Model Context Protocols are still in their early stages, but they signal a shift in how lifecycle marketers can approach automation, personalization, and execution. For years, marketers have been forced to work around the limitations of their tools: waiting on data access, dealing with disconnected systems, and relying on manual processes that drain time and energy. MCPs offer a different vision, one where AI agents are not just helpful assistants but true collaborators. By allowing tools to “speak the same language,” MCPs make it possible for marketers to connect insights to action faster than ever before. Whether it’s building a segment, generating personalized email content, or analyzing campaign results, the pieces start to click into place more naturally. Of course, there’s still a gap between where the technology is today and the fully automated, AI-assisted workflows marketers dream about. But the path forward is clear, and it’s more accessible than it might seem. As Aboli emphasized, starting small with documentation, simple GPT setups, or lightweight integrations can provide real value while preparing your team for what’s coming next. Marketers who embrace these tools early will have a distinct advantage. 
They’ll be the ones moving faster, experimenting more confidently, and unlocking personalization at scale without burning out their teams. And as more vendors build support for Model Context Protocols, the barrier to entry will only continue to drop. “You don’t have to do it all today,” Aboli reminded us. “But you can start now.” If you’re a lifecycle marketer feeling the pressure of disconnected tools and data silos, this may be the moment to lean in, explore the possibilities, and take the first step toward a smarter, more connected future.   To learn more about MoEngage’s AI capabilities and how our platform supports B2C lifecycle marketers across industries, request a demo today. The post Model Context Protocols (MCPs): The Next Big Shift for Lifecycle Marketers? appeared first on MoEngage.
  • WWW.BUILDER.IO
    How to Build Your Own MCP Server
    I don’t know about you, but I find myself switching AI models (surprise Gemini release anybody?) and clients (Cursor, Windsurf, Cursor again—no, wait!) pretty often.What frustrates me more than anything is loss of context. I’m constantly explaining to the AI what it needs to know about my problem and trying to get it to act in “my style” of doing things.But what if that context were portable? What if you could ask a question in Claude Desktop, get an answer, and then recall the conversation later in Cursor when coding?In this article, we’ll do just that, building out the tooling together in just a few quick steps. Here’s how the final product will look:Here’s the complete code for this example project so you can clone it. I recommend following along; the goal is that by the end of this tutorial, you’ll be able to create your own lil’ dream server.Why bother with MCP?What you’re seeing above is, as you may have guessed from the 48px title and borderline-absurd keyword optimization of this post, a Model Context Protocol (MCP) server.If you already know all about MCP and want to get to building, feel free to skip this section and head on down to the “Quick Start.” Otherwise, set your stopwatch—here’s the 3-minute primer.1. Why does MCP exist?If you want autonomous AI agents, you’re gonna need tools that enable them to see and interact with the world around them. Unfortunately, connecting AI assistants directly to tools makes for fragile integrations; update the AI model or the API on either side of the tool, and you get broken code.So, how can we build more robust, reusable AI capabilities?One route is through Anthropic’s Model Context Protocol (MCP). It’s a standardized communication layer (based on JSON-RPC) that allows AI clients (like Cursor) to discover and use external capabilities provided by MCP servers.These capabilities include accessing persistent data (Resources), performing various actions in the outside world (Tools), and receiving specific instructions on how to use those resources and tools (Prompts).For a full exploration of MCP's goals, architecture, and potential, you can read my deep dive.That’s great, but…Clients like Cursor and Windsurf already have great AI agents without MCP. So, why do we need more tools?Put simply, client developers can’t build everything. They don’t want to spend all their development hours tweaking web search for every new model, and they’re definitely not out here trying to roll their own Jira integration.MCP lets service providers like GitHub and Notion maintain their own AI integrations, which means higher-quality interactions and less duplicated effort.So, when you opt into using an MCP server, the main benefits you get are future-proofing and portability. You get an enormous ecosystem of plug-and-play tools that you can bring to any chat window that implements the standard.Okay, but…Even if you’re not a developer who needs to wire up their own service API to MCP, there are a lot of benefits to having the knowhow.For me, I’ve noticed that the more I spend time building servers, the less I feel like my entire job is just copy/pasting huge swaths of text between input boxes. I’m automating context, and it makes AI models more personally useful to me.Plus, it feels like a way to put a stake in the ground with the ever-shifting landscape of AI. Tools I build today should keep working even as new models, clients, and services come around.But enough waxing poetic. 
Time to roll up our sleeves.Quick start: Vibe code?I’m not gonna lie: If you want to just hand the AI agent the MCP docs and tell it what functionalities you want… well, it’s probably gonna work. This is the kind of code AI is especially good at—it’s boilerplatey.Use the MCP Inspector as you go, and keep feeding errors back to the AI. And check out our best Cursor tips to get the most out of the AI agent.Otherwise, here’s the breakdown for those who want to learn how the architecture works, in order to build scalable AI tools. Quick start, for real: Clone, install, buildLet's get the code base ready with these three steps. We won't worry about API keys or client setup yet.Clone the Repository: Get the example code onto your local machine. Install Dependencies: We need the MCP SDK and a few other libraries. Build the Code: Compile the TypeScript source code into runnable JavaScript.You now have the compiled code in the build/ directory.If you want to grab an OpenRouter API key and head on down to “Running the server with real clients,” you’re more than welcome to. The server will work as is.The core loop: Running your first serverBefore we dive into the specific features of this CSS Tutor example, let's nail down the fundamental structure of any MCP server built with the TypeScript SDK and get a minimal version running.Open the main server file: src/index.ts. You'll see these key parts:Imports: The file brings in McpServer (the core server class) and StdioServerTransport (for communication) from the @modelcontextprotocol/sdk. Registration imports: We import registerPrompts, registerResources, and registerTools from other files in the src/ directory. These functions (which we'll explore later) are responsible for telling the server about the specific capabilities we want to give it. Server instantiation: We create the server instance, setting the server's name and version, and initializing empty placeholders for its capabilities. Calling registrations: The imported register* functions are called: These calls populate the server instance with the actual tools, resources, and prompts defined elsewhere. The main function: This async function sets up the communication transport and connects the server: Execution: Finally, main() is called with basic error handling.This structure is the heart of the server. It initializes, registers capabilities, and connects for communication.To make sure this core loop works without needing any external APIs or complex logic yet, let's temporarily modify src/index.ts:Comment out the capability registration calls: Add a simple "hello" tool right before the main function definition: Re-build the code:With just these changes in src/index.ts, we now have a runnable MCP server that offers only one basic tool. It doesn't do much yet besides offer Empire Strikes Back spoilers, but it confirms the core structure and communication setup is working.Debug with the MCP InspectorNow that we have a minimal, runnable server, how do we check if it's actually speaking MCP correctly? We use Anthropic’s MCP Inspector.This command-line utility acts as a basic MCP client. It launches your server process, connects to it via standard I/O (just like Claude Desktop or Cursor would), and shows you the JSON-RPC messages being exchanged.From your project's root directory, run:npx @modelcontextprotocol/inspector node ./build/index.jsnpx ...inspector: Downloads and runs the inspector package. node: The command to execute your server. 
./build/index.js: The path to your compiled server entry point.

The inspector will start, connect to your server, and begin exchanging messages. If you go to the localhost URL, you can interact with it:

Connection: You'll see initialize messages confirming the connection.
List tools: Use the inspector's interface to ask the server what tools it offers. You should see only our hello_world tool listed.
List resources/prompts: If you try to go to the resources or prompts tabs, they should be unclickable, since we commented out their registrations.
Call the tool: Use the inspector to call the hello_world tool. You should see the server respond with our custom message.

The MCP Inspector is your best friend during development. After each step where you add or modify a capability (tool, resource, or prompt), verify that the server registers it correctly and responds as expected. The Inspector lets you test server functionality without involving a full AI client.

Use the buddy system: anywhere you go, the MCP Inspector goes. (^ Live footage of you and the MCP Inspector.)

Building out the real capabilities

Now that we have the basic server running and know how to debug it with the Inspector, let's 1) grab some snacks, and 2) incrementally add the actual CSS Tutor features. Feel free to tweak the capabilities as we go along—all coding skills are welcome!

First, let's activate and understand the tool that fetches external information. In src/index.ts, remove the dummy hello_world tool definition we added earlier, and uncomment the line registerTools();. This line calls the function in src/tools/index.ts that registers all our tools.

export const server = new McpServer({
  name: "css-tutor",
  version: "0.0.1",
  capabilities: { prompts: {}, resources: {}, tools: {} }
});

// registerPrompts();
// registerResources();
registerTools();

// delete dummy tool

async function main() {
  // rest of code
}

Now, open src/tools/index.ts and find the registerGetLatestUpdatesTool function. This is where the get_latest_updates tool is defined and registered with our server. Inside this file, you'll see a few key things happening:

Configuration & safety check: It uses dotenv to load environment variables, specifically looking for OPENROUTER_API_KEY. If the key is missing, it logs a warning and skips registration, preventing the server from offering a tool that can't function.
Tool registration: It uses server.tool() to register the get_latest_updates tool. This includes giving it a name, a description for the AI client, and defining its input schema (in this case, {} because it takes no arguments).
Core logic (Handler): The core logic is in the asynchronous handler function passed to server.tool(). This handler is responsible for:
Activation: Finally, the main registerTools function (at the bottom of the file) ensures that registerGetLatestUpdatesTool() gets called when the server starts up.

Compile the changes.

npm run build

To test this tool with the Inspector, the server process needs the API key. Prefix the inspector command:

# Example on Linux/macOS
OPENROUTER_API_KEY="sk-or-..." npx @modelcontextprotocol/inspector node ./build/index.js

(See the project's README.md for Windows examples.)

Run the MCP Inspector. Use tools/list. You should now see get_latest_updates registered. Try calling the tool via the Inspector—it should return recent CSS news! (As long as you have ~$0.04 in credits from OpenRouter available.)
Now, let's activate the components that allow our server to remember information across interactions: the css_knowledge_memory resource and the tools to interact with it.

Back in our main file (src/index.ts), uncomment the line registerResources();. Open up src/resources/index.ts and find the registerCssKnowledgeMemoryResource function.

Registration: It uses server.resource() to define the css_knowledge_memory resource. This gives it a name, a unique URI (memory://...), read/write permissions, and an asynchronous handler function.
Core logic (handler & helpers): The handler function is called when a client wants to read the resource's current state. It uses helper functions (readMemory and writeMemory, also defined in this file) which handle the actual file system operations: reading, parsing, validating (with Zod), stringifying, and writing data to the data/memory.json file. This file acts as our persistent memory store.
Activation: The main registerResources function (at the bottom of the file) ensures that registerCssKnowledgeMemoryResource() gets called when the server starts.

Next, head on over to src/tools/index.ts and look at the registerReadFromMemoryTool and registerWriteToMemoryTool functions. These provide the actions clients can take related to the memory resource.

Registration: Both tools are registered using server.tool(). read_from_memory has no specific input schema, while write_to_memory defines an input schema using Zod ({ concept: z.string(), known: z.boolean() }) to ensure clients send the correct data format for updates.
Core logic (handlers): The read_from_memory tool's handler simply calls the imported readMemory() helper from src/resources/index.ts and returns the current state. The write_to_memory tool's handler receives validated arguments ({ concept, known }), then uses both readMemory() and writeMemory() helpers to load the current state, update it based on the input, and save the modified state back to data/memory.json.
Activation: The main registerTools function ensures these tool registration functions are called.

Compile the changes.

npm run build

Run the MCP Inspector. In the Resources tab, you should now see css_knowledge_memory registered. In the Tools tab, you should see get_latest_updates (from Step 1) plus the new read_from_memory and write_to_memory tools. Verify the statefulness: use the Inspector to call read_from_memory, then write_to_memory with some test data (e.g., { "concept": "Grid", "known": true }), and finally call read_from_memory again. Confirm that the data returned by the second read reflects the change you wrote, and check the data/memory.json file directly to see the persisted update.

Last step! Time to tell the AI model how to use the tools and resource we've provided. In src/index.ts, uncomment the last commented-out line, registerPrompts();. Open src/prompts/index.ts.

Registration: The registerCssTutorPrompt function uses server.prompt() to define the css-tutor-guidance prompt, giving it a name and description for the client. It specifies no input schema ({}) because calling this prompt doesn't require any arguments from the client. (We could pass dynamic data here, which can get pretty spicy.)
Core Logic (Handler & Content): The handler for this prompt is very simple. It just returns the content of the cssTutorPromptText constant (defined in the same file), which contains the detailed instructions for the AI on how to behave like a CSS tutor using the available tools and memory.
Activation: The main registerPrompts function (at the bottom of the file) makes sure registerCssTutorPrompt() gets called when the server starts.Compile the changes.npm run buildRun the MCP Inspector.In the Prompts tab, you should now see css-tutor-guidance registered. Try calling the prompt from the Inspector. It should display the full guidance text defined in cssTutorPromptText.Pretty cool, right? Well, here’s the thing: Even though the server now offers the prompt via MCP, most clients can’t automatically use MCP prompts. Claude will need you to pick it manually, and Cursor can’t even access MCP prompts yet.So, for now, rely on features like Cursor rules to provide instructions on how to use certain MCP servers. Hopefully, we’ll see more MCP adoption soon.Running the server with real clientsWith our server fully built and debugged using the Inspector, it's time to connect it to actual AI clients.If you use the Claude desktop application:Go to Settings. Not the settings near your profile (for some reason?), but the actual app found in the top toolbar. Go to “Developer” → “Edit Config” Add an entry for the css-tutor server: Replace the absolute path (not relative!) and API key with your actual values. Restart Claude Desktop, and connect to the css-tutor server. (See video up top for where to press.)If you use the Cursor editor:Go to Cursor Settings > MCP > Add new global MCP server. Configure the server the exact same as in the Claude steps above. Create prompt rule: Cursor doesn't automatically use the server's MCP prompt, so to create a rule, go to Cursor Settings > Rules and add a new Project Rule with the pasted prompt from the server. Activate the rule: When chatting or generating code (e.g., Cmd+K) in Cursor within this project, you need to @mention the rule, and then Cursor’s agent can use the server as intended without further guidance.Et voilà!You should now be able to recreate the demo video scenario, chatting with one client and then moving to the other whenever you want.Next stepsFirst, pat yourself on the back. New skills are awesome.Second, think about the implications.This CSS Tutor example is simple by design, to help you learn the power of MCP as quickly as possible—but imagine what you could do with some real tools.Maybe you want:More sophisticated state: Replace the JSON file with a proper database (like SQLite or PostgreSQL) for multi-user support or larger datasets. Additional tools: Add tools to search specific documentation sites (like MDN), fetch CSS examples from Codepen, or even analyze a user's local CSS file. Dynamic prompts: Instead of a static prompt, make the guidance adapt based on the user's known concepts stored in the resource. Error handling and rerouting: Add more granular error handling, especially for external API calls, and reroute logic when one service is down. Different Transports: Explore other transport options besides StdioServerTransport if you need network-based communication—e.g., Server-Sent Events (SSE) for streaming.MCP provides a pretty powerful framework to make whatever you want. By building MCP servers, you can make tailored, stateful, and future-proof integrations that connect to any new assistant that speaks the protocol.Happy building! Introducing Visual Copilot: convert Figma designs to high quality code in a single click.Try Visual CopilotGet a demo
  • X.COM
    AMA at 9:30am PT with Head of Model Behavior @joannejang to talk all things ChatGPT's personality. https://www.reddit.com/r/ChatGPT/comments/1kbjowz/a...
AMA at 9:30am PT with Head of Model Behavior @joannejang to talk all things ChatGPT's personality.
https://www.reddit.com/r/ChatGPT/comments/1kbjowz/ama_with_openais_joanne_jang_head_of_model/

OpenAI: We've rolled back last week's GPT-4o update in ChatGPT because it was overly flattering and agreeable. You now have access to an earlier version with more balanced behavior. More on what happened, why it matters, and how we're addressing sycophancy: https://openai.com/index/sycophancy-in-gpt-4o/
  • WWW.PCWORLD.COM
    Best laptops: Our experts pick the top 12 models
Choosing a laptop doesn't have to be an overwhelming process. Whether you're diving into the latest games, tackling school assignments, or just casually browsing the web, the key is finding a laptop that fits in with your lifestyle and routine. We've reviewed the best laptops in every category, from lightweight Chromebooks to powerful gaming rigs and everything in between. Our goal is to make your decision easy, with clear recommendations based on what actually matters in real life. Dell Inspiron 14 Plus (2024) – Best laptop overall Pros Strong performance Exceptional battery life Wonderful typing experience Cons CPU throttles under heavy loads No user upgrades Who should buy the Dell Inspiron 14 Plus? If you want a laptop that truly does it all and does it well, the Dell Inspiron 14 Plus is the one to beat. This laptop nails the essentials with style, speed, and stamina. It's a top pick for anyone who needs dependable performance without being tethered to an outlet all day. The 14-inch form factor also hits the sweet spot between portability and screen space, making it perfect for either work or play. One of the biggest selling points is the seriously impressive 17-hour battery life. Whether it's a long study session or a long workday, this laptop will power along with you. And with a price tag around $1,000, it delivers incredible value for the performance you're getting. Beyond the long battery life, the 14-inch 2560×1600 display comes with an anti-glare coating and a peak brightness of 418 nits, making it comfortable to use in different lighting environments. Dell Inspiron 14 Plus: Further considerations The conservative design might not appeal to users looking for more pizzazz. While integrated graphics are fine for daily use, power users may want to look elsewhere for a laptop that can handle heavier workloads. For most users, though, this laptop ticks nearly every box. Read our full Dell Inspiron 14 Plus review Asus Zenbook 14 OLED – Best OLED laptop Pros Attractive OLED touchscreen Good CPU and integrated GPU performance Outstanding battery life Cons Blah design Keyboard isn't memorable Mediocre connectivity options Who should buy the Asus Zenbook 14 OLED? Anyone would be happy with the Asus Zenbook 14 OLED–it nails the vital aspects, especially in the display and battery departments. The 14-inch 1920×1200 OLED panel is deliciously vivid, delivering rich colors and deep contrast, which is great for creators and editors. It's also fast (thanks to the Intel Core Ultra 7 155H processor) and lightweight (2.82 pounds), and the 75 watt-hour battery churned out 17 hours of charge. That's not bad for the $850 price tag. It's a fantastic notebook that would work great for anyone, especially if you want vivid visuals from an OLED panel. Asus Zenbook 14 OLED: Further considerations The Asus Zenbook 14 OLED would have been our top pick, but it fell short in a few areas. For instance, the port selection is more limited–no Ethernet and fewer USB-A ports. The reflective display also makes it harder to use outdoors or in bright rooms. Finally, the Dell Inspiron 14 Plus (our current top pick) has slightly better battery life and performance. Read our full Asus Zenbook 14 OLED review Acer Aspire Go 15 – Best budget laptop Pros Affordable Decent battery life Good display visibility Cons Big and bulky Cheap build Limited performance Who should buy the Acer Aspire Go 15? The Acer Aspire Go 15 is the must-have laptop for budget-conscious buyers who just need the basics.
The Intel Core i3-N305 processor handles everyday tasks like browsing and word processing with ease. Battery life is also close to 12 hours on a single charge. The appeal mostly lies in its value, though. While more expensive laptops nail the polish and the speed, the budget variety is strictly about what’s functional and that’s exactly what you’re getting here. It’s a good option for students or anyone seeking a reliable, no frills machine under $500. You’ll also find a surprisingly generous port selection on the Acer Aspire Go 15–USB-A on both sides, a USB-C, an HDMI, a 3.5mm headphone jack, and a Kensington lock. That’s more than what some laptops get twice the price. Acer Aspire Go 15: Further considerations Like many budget-friendly laptops, the Aspire Go 15 comes with a few trade-offs. The plastic chassis helps keep the cost down, and while it weighs a bit over four pounds, it’s still manageable for day-to-day portability. The 1920×1080 display is also pretty dim (250 nits), so it’s better suited for indoor use due to its 250 nit brightness, but it still delivers sharp visuals for everyday tasks. That said, if you’re looking to get solid utility at a great price, the Acer Aspire Go 15 is the total package. Read our full Acer Aspire Go 15 (2024) review Lenovo ThinkPad T14s Gen 6 – Best battery life Pros Remarkable battery life Sturdy, lightweight design High-visibility display Cons Variable performance trails competitors A bit pricier than the competition Who should buy the Lenovo ThinkPad T14s Gen 6? The Lenovo ThinkPad T14s Gen 6 is a great option for anyone who needs a reliable, long-lasting laptop. Weighing just 2.66 pounds and offering an incredible battery life of nearly 24 hours, it’s ideal for people who are always on the move. Plus, with a Snapdragon X Elite processor running the show, it offers the perfect blend of portability, endurance, and capable everyday performance. The build quality is also standout, with the chassis being notably sturdy, and the keyboard offers a delightfully tactile typing experience ThinkPads are known for. Lenovo ThinkPad T14s Gen 6: Further considerations The one area where this laptop falls a bit short is the display. While the 1920×1200 IPS screen is perfectly usable for productivity, it lacks the richness and contrast of an OLED panel. So if you’re doing color-sensitive creative work, you may want to look elsewhere. But if long battery life and portability matter more to you, then the ThinkPad T14s is the way to go. Read our full Lenovo ThinkPad T14s Gen 6 review Asus Chromebook Plus CX34 – Best Chromebook Pros Zippy processor performance Nice keyboard A wide array of connectivity options Chic design Cons Battery life isn’t competitive The display’s 16:9 aspect ratio feels a little cramped Who should buy the Asus Chromebook Plus CX34? The Asus Chromebook Plus CX34 is great for everyday users looking for a reliable yet stylish device. It stands out as the best overall Chromebook because it offers a harmonious marriage of performance, design, and affordability. Inside you’ll find an Intel Core i5-1335U processor, 8GB of RAM, and 128GB of SSD storage–in other words, it efficiently handles everyday tasks. The 14-inch 1080p display delivers sharp visuals, and the laptop includes a 1080p webcam for those web conferencing calls. The chic pearl colorway also adds a nice touch of elegance, making it suitable for personal or professional environments. 
Asus Chromebook Plus CX34: Further considerations While the Asus Chromebook Plus CX34 offers smooth performance and a pretty design, there are minor trade-offs to be aware of, like the non-competitive battery life (13 hours) and the lack of a touchscreen. Read our full Asus Chromebook Plus CX34 review.
MacBook Air (M3) – Best MacBook
Pros: Excellent battery life; 256GB SSD is now two NAND chips, maintaining performance
Cons: Expensive memory upgrades, dual external display support requires closed lid
Who should buy the MacBook Air (M3)? The MacBook Air (M3) is a stellar option for anyone who wants a premium macOS experience without paying MacBook Pro prices. Starting at $1,299, it delivers fast performance for everyday tasks, light creative work, and multitasking–all in a fanless design that runs silently even under load. The Apple M3 chip brings performance on par with the base MacBook Pro (M3) model and, in testing, the battery lasted up to 19 hours on a single charge. If you're looking for a powerful yet quiet macOS laptop with plenty of endurance to spare, this one hits the sweet spot. The 15-inch Liquid Retina display (2880×1864 resolution) doesn't quite match the brightness or contrast of the mini-LED panel found in the MacBook Pro, but it's still sharp and vibrant. The Air is also impressively thin and lightweight (3.3 pounds!), so it's pretty darn portable.
MacBook Air (M3): Further considerations If you're after high-end performance for professional-level workloads like 3D rendering or heavy video editing, a MacBook Pro with active cooling might be a better fit. However, for most users, the MacBook Air (M3) delivers good performance, long battery life, and an elegant design. Read our full MacBook Air (M3) review.
Lenovo Legion 5i – Best gaming laptop
Pros: Great GeForce RTX 4060 performance, solid build quality, nice cooling and vent positioning
Cons: Display is a little dim
Who should buy the Lenovo Legion 5i? The Lenovo Legion 5i is a solid mid-range pick for gamers who want strong gaming performance and a fast display. With an Intel Core i9-14900HX CPU and an Nvidia RTX 4060 GPU under the hood, it delivers the power needed for smooth gameplay as well as lightning-fast load times. The spacious 16-inch display (2560×1600 resolution, 165Hz refresh rate) is another highlight. It offers crisp visuals and fluid motion, which is perfect for immersive single-player games and competitive gaming. In addition to being fast, the display also produces vibrant colors. For the price ($1,399 as tested), it offers an impressive balance of performance and value.
Lenovo Legion 5i: Further considerations The Lenovo Legion 5i doesn't just bring blazing-fast performance to the table, it also includes thoughtful features like a 1080p webcam with an electronic shutter switch and a full-size keyboard with a number pad and four zones of LED lighting. The webcam is perfect for streaming and the number pad allows for quick access to numeric inputs. While this laptop hits the mark for most gamers, those who want more graphics firepower and higher frame rates should consider springing for a laptop with RTX 4070 graphics or higher. Read our full Lenovo Legion 5i Gen 9 review.
Acer Nitro V 16 – Best budget gaming laptop
Pros: Solid performance, fast 165Hz display with good colors
Cons: Fully plastic build, mushy keyboard
Who should buy the Acer Nitro V 16? The Acer Nitro V 16 is a fantastic pick for gamers who want good performance and a fast display without breaking the bank.
With an Nvidia RTX 4060 GPU, an AMD Ryzen 7 8845HS CPU, and a 16-inch 1920×1200 IPS display running at 165Hz, it can comfortably handle most modern games on High settings.
Acer Nitro V 16: Further considerations The Nitro V 16 really embraces the gaming aesthetic, with its angular lines and backlit keyboard that glows like embers in a fireplace. If you're into that bold style, you'll love it, but if you prefer a more understated design, it might not be for you. Battery life is also limited to about four hours, which is typical for a laptop in this category, but it's something to keep in mind. Bottom line? If your priority is strong gaming performance at an affordable price, the Nitro V 16 offers a lot of bang for your buck. Read our full Acer Nitro V 16 review.
Asus ProArt P16 – Best content creation laptop
Pros: Big touchpad with virtual scroll wheel, gorgeous 4K OLED display, more connectivity than the competition, good battery life
Cons: Chassis is light but doesn't look remarkable, CPU performance falls behind the best, can get hot under load
Who should buy the Asus ProArt P16? The Asus ProArt P16 is a top-tier choice for creative professionals and prosumers who prioritize display quality, connectivity, and performance. With its stunning 16-inch OLED display (3840×2400 resolution, 16:10 aspect ratio), it's definitely well suited to photo editing tasks and video production. It also boasts an Intel Core i9-13980HX CPU and an Nvidia GeForce RTX 4070 GPU, a powerhouse combination that delivers desktop-class performance. Battery life is another strong point, with the ProArt P16 lasting over nine hours on a single charge. It also includes USB 4.0 support via one of its USB-C ports, which enables speeds up to 40Gbps, making high-speed file transfers possible.
Asus ProArt P16: Further considerations Performance and display quality are the standout features, but there's something compelling about the understated design. Some may find the all-black chassis too plain, while others find it elegant. Under more demanding workloads, the chassis can also run a bit warm. Still, the ProArt P16 is an excellent fit for anyone who needs a serious workhorse. Read our full Asus ProArt P16 review.
Asus Zenbook S 14 – Best ultraportable
Pros: Transcendent battery life, large OLED screen, great audio
Cons: Keyboard needs more key travel, performance needs improvement
Who should buy the Asus Zenbook S 14? The Asus Zenbook S 14 is a standout ultraportable for those who want a lightweight design, all-day battery life, and premium display quality. Weighing just 2.65 pounds–lighter than the 13-inch MacBook Air–it's a great pick for regular travelers and commuters. Despite its slim build, this laptop delivers surprising endurance. Its 73 watt-hour battery lasted an impressive 21 hours in testing, and it comes paired with a vibrant 14-inch (2880×1800 resolution, 120Hz refresh rate) OLED display.
Asus Zenbook S 14: Further considerations The Asus Zenbook S 14 offers more than just its slender build and extended battery life. The built-in audio is a pleasant surprise, as it delivers rich, clear sound, making it a solid option for media consumption. While it's not exactly a powerhouse for resource-intensive creative work, the Zenbook S 14 really excels in portability and endurance.
Read our full Asus Zenbook S 14 (UX5406SA) review.
HP OmniBook Ultra Flip 14 – Best 2-in-1 laptop
Pros: OLED touchscreen looks great, nice solid build, long battery life
Cons: Few ports in odd spots, expensive
Who should buy the HP OmniBook Ultra Flip 14? If you're in the market for a premium 2-in-1, the HP OmniBook Ultra Flip 14 is one of the best options available today. It's perfect for anyone who wants the flexibility of a convertible form factor and the reliability of a long-lasting machine. The 360-degree hinge feels both sturdy and smooth, and the 14-inch 2880×1800 OLED touchscreen is vibrant and bright enough (500 nits!) for indoor use. The battery life is exceptional as well–lasting up to 17.5 hours on a single charge. Typing on the OmniBook Ultra Flip 14 is a real pleasure thanks to its responsive keyboard and clearly labeled keys. Not only do they have satisfying travel, but the bold lettering improves visibility. These minute details really enhance day-to-day usability.
HP OmniBook Ultra Flip 14: Further considerations The HP OmniBook Ultra Flip 14 is a premium product, there's no doubt about that. The design is an interesting blend of sustainability and durability–the chassis is made of 85 percent PIR metal, and five percent PCR (post-consumer recycled) material goes into the top cover and keyboard deck. That said, we've got one minor nitpick. The port placement is a bit unconventional, with both USB-C ports located at the rear corners of the machine. It's just something to be aware of, as the port placement may not suit every setup. Read our full HP OmniBook Ultra Flip 14 review.
Framework Laptop 13 – Best laptop for upgrading
Pros: Customizable, repairable, and upgradeable; detailed repair documentation
Cons: On the expensive side for the specs, not the best battery life
Who should buy the Framework Laptop 13? The Framework Laptop 13 is an awesome choice for tech-savvy users who value repairability and long-term sustainability. If you want a laptop you can upgrade over time rather than replace, this is one of the most compelling options you can find right now. Nearly every component of the Framework Laptop 13 is modular and user-replaceable. Each part is labeled with a QR code linking directly to guides and replacement listings on Framework's website. You can even configure the port layout using swappable Expansion Cards, which are small rectangular modules that slide into the chassis like Lego pieces, allowing you to mix USB-C, HDMI, and so on. The Framework Laptop 13 is also surprisingly lightweight (2.87 pounds) for a laptop with this level of flexibility. It also handles general productivity tasks with ease thanks to the Intel Core Ultra 7 155H processor.
Framework Laptop 13: Further considerations Performance is more mid-range than high-end, and the pricing can feel steep when compared to traditional laptops with similar specs. The value here lies in its longevity–you're buying a laptop that can evolve and change over time rather than a device with a set expiration date. Read our full Framework Laptop 13 review.
Other products tested While these laptops didn't make PCWorld's top picks list, they're still noteworthy options that may appeal to certain folks. The Asus Zenbook A14 impressed us with its vibrant OLED touchscreen, robust build quality, and amazing battery life. For environmentally conscious buyers, the Acer Aspire Vero 16 stands out with a chassis made from PCR and other bio-based materials.
Finally, if you're someone who's always on the go, the Samsung Galaxy Book5 Pro offers a 16-inch 2880×1800 AMOLED 120Hz touchscreen and an impressive 23 hours of battery life.
How we test laptops The PCWorld team puts each and every Windows laptop through a series of benchmarks that test GPU and CPU performance, battery life, and so on. The idea is to push the laptop to its limits and then compare it against others we've tested. Chromebooks, on the other hand, go through a series of web-based tests. For a much deeper look at our review methodology, check out how PCWorld tests laptops.
Why you should trust PCWorld for laptop reviews and buying advice It's in our name! PCWorld prides itself on laptop experience and expertise. We've been covering PCs since 1983, and we now review more than 70 laptops every year. All of the picks above have been personally tested and vetted by our experts, who've applied not only performance benchmarks but rigorous usability standards. We're also committed to reviewing PC laptops at every price point to help you find a machine that matches your budget.
Who curated this article? This article was curated by Ashley Biancuzzo, who oversees all of PCWorld's laptop and Chromebook review coverage. Ashley has been immersed in the ever-changing world of consumer technology and brings a keen editorial eye to every review. She specializes in evaluating laptops across a wide range of categories–from budget-friendly models to high-end powerhouses.
How to choose the best laptop
What form factor is best for a laptop? Traditional clamshells are great for general use, while 2-in-1 convertibles offer flexible designs with displays that rotate 360 degrees. Chromebooks, on the other hand, are a budget-friendly option best suited for everyday web-based tasks.
How much processing power do you need? It depends on your workload. For everyday use, an Intel Core i5 (11th gen or later) or AMD Ryzen 5 (4000 series or later) is solid. If you're into creative tasks like video editing, go for an Intel Core i7/i9 or Ryzen 7/9. For 4K video editing or heavy multitasking, a Ryzen 9 is ideal.
Discrete graphics vs. integrated graphics? If you're into gaming or video editing, you'll want discrete graphics (like Nvidia or AMD cards) for better performance. For basic tasks like browsing or streaming, integrated graphics will do just fine.
How much RAM? 8GB of RAM is zippy enough for general use. If you've got a gaming laptop, 16GB of RAM is the way to go, with 32GB being a future-proof configuration. Content creators will want as much as possible.
What's the right display size? If you're a video editor or someone who does a lot of multimedia work, you'll want a display that's anywhere from 15 to 17 inches. For most people, though, the sweet spot is 13 to 14 inches. The bigger the display, the heavier your laptop is going to be, so a 13- or 14-inch display is the best in terms of portability and value.
Battery-life expectations If you plan on taking your laptop anywhere with you, aim for something that can last 10 to 12 hours on a single charge. That's more than a full workday, so it should theoretically get you through long flights or a day of classes. That said, many of the newest Snapdragon-powered Windows laptops are pushing well past that number, with one of them offering up to 24 hours of battery life on a single charge–this is due to the chip's ultra-efficient Arm-based architecture. Just know that the bigger the battery, the heavier the laptop. Read our roundup of the best laptop chargers.
Laptop pricing guide Many good laptops cost around $500 to $750, but the price really depends on your budget. If you're strapped for cash (been there, trust me), go for a Chromebook or an entry-level business laptop. You can find solid options for under $500. Spending $750 to $1,000 can get you better displays, additional performance, more storage, and nicer designs. If you splurge for a laptop that costs over $1,000, you're usually paying up for premium build quality, great extras, and top-shelf performance. Gaming laptops are different. You can sometimes find gaming laptops with entry-level discrete graphics on sale for around $850, but you'll usually need to spend at least $1,000 for a system with decent 1080p gaming chops. You can pay more, often much more, for better graphics firepower and nicer displays, but the costs can rise rapidly depending on your hardware of choice. Some fully loaded gaming laptops can go for multiple thousands of dollars, but you're getting the equivalent of a desktop replacement in return. Spending $1,200 to $2,000 usually gets you a very good gaming laptop.
Don't forget the ports A wide array of ports is always a plus in my book, as it eliminates the need for an adapter. I'd recommend a laptop that has both USB-C and USB-A ports. An HDMI port is good, too, especially when you want to hook up to an external monitor.
FAQ
1. What is the best laptop? The Dell Inspiron 14 Plus (2024) stands out as the best overall choice for most people. Priced at around $1,000, it delivers reliable performance, exceptional battery life (17 hours on a single charge!), and a vibrant 14-inch 2560×1600 display.
2. What is the best cheap laptop? The Acer Aspire Go 15 is PCWorld's top budget pick because of its reliable performance and low price point. It features an Intel Core i3-N305 processor, a sharp 1080p display, and surprisingly good battery life.
3. What is the best gaming laptop? The Lenovo Legion 5i exhibits a fantastic balance of performance and value. Powered by an Intel Core i9-14900HX CPU and an Nvidia RTX 4060 GPU, it delivers top-tier gaming performance, handling demanding titles like Metro Exodus at an average of 41 frames per second. The 16-inch IPS display boasts a 2560×1600 resolution and a 165Hz refresh rate, meaning you can expect smooth gameplay. While the display isn't as vibrant as an OLED panel, it still offers a great picture at a competitive price.
4. When is the best time to buy a laptop? The best time to buy a laptop usually falls during major sales events like Black Friday and Cyber Monday. Back-to-school season (late summer to early fall) is also a great time to buy a laptop, as many retailers target students.
5. What is a 2-in-1 laptop? A 2-in-1 laptop (also known as a convertible) is a device that combines the functionality of a traditional laptop (also known as a clamshell laptop) with the versatility of a tablet. These laptops feature a touchscreen display that can fold back, rotate, or detach. It's ideal for those who need a full keyboard for productivity and a tablet for browsing the web or doodling. They're pretty popular among students and creatives who want the best of both worlds.
  • WWW.TOMSHARDWARE.COM
    Save $500 on this Nvidia RTX 5090 edition Alienware Area-51 gaming PC
    Dell's Alienware Area-51 gaming PC with Nvidia GeForce RTX 5090 has $500 slashed off the list price.
  • WWW.NEOWIN.NET
    Microsoft reports strong Q3 FY2025 results, revenue reaches $70.1 billion
David Uzondu, Neowin · Apr 30, 2025 Microsoft just wrapped up its third quarter of fiscal 2025 with another solid flex, beating expectations yet again. Revenue hit $70.1 billion, up 13% from last year (or 15% if you factor out currency swings), while earnings per share climbed 18% to $3.46. Azure and the broader cloud services arm clocked in with the same 33% growth rate as Q1 (35% when adjusted for currency), pulling ahead of Q2’s 31% and reinforcing a steady upward trend. CEO Satya Nadella pointed to AI as the game-changer here. Businesses are leaning harder on Microsoft’s AI stack to work smarter and trim costs, and it’s showing in the numbers. The Productivity and Business Processes unit, home to Microsoft 365 and Dynamics 365, pulled in $29.9 billion, a 10% lift from last year (13% with currency adjustments). Microsoft 365 Commercial posted an 11% increase, while Dynamics 365 outpaced it with 16%. LinkedIn is still growing, but at a slower 7%. As for the More Personal Computing division, it brought in $13.4 billion, up 6%. Windows OEM crept up 3%, Xbox content and services added 8%, and search and news ads were the surprise standout, jumping 21% once you strip out traffic costs. These gains helped keep the segment in the green, with Windows and Xbox pulling more than their weight. Financially, Microsoft converted its top-line strength into $37 billion of operating cash flow, even as it invested $16.7 billion in property and equipment to expand its datacenter footprint for AI. CFO Amy Hood highlighted that Microsoft Cloud alone generated $42.4 billion this quarter, up 20% (22% in constant currency), fueling both research and development and shareholder returns. The company returned $9.7 billion to shareholders through dividends and share buybacks. Looking forward, while this quarter's results look solid, Redmond made sure to mention that the road ahead has its potential bumps. It pointed out that actual results could differ materially from expectations because of things like "intense competition in all of our markets," the simple fact that "significant investments in products and services may not achieve expected returns," and the ever-present threat of "cyberattacks and security vulnerabilities." The company also noted that changes in "government enforcement under competition laws" or "laws and regulations relating to the handling of personal data" could impact its business. More details on Microsoft's third-quarter FY2025 results are available on the company's official website.
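For readers who like to sanity-check the percentages, here is a minimal Python sketch that backs out the implied year-ago baselines from the figures reported above. The derived numbers are approximations computed from the reported values and growth rates, not figures taken from Microsoft's filings.

```python
# Back-of-the-envelope check of the growth figures reported above.
# The year-ago baselines are derived from the reported numbers and
# rounded growth rates, so treat them as rough approximations.

def implied_prior(current: float, growth_pct: float) -> float:
    """Infer the year-ago figure from the current value and YoY growth."""
    return current / (1 + growth_pct / 100)

revenue_q3_fy25 = 70.1   # $ billions, reported
eps_q3_fy25 = 3.46       # $ per share, reported
cloud_q3_fy25 = 42.4     # $ billions, Microsoft Cloud revenue, reported

print(f"Implied Q3 FY2024 revenue: ~${implied_prior(revenue_q3_fy25, 13):.1f}B")
print(f"Implied Q3 FY2024 EPS:     ~${implied_prior(eps_q3_fy25, 18):.2f}")
print(f"Implied year-ago cloud:    ~${implied_prior(cloud_q3_fy25, 20):.1f}B")
# Prints roughly $62.0B revenue, $2.93 EPS, and $35.3B cloud revenue.
```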
  • SLASHDOT.ORG
    Microsoft Puts Brakes on AI Spending as Profit Increases 18%
    After 10 consecutive quarters of rising AI-related investment, Microsoft has put on the brakes, spending over $1 billion less than the previous quarter (source paywalled; alternative source). Despite the slight slowdown, Microsoft posted stronger-than-expected results with $70 billion in revenue and $25.8 billion in profit. The New York Times reports: In the first three months of 2025, Microsoft spent $21.4 billion on capital expenses, down more than $1 billion from the previous quarter. The company is still on track to spend more than $80 billion on capital expenses in the current fiscal year, which ends in June. But the pullback, though slight, is an indication that the tech industry's appetite for spending on A.I. is not limitless. Overall, Microsoft's results showed unexpected strength in its business. Sales surpassed $70 billion, up 13 percent from the same period a year earlier. Profit rose to $25.8 billion, up 18 percent. The results far surpassed Wall Street's expectations. "Cloud and A.I. are the essential inputs for every business to expand output, reduce costs, and accelerate growth," Satya Nadella, Microsoft's chief executive, said in a statement. Read more of this story at Slashdot.