• Tesla sales drop 35% in San Diego County
    fox5sandiego.com
    submitted by /u/chrisdh79
  • Buy now, pay later . . . for a burrito?
    techcrunch.com
    In Brief · Posted 2:56 PM PDT, March 23, 2025

    In 2010, a programmer who was mining bitcoin famously made the comically expensive mistake of spending 10,000 bitcoin on two pizzas. As of this writing, those coins would be worth $850 million.

    While few miscalculations compare to that one, the prospect of adding late fees to fast-food orders is raising concerns nonetheless. Under a partnership announced earlier this week between DoorDash and Klarna, customers can now buy a burrito or a McDonald's order and pay for it across four interest-free installments. The deal gives diners who spend at least $35 more flexibility, both companies say. But customers who defer payment on a fast-food delivery are at significantly higher risk of missing one of those interest-free installment payments.

    Indeed, to some, the new partnership is yet another troubling economic sign of the times. As Chuck Bell of Consumer Reports told the New York Times: "If you don't pay the bill on time and you start getting multiple late fees, it could end up being a very expensive chile relleno or pad Thai."
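The arithmetic behind a "pay in 4" plan like this one is simple to sketch. The function below is purely illustrative (the $7 late fee is a hypothetical figure, not Klarna's actual fee schedule): the order total splits into four equal interest-free installments, and each missed payment adds a flat fee.

```python
# Toy "pay in 4" schedule. The late_fee value is a made-up example,
# not Klarna's real fee; the point is how quickly fees inflate a small order.
def pay_in_four(total, missed_payments=0, late_fee=7.00):
    """Return (installment, effective_total) for a four-part split."""
    installment = round(total / 4, 2)
    effective_total = round(total + missed_payments * late_fee, 2)
    return installment, effective_total

# A $35 order (the deal's minimum) splits into $8.75 payments...
print(pay_in_four(35.00))                      # (8.75, 35.0)
# ...but two missed payments at a hypothetical $7 each make it a $49 burrito.
print(pay_in_four(35.00, missed_payments=2))   # (8.75, 49.0)
```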
  • The AirPods we recommend the most are on major sale before Amazon's Spring Sale
    www.zdnet.com
    The AirPods Pro 2 are our pick for the best AirPods you can buy -- and they're on sale for the lowest price we've seen in several months.
  • New Pixel 10 Pro Details Confirm Google's Powerful Upgrade
    www.forbes.com
    [Image: Pixel 9 Pro XL. Credit: Ewan Spence]

    Following the Pixel 9a launch, Google will focus on the next version of Android and the upcoming Pixel 10 and Pixel 10 Pro family. This weekend saw its hardware and software strands cross over with the promise of faster code and improved performance.

    Pixel 10 Pro Software Updated

    The new details come from within the Android Open Source Project code, which manufacturers, including Google, use to build their own Android versions. Thanks to notes left by a Google engineer alongside the code change, we know more about the gains to expect in the Pixel 10 family. More performance has been found in part of the parallel module loading routine, which triggers when a device boots up: "Test: Pixel 10 reduces 30% loading time, and Pixel Fold reduces 25%."

    Given that the Pixel 9, 9 Pro, and 9 Pro XL all ran the same Tensor G4 Mobile chipset when they launched last year, it's safe to assume that the new Tensor G5 will be standard across the new devices, in which case that 30% improvement is likely to apply across the board.

    Faster Pixel 10 Pro Booting

    The note that the Fold's speed increase is smaller could point to two readings. The first is that the code is running on the older Pixel Fold, which wasn't branded with a model number when it was released in 2023. By the same reading, it could be last year's Pixel 9 Pro Fold. I'm more inclined to think this is the presumptively named Pixel 10 Pro Fold, and the engineer is simply using "10" for the whole family and "Fold" for the outlier. Given the smaller internal footprint of a foldable device, which can lead to physical performance constraints, the lower gain does feel reasonable.

    Pixel 10 Pro Speed For All

    Given that these commits are to the main branch of AOSP, these benefits will likely cascade through the entire Android ecosystem over the next 12 to 18 months. However, with Pixel handsets typically the first to ship with the latest version of Android, Google will once more use the Pixel range to show where it believes the platform should be moving.

    Now read the latest Pixel 10 Pro, Galaxy S26 and Android smartphone headlines in Forbes' weekly news digest...
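The engineer's note credits the gain to loading boot modules in parallel. As a toy illustration of why that helps (this is generic Python, not the actual AOSP code), loading independent modules concurrently lets their wait times overlap instead of adding up:

```python
import concurrent.futures
import time

def load_module(name, seconds):
    # Stand-in for loading one boot module; sleep() simulates I/O-bound work.
    time.sleep(seconds)
    return name

modules = [("camera", 0.2), ("audio", 0.2), ("radio", 0.2)]

start = time.perf_counter()
for m in modules:                 # sequential loading: waits add up (~0.6 s)
    load_module(*m)
sequential = time.perf_counter() - start

start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor() as pool:
    # parallel loading: waits overlap (~0.2 s)
    list(pool.map(lambda m: load_module(*m), modules))
parallel = time.perf_counter() - start

print(f"sequential={sequential:.2f}s parallel={parallel:.2f}s")
```

The real win depends on how many modules are independent of one another; modules with dependencies still have to wait, which is one plausible reason different devices see different percentage gains.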
  • The Gaping Hole In Today's AI Capabilities
    www.forbes.com
    "Once you stop learning, you start dying." (Albert Einstein. Image source: Wikipedia)

    The pace of improvement in artificial intelligence today is breathtaking. An exciting new paradigm, reasoning models based on inference-time compute, has emerged in recent months, unlocking a whole new horizon for AI capabilities. The feeling of a building crescendo is in the air. AGI seems to be on everyone's lips.

    "Systems that start to point to AGI are coming into view," wrote OpenAI CEO Sam Altman last month. "The economic growth in front of us looks astonishing, and we can now imagine a world where we cure all diseases and can fully realize our creative potential."

    Or, as Anthropic CEO Dario Amodei put it recently: "What I've seen inside Anthropic and out over the last few months has led me to believe that we're on track for human-level AI systems that surpass humans in every task within 2-3 years."

    Yet today's AI continues to lack one basic capability that any intelligent system should have. Many industry participants do not even recognize that this shortcoming exists, because the current approach to building AI systems has become so universal and entrenched. But until it is addressed, true human-level AI will remain elusive.

    What is this missing capability? The ability to continue learning.

    What do we mean by this? Today's AI systems go through two distinct phases: training and inference. First, during training, an AI model is shown a large body of data from which it learns about the world. Then, during inference, the model is put into use: it generates outputs and completes tasks based on what it learned during training.

    All of an AI's learning happens during the training phase. After training is complete, the model's weights become static. Though the AI is exposed to all sorts of new data and experiences once it is deployed in the world, it does not learn from this new data. In order for an AI model to gain new knowledge, it typically must be trained again from scratch.
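The train-then-freeze split described above can be sketched in a few lines. This is a toy stand-in (a one-parameter linear model, not any production system): the model is fit once, and at inference its weights never change, even when the data it sees comes from a changed world.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training phase": fit a tiny linear model on data drawn from y = 2x, once.
X = rng.normal(size=(100, 1))
y = 2.0 * X[:, 0]
w, *_ = np.linalg.lstsq(X, y, rcond=None)   # learned weights

# "Inference phase": the weights are now frozen. The model sees new data
# from a changed world (y = 5x) but, by design, never updates w.
w_before = w.copy()
X_new = rng.normal(size=(10, 1))
y_new = 5.0 * X_new[:, 0]
predictions = X_new @ w                     # predicts with stale knowledge

assert np.allclose(w, w_before)             # nothing was learned at inference
print("learned weight:", w[0])
```

Updating `w` on the new data would be trivial here; for a frontier model, the analogous update is a multi-month, nine-figure training run, which is the author's point.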
    In the case of today's most powerful AI models, each new training run can take months and cost hundreds of millions of dollars.

    Take a moment to reflect on how peculiar, and suboptimal, this is. Today's AI systems do not learn as they go. They cannot incorporate new information on the fly in order to continuously improve themselves or adapt to changing circumstances.

    In this sense, artificial intelligence remains quite unlike, and less capable than, human intelligence. Human cognition is not divided into separate training and inference phases. Rather, humans continuously learn, incorporating new information and understanding in real time. (One could say that humans are constantly and simultaneously doing both training and inference.)

    What if we could eliminate the kludgy, rigid distinction in AI between training and inference, enabling AI systems to continuously learn the way that humans do? This basic concept goes by many different names in the AI literature: continual learning, lifelong learning, incremental learning, online learning. It has long been a goal of AI researchers, and has long remained out of reach. Another term has emerged recently to describe the same idea: test-time training.

    As Perplexity CEO Aravind Srinivas said recently: "Test-time compute is currently just inference with chain of thought. We haven't started doing test-time-training - where the model updates weights to go figure out new things or ingest a ton of new context, without losing generality and raw IQ. Going to be amazing when that happens."

    Fundamental research problems remain to be solved before continual learning is ready for primetime. But startups and research labs are making exciting progress on this front as we speak. The advent of continual learning will have profound implications for the world of AI.

    Workarounds and Half-Solutions

    It is worth noting that a handful of workarounds exist to mitigate AI's current inability to learn continuously. Three in particular are worth mentioning.
    While each of these can help, none fully solves the problem.

    The first is model fine-tuning. Once an AI model has been pretrained, it can subsequently be fine-tuned on a smaller amount of new data in order to incrementally update its knowledge base. In principle, fine-tuning a model on an ongoing basis could be one way to enable an AI system to incorporate new learnings as it goes. However, periodic fine-tuning is still fundamentally a batch-based rather than a continuous approach; it does not unlock true on-the-fly learning. And while fine-tuning a model is less resource-intensive than pretraining it from scratch, it is still complex, time-consuming and expensive, making it impractical to do too frequently. Perhaps most importantly, fine-tuning only works well if the new data does not stray too far from the original training data. If the data distribution shifts dramatically (for instance, if a model is presented with a totally new task or environment unlike anything it has encountered before), then fine-tuning can fall prey to the foundational challenge of catastrophic forgetting, discussed in more detail below.

    The second workaround is to combine some form of retrieval with some form of external memory: for instance, retrieval-augmented generation (RAG) paired with a dynamically updated vector database. Such AI systems can store new learnings on an ongoing basis in a database that sits outside the model and then pull information from that database when needed. This can be another way for an AI model to continuously incorporate new information. But this approach does not scale well. The more new learnings an AI system accumulates, the more unwieldy it becomes to store and retrieve all of this new information efficiently using an external database.
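A minimal sketch of this retrieval workaround looks like the following. All names here are illustrative; a toy list of strings stands in for the vector database, and string similarity stands in for embedding search. The key property is that the model's weights never change: new "knowledge" lives entirely outside the model and is retrieved into the prompt.

```python
from difflib import SequenceMatcher

memory = []  # the "vector database", reduced to a list of fact strings

def remember(fact):
    # New learnings go into external storage, not into model weights.
    memory.append(fact)

def retrieve(query, k=1):
    # Real systems embed the query and run approximate nearest-neighbor
    # search; plain string similarity stands in for that here.
    scored = sorted(
        memory,
        key=lambda f: SequenceMatcher(None, query, f).ratio(),
        reverse=True,
    )
    return scored[:k]

remember("The office moved to Berlin in 2025.")
remember("The Q3 budget was approved.")

# Retrieved facts would be prepended to the model's prompt as context.
context = retrieve("Where is the office now?")
print(context)
```

The scaling problem the article describes shows up directly in this sketch: as `memory` grows, storing, indexing, and ranking it efficiently becomes the hard part, and retrieval misses translate into knowledge the system effectively no longer has.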
    Latency, computational cost, retrieval accuracy and system complexity all limit the usefulness of this approach.

    A final way to mitigate AI's inability to learn continuously is in-context learning. AI models have a remarkable ability to update their behavior and knowledge based on information presented to them in a prompt and included within their current context window. The model's weights do not change; rather, the prompt itself is the source of learning. This is referred to as in-context learning, and it is what makes the practice of prompt engineering possible.

    In-context learning is elegant and efficient. It is also, however, ephemeral. As soon as the information is no longer in the context window, the new learnings are gone: for instance, when a different user starts a session with the same AI model, or when the same user starts a new session with the model the next day. Because the model's weights have not changed, its new knowledge does not persist over time. This severely limits the usefulness of in-context learning as a route to true continual learning.

    Moats, Moats, Moats

    One important reason why continual learning represents such a tantalizing possibility: it could create durable moats for the next generation of AI applications. How would this work?

    Today, OpenAI's GPT-4o is the same model for everyone who uses it. It doesn't change based on its history with you (although ChatGPT, the product, does incorporate some elements of persistent memory). This makes it frictionless for users to switch between OpenAI, Anthropic, Google, DeepSeek and so on.
    Any of these companies' models will give you more or less the same response to a given prompt, whether you've had thousands of previous interactions with it or you are trying it for the first time. Little wonder that the conventional wisdom today is that AI models inevitably commoditize.

    In a continual learning regime, by contrast, the more a user uses a model, the more personalized the model becomes. As you work with a model day in and day out, it becomes more tailored to your context, your use cases, your preferences, your environment. Its neurons literally get rewired as it learns about you and about the things that matter to you. It gets to know you. Imagine how much more compelling a personal AI agent would be if it reliably adapted to your particular needs and idiosyncrasies in real time, thereby building an enduring relationship with you. (For a dramatized illustration of what continual learning might look like, and how different this would be from today's AI, think of the Samantha character in the 2013 film Her.)

    The impact of continual learning will be enormous in both consumer and enterprise settings. A lawyer using a legal AI application will find that, after a few months of using the application, it has a much deeper understanding than it did at the outset of the lawyer's roster of clients, how she engages with different colleagues, how she likes to craft legal arguments, and when she chooses to push back on clients versus acquiesce to their preferences. A recruiter will find that, the more he uses an AI product, the more intuitively it understands which candidates he tends to prioritize, how he likes to conduct screening interviews, how he writes job descriptions, and how he engages in compensation negotiations.
    Ditto for AI products for accountants, doctors, software engineers, product designers, salespeople, writers, and beyond.

    Continual learning will enable AI to become personalized in a way that it has never been before. This will make AI products sticky in a way that they have never been before. After you've worked with it for a while, your AI model will be very different from someone else's version or the off-the-shelf version of the same model. Its weights will have adapted to you. This will make it painful and inconvenient to switch to a competing product, in the same way that it is painful and inconvenient to replace a well-trained, high-performing employee with someone who is brand new.

    Venture capitalists like to obsess over moats: durable sources of competitive advantage for companies. It remains an open question what the most important new moats will be in the era of AI, particularly at the application layer. A long-standing narrative about moats in AI relates to proprietary data. According to this narrative, the more user data an AI product collects, the better and more differentiated the product becomes as it learns from that data, and the deeper the moat therefore gets. This story makes intuitive sense and is widely repeated today.

    However, the extent to which collecting additional user data has actually led to product differentiation and moats in AI remains limited to date, precisely because AI systems do not actually learn and adapt continuously based on new data. How much lock-in do you as a user experience today with Perplexity versus ChatGPT versus Claude as a result of user-level personalization in those products?

    Continual learning will change this. It will, for the first time, unleash AI's full potential to power hyperpersonalized and hypersticky AI products. It will create a whole new kind of moat for the AI era.

    Continual Learning's Achilles Heel

    The potential upsides of continual learning are enormous.
    It would unlock whole new capabilities and market opportunities for AI.

    The idea of continual learning is not new; AI researchers have been talking about it for decades. So why are today's AI systems still not capable of learning continuously? One fundamental obstacle stands in the way: an issue known as catastrophic forgetting. Catastrophic forgetting is simple to explain and fiendishly difficult to solve.

    In a nutshell, catastrophic forgetting refers to neural networks' tendency to overwrite and lose old knowledge when they add new knowledge. Concretely, imagine an AI model whose weights have been optimized to complete task A. It is then exposed to new data related to completing task B. The central premise of continual learning is that the model's weights can update dynamically in order to learn to solve task B. By updating the weights to complete task B, however, the model's ability to complete task A inevitably degrades.

    Humans do not suffer from catastrophic forgetting. Learning how to drive a car, for instance, does not cause us to forget how to do math. Somehow, the human brain manages to incorporate new learnings on an ongoing basis without sacrificing existing knowledge. As with much relating to the human brain, we don't understand exactly how it does this. For decades, AI researchers have sought to recreate this ability in artificial neural networks, without much success. The entire field of continual learning can be understood first and foremost as an attempt to solve catastrophic forgetting.

    The core challenge is to find the right balance between stability and plasticity, because increasing one inevitably jeopardizes the other. As a neural network becomes more stable and less changeable, it is in less danger of forgetting existing learnings, but it is also less capable of incorporating new learnings.
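The task A / task B degradation described above can be demonstrated with a deliberately extreme toy: a single shared weight trained by gradient descent, with full plasticity and no protection at all for old knowledge. Task A is "predict y = 2x"; task B is "predict y = -2x"; the same weight cannot satisfy both.

```python
# Toy catastrophic-forgetting demo: one shared weight w, squared-error loss,
# plain gradient descent. Training on task B overwrites task A entirely.
def train(w, slope, steps=200, lr=0.1):
    for _ in range(steps):
        x = 1.0
        grad = 2 * (w * x - slope * x) * x   # d/dw of (w*x - slope*x)^2
        w -= lr * grad
    return w

w = train(0.0, slope=2.0)          # learn task A (y = 2x)
loss_a_before = (w - 2.0) ** 2     # task A error: essentially zero

w = train(w, slope=-2.0)           # now learn task B (y = -2x), same weight
loss_a_after = (w - 2.0) ** 2      # task A error: has blown up

print(f"task A loss before: {loss_a_before:.2e}, after: {loss_a_after:.2f}")
```

A real network has millions of weights and the collapse is rarely this total, but the mechanism is the same: gradients for the new task move shared parameters away from the values the old task depended on.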
    Conversely, a highly plastic neural network may be well positioned to integrate new learnings from new data, but it does so at the expense of the knowledge that its weights had previously encoded.

    Existing approaches to continual learning can be grouped into three main categories, each of which seeks to address catastrophic forgetting by striking the right balance between stability and plasticity.

    The first category is known as replay, or rehearsal. The basic idea behind replay-based methods is to hold on to and revisit samples of old data on an ongoing basis while learning from new data, in order to prevent the loss of older learnings. The most straightforward way to accomplish this is to store representative data points from previous tasks in a memory buffer and then to intersperse those old data points with new data when learning new things. A more complex alternative is to train a generative model that can produce synthetic data approximating the old data, and then use that model's output to replay previous knowledge without needing to store the earlier data points at all. The core shortcoming of replay-based methods is that they do not scale well, for a similar reason as the RAG-based workaround described above: the more data a continual learning system is exposed to over time, the less practicable it is to hold on to and replay all of that previous data in a compact way.

    The second main approach to continual learning is regularization.
    Regularization-based methods seek to mitigate catastrophic forgetting by introducing constraints into the learning process that protect existing knowledge: for example, by identifying model weights that are particularly important for existing knowledge and slowing the rate at which those weights can change, while allowing other parts of the neural network to update more freely. Influential algorithms in this category include Elastic Weight Consolidation (out of DeepMind), Synaptic Intelligence (out of Stanford) and Learning Without Forgetting (out of the University of Illinois). Regularization-based methods can work well under certain circumstances. They break down, though, when the environment shifts too dramatically, that is, when the new data looks totally unlike the old data, because their learning constraints prevent them from fully adapting. In short: too much stability, not enough plasticity.

    The third approach to continual learning is architectural. The first two approaches assume a fixed neural network architecture and aim to assimilate new learnings by updating and optimizing one shared set of weights. Architectural methods, by contrast, tackle incremental learning by allocating different components of an AI model's architecture to different realms of knowledge. This often involves dynamically growing the neural network by adding new neurons, layers or subnetworks in response to new learnings. One prominent example of an architectural approach is Progressive Neural Networks, which came out of DeepMind in 2016. Devoting different parts of a model's architecture to different kinds of knowledge helps mitigate catastrophic forgetting because new learnings can be incorporated while leaving existing parameters untouched.
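In the spirit of the architectural approach (and simplified far beyond what Progressive Neural Networks actually do), the grow-instead-of-overwrite idea can be sketched like this: parameters learned for task A are frozen, and a fresh set of parameters is allocated for task B, so old knowledge cannot be disturbed.

```python
# Toy sketch of an architectural approach to continual learning.
# Each task gets its own parameters; learning task B grows the model
# rather than updating the weights that encode task A.
model = {"task_a": {"w": 2.0, "frozen": True}}   # trained earlier, now frozen

def learn_new_task(model, name, w):
    # Growth instead of overwrite: allocate new parameters for the new task.
    # (Real methods also add lateral connections so new tasks can reuse
    # old features; that is omitted here.)
    model[name] = {"w": w, "frozen": True}

def predict(model, task, x):
    return model[task]["w"] * x

learn_new_task(model, "task_b", -2.0)
print(predict(model, "task_a", 3.0))   # task A is untouched
print(predict(model, "task_b", 3.0))   # task B works too
```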
    A major downside, though, is again scalability: if the neural network grows whenever it adds new knowledge, it will eventually become intractably large and complex.

    While replay-based, regularization-based and architecture-based approaches have all shown some promise over the years, none of these methods works well enough to enable continual learning at scale in real-world settings today.

    Making Continual Learning A Reality

    The past year, however, has seen an exciting new wave of progress in continual learning. The advent of generative AI and large language models is redefining what is possible in this field. Suddenly, it seems that AI models that can learn and adapt as they go may be around the corner. A few leading AI startups are at the vanguard of this fast-moving field. Two worth highlighting are Writer and Sakana.

    Writer is an enterprise AI platform with a long list of blue-chip Fortune 500 customers including Prudential, Intuit, Salesforce, Johnson & Johnson, Uber, L'Oreal and Accenture. Last November, Writer debuted a new AI architecture known as self-evolving models. "These models are able to identify and learn new information in real time, adapting to changing circumstances without requiring a full retraining cycle," the Writer team wrote. "A self-evolving model has the capacity to improve its accuracy over time, learn from user behavior, and deeply embed itself in business knowledge and workflows."

    How did Writer manage to build AI models that can learn continuously? How do the company's self-evolving models work? As a self-evolving model is exposed to new information, it actively self-reflects in order to identify where it has knowledge gaps. If it makes a mistake or fails a task, it reflects on what went wrong and generates ideas for improvement.
    It then stores these self-generated insights in a short-term memory pool within each model layer. Storing these learnings within the model's individual layers means that the model can instantly access and apply this information as it processes inputs, without needing to pause and query an external source. It also enables the information in the memory pool to directly shape the model's attention mechanism, making its responses more accurate and well-informed. And because the memory pools are fixed in size, they avoid the scalability challenges that plague earlier continual learning methods like replay. Rather than growing unmanageably large as the model accumulates more knowledge, these memory pools function like short-term scratchpads that are continuously updated.

    Then, periodically, when the model determines that it has accumulated enough important learnings in its short-term memory, it autonomously updates its weights using reinforcement learning in order to consolidate those learnings more permanently. Specifically, Writer's self-evolving models use a reinforcement learning methodology known as group relative policy optimization, or GRPO, made popular by DeepSeek.

    "Self-evolving models tackle catastrophic forgetting not by simply referencing the past, like replay-based methods, but by building a system that evolves gracefully: reflecting, remembering and adapting without losing its core," said Writer cofounder and CTO Waseem Alshikh. "It's not a total departure from continual learning's roots, but it's a fresh twist that leverages the latest in LLM self-improvement. This design reflects our belief that the future of AI isn't just about bigger models, but smarter, more adaptive ones. It's efficient and practical, especially for real-world applications where things change fast."

    Writer's self-evolving models are live in deployment with customers today.

    Another cutting-edge AI startup that is advancing the frontiers of continual learning is Sakana AI.
    Based in Japan, Sakana is an AI research lab founded by leading AI scientists from Google, including one of the co-inventors of the transformer architecture. In January, Sakana published new research on what it calls self-adaptive AI. Sakana's new methodology, named Transformer² ("Transformer Squared"), enables AI models to dynamically adjust their weights in real time depending on the task they are presented with.

    "Our research offers a glimpse into a future where AI models are no longer static," wrote the Sakana research team. "These systems will scale their compute dynamically at test-time to adapt to the complexity of tasks they encounter, embodying living intelligence capable of continuous change and lifelong learning. We believe self-adaptivity will not only transform AI research but also redefine how we interact with intelligent systems, creating a world where adaptability and intelligence go hand in hand."

    [Image: Overview of Sakana's Transformer² architecture. Source: Sakana AI]

    Transformer² works by first developing task-specific expert vectors within an AI model that are well suited to handle different topics (e.g., a vector for math, a vector for coding, and so on). At inference time, the system follows a two-step process (hence the name Transformer²) to self-adapt depending on the context. First, the system determines in real time which skills and knowledge (and thus which expert vectors) are most relevant given the current task.
    Second, the neural network dynamically amplifies some of its expert vectors and dampens others, modifying its base weights to tailor itself to its current situation.

    The Transformer² method has some thematic overlap with the architectural approaches to continual learning discussed above, as well as with mixture-of-experts (MoE) systems; all of these approaches involve modular subsystems of experts housed within an AI model. Transformer² performs impressively on key benchmarks like GSM8K and ARC, besting popular fine-tuning approaches like LoRA by a wide margin while requiring fewer parameters.

    In the words of Sakana research scientist Yujin Tang, who led this effort: "Transformer² is a lightweight, modular approach to adaptation. Unlike MoE, where experts emerge without explicit specialization, our method dynamically refines representations with true task-specific expertise. While not full continual learning yet, it's a crucial step toward AI that evolves in real time without catastrophic forgetting."

    Conclusion

    Today's AI models are static. Once deployed, they do not change when they are presented with new information. This is a remarkable shortcoming for any intelligent system to have, and a profound weakness of artificial intelligence compared to biological intelligence.

    But this is changing, quickly. At the frontiers of AI, researchers are developing new kinds of AI models that can learn and adapt throughout their lifetimes by continuously updating their weights. Whether you call this new paradigm self-evolving AI, self-adaptive AI, test-time training, or (the more traditional term) continual learning, it is one of the most exciting and important areas of research in AI today. It is rapidly erasing the conventional divide between training and inference and opening up an entire new vista of capabilities for AI.
    It is also enabling new sources of moats and defensibility for AI-native startups. Continual learning will upend established assumptions and redefine what is possible in AI. Watch this space.

    Note: The author is a partner at Radical Ventures, which is an investor in Writer.
  • Philips Hue might be adding a smart doorbell to its lineup
    www.digitaltrends.com
    Philips Hue might be expanding its lineup to include a smart doorbell, if a leak within the Hue iOS app is any indication. The iOS app added an option to install devices with or without a QR code. Choosing the latter option brings up a list of potential devices to install, including something called the "Hue Secure doorbell."

    That's a pretty solid indicator that Philips Hue has something up its sleeve, although there's yet to be an official announcement about the device. Until Hue provides specs, the best we can do is guess, but we can make a pretty solid guess based on the existing lineup. Philips Hue already has its Secure camera line: the Philips Hue Secure Floodlight Camera and the Secure Battery Camera. Both top out at 1080p, so we can assume the Hue Secure Doorbell will likely also support 1080p resolution, but not 4K. As for price, we wouldn't be surprised to see it land around the $200 mark.

    [Image: Hue Blog]

    The app did reveal some information about the doorbell. Most likely, it will need to be connected to the home's power supply instead of running on batteries, which means it will probably take the place of your existing doorbell. And since none of the other items in the Secure camera line work with Apple HomeKit, this new entry isn't likely to do so, either.

    This is an early leak, and the listing seems to have already been removed from the Hue app; we couldn't find it during our testing, but thanks to some eagle-eyed enthusiasts, we have screenshots of the app screens. It's possible this doorbell is an unannounced item intended to be part of the recent SmartThings collaboration. With any luck, Hue will make an official announcement regarding the doorbell soon.
  • Apple MacBook Air 13 (M4) vs. Microsoft Surface Pro 11: two diminutive laptops fight for the top
    www.digitaltrends.com
    It might seem strange to compare a clamshell laptop with a detachable tablet 2-in-1, but the fact is, if you're looking for a small yet powerful, highly portable PC, you have two great options. The Apple MacBook Air 13 (M4) is the best 13-inch laptop you can buy (maybe the best ever), and the Microsoft Surface Pro 11 is the best 2-in-1. Both are very small, both are fast, and both get great battery life, so either can serve anyone well who wants a real PC that feels more like a mobile device. But which one is the right choice for you?

    Specs and configurations

                   Apple MacBook Air 13 (M4)                 Microsoft Surface Pro 11
    Dimensions     11.97 x 8.46 x 0.44 inches                11.3 x 8.2 x 0.37 inches
    Weight         2.7 pounds                                1.97 pounds (tablet only)
    Processor      Apple M4 (10-core)                        Qualcomm Snapdragon X Plus or X Elite
    Graphics       8-core or 10-core GPU                     Qualcomm Adreno
    RAM            16GB, 24GB or 32GB unified memory         16GB or 32GB
    Display        13.6-inch 2560 x 1664 LED IPS, 60Hz       13-inch 2880 x 1920 IPS or OLED, 120Hz
    Storage        256GB, 512GB, 1TB or 2TB SSD              256GB, 512GB or 1TB SSD
    Touch          No                                        Yes
    Ports          2 x USB-C (Thunderbolt 4), MagSafe 3,     2 x USB4
                   3.5mm audio jack
    Wireless       Wi-Fi 6E, Bluetooth 5.2                   Wi-Fi 7, Bluetooth 5.4
    Webcam         12MP Center Stage camera with Desk View   12MP front camera, 10MP rear camera
    OS             macOS Sequoia                             Windows 11 on Arm
    Battery        53.8 watt-hours                           48 watt-hours
    Price          $999+                                     $1,300+
    Rating         5 out of 5 stars                          4.5 out of 5 stars

    The MacBook Air 13 (M4) starts at $999 with a 10-core CPU, 8-core GPU M4 chipset, 16GB of RAM, and a 256GB SSD. That's a break from the past, when Apple used to keep the previous-generation machine around at the lowest price.
    From there, it's $100 to upgrade to a faster M4 with a 10-core GPU, then $200 for 24GB of RAM or $400 for 32GB. Storage can be upgraded to 512GB for $200 and up to 2TB for an additional $800. The most expensive model is $2,199.

    The Surface Pro 11 starts at $999 with a Snapdragon X Plus chipset, 16GB of RAM, a 256GB SSD, and an IPS display, the same as the MacBook Air. The OLED version starts at $1,499 with a faster Snapdragon X Elite chipset. It, too, has a variety of configuration options, with the highest-priced model at $2,499 with OLED, 64GB of RAM, and a 1TB SSD. Right now, the Surface Pro 11 is on sale, something Apple rarely does with its current machines. So the prices are closer together, with the Surface Pro 11 being a bit less expensive depending on the configuration. Neither will be confused for a budget laptop.

    Design

    [Photo: Luke Larsen / Digital Trends]

    The MacBook Air 13 is very possibly the perfect 13-inch laptop design (maybe the perfect 14-inch laptop, depending on how you want to classify a laptop with a 13.6-inch display). It's incredibly thin, light but not flimsy, and it exudes quality. Just opening and closing the perfectly designed hinge provides a visceral impression of great manufacturing. At the same time, the Surface Pro 11 is equally well made. It's also all-metal, it has the best built-in kickstand on any tablet today, and it's thin and light enough to be very portable. Snapping on the keyboard makes it a little less thin and light, but it remains one of the easiest laptops to carry around.

    Both laptops also look great. The MacBook Air comes in several attractive colors and has a very cohesive aesthetic, shared with every MacBook made today, that is both minimalist and elegant. The Surface Pro 11 also comes in several colors, with detachable keyboards to match. It's a simple slate with rounded corners and an understated design, and it's very attractive as well.

    The difference, of course, is in their form factor.
The MacBook Air is a standard clamshell that will be immediately familiar to most laptop users, while the Surface Pro 11 is a tablet that serves dual functions. In our review, we found it's not as great a tablet as it is a laptop, primarily because Windows 11 just doesn't provide the same touch experience as, say, iPadOS. And when connected to its keyboard, the Surface Pro 11 isn't as stable on anything other than a firm surface. So, if you want a normal laptop experience, the MacBook Air is the better choice. But if you want a tablet with pen capabilities for digital drawing to go along with a laptop, the Surface Pro 11 is for you.

The MacBook Air's keyboard is superior; in fact, Apple's Magic Keyboard is the best laptop keyboard around. The keycaps are perfectly sized, the spacing is excellent, and the switches are light and snappy. For anyone who types a lot, it's the best experience. The Surface Pro 11's detachable Surface Pro Flex keyboard, which costs extra, is also very good, with quality switches and a functional, if cramped, layout. But typing on the keyboard has a bit of bounce when it's propped up at an angle, which might bother some people.

The MacBook Air's Force Touch haptic touchpad is also the best available on a laptop today. It's very large, perfectly responsive, and has the additional Force Click feature, where pressing a little harder invokes extra functionality. The Surface Pro Flex keyboard also has a haptic touchpad that works well, but it's quite a bit smaller. Of course, the Surface Pro 11's display is touch- and pen-enabled, and when mated with the Surface Slim Pen, it even adds haptic feedback when writing and drawing on the display. If you're a digital artist, the choice is clear.

Connectivity is closely matched. Both have just modern USB-C ports, the MacBook Air with Thunderbolt 4 and the Surface Pro 11 with USB4. Both have proprietary power adapters that keep both ports free when charging.
The MacBook Air has a 3.5mm audio jack that the Surface Pro 11 lacks, while the latter has a nanoSIM slot for optional cellular connectivity that the former lacks. And the Surface Pro 11 has more up-to-date wireless connectivity.

Finally, the two laptops both have 12MP webcams, while the Surface Pro 11 adds a 10MP rear camera. The MacBook Air benefits from better low-light performance and the Center Stage feature that automatically keeps the user centered while moving around. It also has Desk View, which can share a view of the desktop combined with a picture-in-picture video. The Surface Pro 11 uses its fast Neural Processing Unit for Copilot+ PC AI features, while the MacBook Air's fast Neural Engine doesn't have quite as much use today. AI features are evolving, though, so that's probably not a reason to choose one over the other.

Performance

(Photo: Mark Coppock / Digital Trends)

The Surface Pro 11 uses either the 10-core Qualcomm Snapdragon X Plus or the 12-core Snapdragon X Elite chipset, which aim to combine faster performance with higher efficiency than we've seen from past generations of Windows laptops. Graphics are powered by the integrated Adreno GPU. The MacBook Air 13 has Apple's latest M4 chipset, with 10 CPU cores and eight or 10 GPU cores. Apple Silicon has always been about both performance and efficiency.

In our benchmarks, the MacBook Air 13 is the faster laptop, both in multi-core processing and in single-core processing, where it's a lot faster. For typical productivity tasks, even the most demanding, the MacBook Air will be a lot more responsive.
Neither is a gaming laptop, but the MacBook Air also benefits from various CPU optimizations that make it faster for moderate creative tasks, such as video editing.

Benchmarks: Geekbench 6 (single/multi), Cinebench R24 (single/multi), 3DMark Wild Life Extreme
- MacBook Air 15 (M4 10/8): 3,751 / 14,801; 172 / 854; 7,827
- Surface Pro 11 (Snapdragon X Elite X1E-80-100 / Adreno): 2,365 / 13,339; 106 / 523; 6,128

Display and audio

(Photo: Luke Larsen / Digital Trends)

The MacBook Air has a 13.6-inch 16:10 2560 x 1664 IPS display that's very bright, with wide and accurate colors and very good contrast for the technology. The Surface Pro 11 comes with two display options, both 13.0-inch 3:2 panels at 2880 x 1920, one IPS and one OLED. We tested the OLED version, and it, too, is bright, with wide and even more accurate colors. It also has OLED's usual inky blacks, which give it an edge.

Both displays are very good and will please the vast majority of users. The Surface Pro's OLED panel will use more power, which is its primary downside.

The MacBook Air has a four-speaker audio system with force-cancelling woofers. It's probably the best sound system on a 13-inch (or 14-inch) laptop, with plenty of volume, clear mids and highs, and surprising bass. The Surface Pro 11's dual side-firing speakers are just okay by comparison.

Display measurements: MacBook Air 15 (IPS) vs. Surface Laptop 7 (IPS)
- Brightness (nits): 532 / 561
- AdobeRGB gamut: 85% / 85%
- sRGB gamut: 100% / 100%
- DCI-P3 gamut: 97% / 95%
- Accuracy (DeltaE, lower is better): 0.74 / 1.27
- Contrast: 22,680:1 / 1,440:1

Portability

(Photo: Mark Coppock / Digital Trends)

As mentioned above, these are two eminently portable laptops that still provide great performance while running real PC operating systems. You'll barely notice that you're carrying them around.

Both laptops get great battery life, but the MacBook Air's is better. It's likely that the IPS version of the Surface Pro 11 will be more comparable.
You'll get more than a full day's work out of both machines.

Battery life (web browsing / video playback):
- Apple MacBook Air 15 (M4 10/10): 16 hours, 30 minutes / 20 hours, 31 minutes
- Surface Pro 11 (Snapdragon X Elite X1E-80-100): 14 hours, 39 minutes / 16 hours, 36 minutes

There's a reason why I gave the MacBook Air (M4) a perfect score. It's the best small laptop ever made, in my opinion, with a sublime build, great performance, and awesome battery life. The keyboard and touchpad are the best you can buy, and the display is great for every use.

At the same time, the Surface Pro 11 is the best 2-in-1 ever made. It's not as great a laptop as the MacBook Air, but it's also a decent tablet. So, if that's what you want, buy the Surface. But everyone else should just buy the MacBook Air and be done with it.
  • I'm 41 and a mom to triplets. I did the math and would need 6 figures just to cover the day care fees.
    www.businessinsider.com
2025-03-23T20:49:01Z

Leila Green has struggled with mom guilt since having her triplets. Courtesy of Leila Green

- Leila Green is a 41-year-old mom of 2-year-old triplets living in England.
- She had to learn how to feed and get three babies to sleep alone, which triggered anxiety.
- Mom guilt has been a constant battle she is now fighting back against.

This as-told-to essay is based on a conversation with Leila Green. It has been edited for length and clarity.

I remember a paralyzing shock running through me when I found out I would be having triplets. My husband and I went for a scan to check for a heartbeat, sitting in the same waiting room we had sat in the two times we found out I had miscarried.

Lying down on the table to be scanned by the sonographer, I could tell something was up. I desperately tried not to freak out, recalling the bad news we'd had in my last two pregnancies.

The sonographer flipped the monitor for us to see. "We've got heartbeats," he said. "There are three of them."

I struggled with guilt

We planned that I'd have a C-section with 28 doctors and nurses in the room: a team for me and a team for each of the babies.

Although I knew in advance that the babies would be taken straight to the NICU, I didn't realize how traumatic it would be to be separated from them. We'd been a team, the three of them in me, for months, and then they were just taken away. It felt so wrong.

A few days after they were born, two of my babies had to be moved to a different hospital, while the other baby and I stayed in the original hospital. That was the worst day of my life. I just didn't imagine this would be my start to motherhood.

Five weeks after they were born, I brought two babies home, leaving my one child who needed more support in the hospital.
This is the point where mom guilt set in for the first time.

I could never be in more than one place at a time. If I was feeding the babies at home, I wasn't feeding the baby in the hospital. There was never enough mommy to go around. There didn't seem to be a winning option; I just felt I was always letting one of them down.

Finally, at nearly six weeks, all of my babies were home. My husband and I were new to parenting and didn't know what we were doing with one baby, let alone three.

We hired help

We set up a cot and a changing station downstairs and upstairs.

Feeds, a combination of breast milk and formula, happened every three hours. They all fed at the same time: I'd breastfeed one while the other two fed from bottles in their bassinets. We used a muslin blanket to prop the bottles in place, arranged so that they could turn away when they didn't want any more.

I had huge amounts of anxiety around feed times. Whenever someone wanted to visit, I tried to make sure they came during a feed so I wasn't doing it alone.

From a very early age, we had a strict bedtime routine and overnight schedule, something suggested by a maternity nurse we hired to help us for a week when we all arrived home. We were militant and inflexible, because anything that would give us a minute more sleep was worth doing.

We tried to keep them all on the same schedule overnight, which meant that when one woke up for a feeding, we fed them all. Otherwise, we'd be getting one back to sleep just as another woke.

In those moments in the middle of the night, when I was exhausted and just wanted to sleep, I felt like I was going to break. I remember times when one would roll over or whack another in the face, waking everyone up; it was soul-destroying.

I needed 6 figures to cover the costs of day care

One of our best decisions was to hire a nanny to come in from 6 a.m. to 9 a.m. a few mornings a week.
That way I knew that, even if I had a terrible night, I could catch up on sleep while she was there.

Once the kids turned 1, I started looking at the possibility of returning to work. I had co-founded a company alongside my brother. I worked out that I would have to make at least £85,000 (around $109,400) just to break even on the nursery fees.

While I know a lot of women will do this for a number of reasons, I decided it wasn't something I was willing to do.

My career had been an integral part of my identity, so I had to go through stages of grief until I finally accepted it was over. It was a huge cost I paid in having triplets.

Because we live in the UK, we qualified for 15 hours of free childcare in September 2024, when the babies were 2.

I'm using the time for exercise and for building a new brand, "F*** Mum Guilt," which will host events for moms about mom guilt.

It has been amazing to have the time and brain space to think about and build a brand from scratch, and to have time to do something for me and other moms.

Now that they are toddlers, we're facing a new set of challenges.

The other day, the three of them worked out how to push a brother over the stair gate at their bedroom to go downstairs, pull a chair up to a top cupboard to get the cookies, and then bring them back to distribute to the two left behind. It's like they are ganging up on me.

The relentless illnesses have also been a challenge. Whatever one of them gets, they all get. Theoretically, they are enrolled in 15 hours of childcare, but they seem to rarely be there because they are always ill.

Most of the time, I'm just firefighting, sorting out everything that needs to get done. I naturally think about all the things I'm failing at, but when I have moments to reflect on the last two years, I consider that I have raised three babies at the same time. And now I'm raising three toddlers. That's pretty incredible.
  • The Lantern House / NAW Studio
    www.archdaily.com
The Lantern House / NAW Studio

Architects: NAW Studio
Area: 220 m²
Year: 2024
Photographs: DinhR
Lead Architects: NAW

Text description provided by the architects. The brief NAW Studio faced this time was to create a home for a family whose owner has a deep love for, and many ties to, modern Japanese architecture from their youth. From the initial concept sketches in that spirit, the Lantern was born.

In the midst of the bustling new urban area of Hoa Xuan, The Lantern is designed to create a feeling of peace in the form of modern architecture. With a basic square footprint of 100 m², facing east and west and exposed to the harsh climate typical of the Central region, the house must optimize space while still creating a connection between nature and people, and at the same time perfectly meet the living needs of a family of four.

The east facade of the house is a simple but meaningful combination of shapes. As a play of materials, the setbacks and the alternation of solid and hollow walls are arranged purposefully, helping to regulate light and take advantage of natural ventilation. Most importantly, the effect of the early sunlight catching the materials is extremely eye-catching.

The space inside the house does not have a traditional living room; instead, the kitchen and backyard play the central connecting role, so the whole family gathers together. The ground floor opens into a spacious area with an adjacent kitchen, directly connected to the backyard, the heart and lungs of the house, which becomes the place where life and nature intersect, keeping the space always airy and fresh. The upper floors are designed as open spaces, helping to increase the connection between floors and create a more spacious visual effect.
The system of doors and openings is arranged appropriately along with the backyard to circulate natural air, perfectly regulating the microclimate inside the house.

Overall, the house is a harmonious combination of modern Japanese architectural style and Vietnamese climatic conditions. It is not only a place to live but also a reflection of a spirit of simple living, connecting family and nature, and bringing a gentle, relaxing living space to the heart of Da Nang.

Published on March 23, 2025. Cite: "The Lantern House / NAW Studio" 23 Mar 2025. ArchDaily. <https://www.archdaily.com/1028051/the-lantern-house-naw-studio> ISSN 0719-8884
  • Video Game Heroes Who Canonically Went Corrupted
    gamerant.com
Not every hero is perfect. It's impossible for anyone to stay pure, since no choice can benefit everyone. Some decisions demand sacrifice, a truth that holds on any version of Earth.