GAMERANT.COM
'There's an Opportunity There': Diablo 4 Boss Comments on Potential Switch 2 Port

While unconfirmed, Diablo executive producer Rod Fergusson says there seems to be a potential path for a Nintendo Switch 2 port of Diablo 4. In a recent conversation, Fergusson said that there is an "opportunity" to bring Diablo 4 to Nintendo's upcoming console. The Switch 2 will be getting numerous ports of older games, with Diablo 4 considered a good fit by both fans and, seemingly, Fergusson himself.
-
GAMERANT.COM
Boruto: Sarada Uchiha's Sun Goddess Powers, Explained

Boruto: Two Blue Vortex is heating up more than ever, especially after dropping what many consider its best chapter so far. Chapter 21 focused on the battle in the Land of Wind, where Sarada Uchiha finally took center stage and fans got to see her take her powers to the next level.
-
WWW.POLYGON.COM
Oblivion Remastered sure has a lot of brown

Oblivion Remastered is finally here, and it looks, well, modern. And by modern I mean brown. The mud's brown, the water's brown, and the grass is brown. It's all extremely brown, and I'm not sure how to feel about that as someone who was wowed by Bethesda's bright fantasy world 19 years ago.

Comparing screenshots between old Oblivion and Oblivion Remastered makes it pretty obvious: there's a thick haze hanging over everything that darkens every scene. The entire world looks like a bonus level in PowerWash Simulator waiting to be blasted back to its original look.

In terms of detail, it certainly looks improved. Torches glow in the darkness, water shimmers in the sunlight, and docks look like actual wood and not mossy stone. But the actual mossy stone is now charcoal for some reason. The original game was much brighter, lightening up the grassy hills and trees, but Unreal Engine 5 has the remaster looking burnt.

Interior sections in Oblivion Remastered are a bit better. The amazing intro to the game, where you escape prison with Patrick Stewart, is all suffocating stone corridors and the occasional beam of light bleeding through the ceiling. It's surprisingly close to the chilly vibe of the original. In fact, it might be an example of what the improved lighting gets you when it's not working with overcooked textures.

The pale character faces from the original didn't survive the remaster, which is mostly an improvement to me. In a perfect world, we'd get the weirdly smeared NPC mugs of the past in 4K, but I'll take the hyper-detailed wrinkles and pores if it means better facial animations can exist. And don't worry: when they open their mouths, they still sound just as goofy as before.

I just can't get over how muddy the outside has become in the remaster. But I'm also the person who won't forgive Virtuos for turning the stylistic Dark Souls bonfire into a realistic fire in Dark Souls: Remastered. It's also entirely possible that Oblivion Remastered's visuals coalesce the deeper you get into it and explore new areas. If they don't, I'm at least happy that the original Oblivion is still available whenever I want to go back.
-
WWW.POLYGON.COM
Watch us play an entire match of Bungie's new extraction shooter, Marathon

Have you been wondering what a full match of Marathon looks like? We partnered with Destiny YouTuber Skarrow9 to bring you just that, which you can find in the video linked above.

It's been a strange two weeks for Bungie's Marathon, the extraction shooter the Destiny studio just re-revealed on April 12. After largely positive (though skeptical of the game's ability to succeed in the current market) previews, including ours, sentiment around the reveal event has been quite low. One of the biggest fears players seem to have is that the game will be too intense and too high-octane for the average fan to enjoy. But while the game is hardcore, there is plenty of downtime for players to catch their breath and relax away from the threat of other players. Unfortunately, between Bungie's community highlight video during the reveal, the flashy trailer, and time restrictions on content creators' video lengths, the high-stress player-versus-player combat is all fans really saw over the course of the reveal weekend.

After seeing the confusion around the game's identity and talking to our stellar video team here at Polygon, we decided the best route was to take the footage I captured during my time at Bungie for the game's preview event and turn it over to a channel and creator that specializes in Marathon content. I reached out to YouTuber Skarrow9. Skarrow is known for solving some of Destiny 2's most involved secrets and then creating guides on how other players can follow in his footsteps. He's also been involved in Marathon playtests on and off for months and has been one of the most prominent and successful solvers of Bungie's extremely detailed Marathon ARGs.

The goal with Skarrow was to show a single match with no cuts* (three minutes of looting are missing from the video due to time restrictions, but the flow of the match remains unchanged). As someone who didn't know what to expect when I first played the game at Bungie, I felt people should see what it looks like from the moment you drop into the map to the moment you extract. The video also features commentary from Skarrow and myself, discussing things we thought warranted further explanation. It all comes together to show a pretty good representation of an average Marathon match, with plenty of looting, taking on PvE enemies, and clearing out other squads of players.

As for when other players can get their hands on the game, the first official Marathon alpha test begins on Wednesday for players in North America, and has limited access. Bungie has, however, promised that more players will be able to test the game out before its Sept. 23 release date.

Disclosure: This article is possible due to footage Polygon was able to capture at a Marathon preview event held at Bungie's headquarters in Bellevue, Washington, from April 2-4. Bungie provided Polygon's travel and accommodations for the event. You can find additional information about Polygon's ethics policy here.
-
UXDESIGN.CC
How to bridge the gap (and work effectively) at siloed organizations

The #1 problem designers face? Seeming like a bad blind date.
-
UXDESIGN.CC
The AI trust dilemma: balancing innovation with user safety

From external protection to transparency and user control, discover how to build AI products that users trust with their data and personal information.

[Image generated by AI]

We're standing at the edge of a new era shaped by artificial intelligence, and with it comes a serious need to think about safety and trust. When AI tools are built with solid guardrails and responsible data practices, they have the power to seriously change how we work, learn, and connect with each other daily.

Still, as exciting as all this sounds, AI also makes a lot of people uneasy. There's this lingering fear, some of it realistic, some fueled by headlines, that machines could replace human jobs or even spiral out of our control. Popular culture hasn't exactly helped either; sci-fi movies and over-the-top news coverage paint AI as this unstoppable force that might one day outsmart us all. That kind of narrative just adds fuel to the fear.

There's also a big trust gap on the business side of things. A lot of individuals and companies are cautious about feeding sensitive information into AI systems. It makes sense: they're worried about where their data ends up, who sees it, and whether it could be used in ways they didn't agree to. That mistrust is a big reason why some people are holding back from embracing AI fully. Of course, it's not the only reason adoption has been slow, but it's a major one.

The safety and trust triad

When it comes to AI products, especially things like chatbots, safety really boils down to two core ideas: data privacy and user trust. They're technically separate, but in practice, you almost never see one without the other. For anyone building these tools, the responsibility is clear: keep user data locked down and earn their trust along the way.

From what I've seen working on AI safety, three principles consistently matter:

- People feel safe when they know there are protections in place beyond just the app.
- They feel safe when things are transparent, not just technically, but in plain language too.
- And they feel safe when they're in control of their own data.

[Image: the Safety and Trust Triad pattern]

Each of these stands on its own, but they also reflect the people you're building for. Different products call for different approaches, and not every user group reacts the same way. Some folks are reassured by a simple message like "Your chats are private and encrypted." Others might want more, like public-facing security audits or detailed policies laid out in plain English. The bottom line? Know your audience. You can't design for trust if you don't understand the people you're asking to trust you.

1. Users feel safe when they know they are externally protected

Legal regulations

Different products and markets come with different regulatory demands. Medical and mental health apps usually face stricter rules than productivity tools or games.

Privacy laws also vary by region. In the EU, GDPR gives people strong control over their data, with tough consent rules and heavy fines for violations. The U.S. takes a more fragmented approach: laws like HIPAA (healthcare) and CCPA (consumer rights) apply to specific sectors, focusing more on flexibility for businesses than sweeping regulation. Meanwhile, China's PIPL (Personal Information Protection Law) shares some traits with GDPR but leans heavily on government oversight and national security, requiring strict data storage and transfer practices.

Why does this matter?

Ignoring these regulations isn't just risky; it can be seriously expensive. Under GDPR, fines can hit up to 4% of global annual revenue. China's PIPL goes even further, with potential penalties that could shut your operations down entirely. Privacy is a top priority for users, especially in places like the EU and California, where laws like the CCPA give people real control over their data. They expect clear policies and transparency, not vague promises.

When you're building an AI chatbot, or planning your broader business strategy with stakeholders, these legal factors need to be part of the conversation from day one.

If your product uses multiple AI models or third-party tools (like analytics, session tracking, or voice input), make sure every component is compliant. One weak link can put your entire platform at risk.

Emergency handling

Another critical piece of building responsible AI is planning for emergencies. Say you're designing a role-playing game bot, and mid-conversation, a user shares suicidal thoughts. Your system needs to be ready for that: pause the interaction, assess what's happening, and take the right next steps. That could mean offering crisis resources, connecting the user to a human, or, in extreme cases, alerting the appropriate authorities.

[Image: Character.AI mental health crisis help message]

But it's not just about self-harm. Imagine a user admitting to a serious crime. Now you're in legal and ethical gray territory. Do you stay neutral? Flag it? Report it? The answer isn't simple, and it depends heavily on the region you're operating in. Some countries legally require reporting certain admissions, while others prioritize privacy and confidentiality. Either way, your chatbot needs clear, well-defined policies for handling these edge cases before they happen.

Preventing bot abuse

People push the limits of AI for all sorts of reasons. Some try to make it say harmful or false things, some spam or troll just to see what it'll do, and others try to mess with the system to test its boundaries.
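The emergency-handling flow described earlier (pause the interaction, assess, surface resources, hand off to a human) can be sketched as a simple triage step. This is a minimal illustration, not any real product's implementation: the keyword patterns, message text, and return shape are all assumptions, and a production system would use a trained classifier and localized resources rather than keyword matching.

```python
# Hypothetical crisis-triage step for a chatbot pipeline.
# Keyword patterns, message text, and the return shape are
# illustrative assumptions, not any real product's implementation.

CRISIS_PATTERNS = ("suicid", "kill myself", "want to die", "self-harm")

CRISIS_MESSAGE = (
    "It sounds like you may be going through something serious. "
    "You don't have to face this alone; help is available right now."
)

def triage(message: str) -> dict:
    """Pause the role-play and escalate when a crisis signal is detected."""
    lowered = message.lower()
    if any(pattern in lowered for pattern in CRISIS_PATTERNS):
        return {
            "pause_roleplay": True,     # step out of character immediately
            "reply": CRISIS_MESSAGE,    # surface crisis resources
            "escalate_to_human": True,  # queue the session for human review
        }
    return {"pause_roleplay": False, "reply": None, "escalate_to_human": False}
```

However detection is implemented, the control flow is the point: detect, pause, respond, escalate, exactly the sequence the article describes.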
Sometimes it's curiosity, sometimes it's for fun, but the outcome isn't always harmless. Stopping this behavior isn't just about protecting the bot; it's about protecting people. If the AI generates misinformation, someone might take it seriously and act on it. If it's pushed into saying something toxic, it could be used to hurt someone else or reinforce bad habits in the user who prompted it.

[Image: message flagged for violating content guidelines]

Take misinformation, for example. If someone tries to make the AI write fake news, the goal isn't just to block that request. It's to stop something potentially damaging from spreading. The same goes for harassment. If someone's trying to provoke toxic or harmful replies, we intervene not just to shut it down, but to make it clear why that kind of behavior matters. In the long run, it's about building systems that support better conversations, and helping people recognize when they've crossed a line, even if they didn't mean to.

Safety audits

Many AI products claim to conduct regular safety audits. And they should, especially in the case of chatbots or personal assistants that interact directly with users. But sometimes, it's hard to tell how real those audits are. That doubt grows when you check a company's team page and see only one or two machine learning engineers. If the team seems too small to realistically perform proper safety checks, it's fair to question whether these audits are truly happening, or if they're just part of the marketing pitch.

If you want to build credibility, you need to do the work, and show it. Run actual safety audits and make the results public. It doesn't have to be flashy, just transparent. A lot of crypto projects already do this with security reviews. The same approach can work here: show your commitment to privacy and safety, and users are much more likely to trust you.

Backup AI models

OpenAI introduced the first GPT model (GPT-1) in 2018. Despite seven years of advancement, GPT models can still occasionally freeze, generate incorrect responses, or fail to reply at all.

[Image: OpenAI status page]

For AI professionals, these issues are minor; refreshing the browser usually resolves them. But for regular users, especially paying subscribers, reliability is key. When a chatbot becomes unresponsive, users often report the problem immediately. While brief interruptions are frustrating but tolerable, longer outages can lead to refund requests or subscription cancellations, a serious concern for any AI product provider.

One solution, though resource-intensive, is to implement a backup model. For instance, GPT could serve as the primary engine, with Claude (or another LLM) as the fallback. If one fails, the other steps in, ensuring uninterrupted service. While this requires more engineering and budget, it can greatly increase user trust, satisfaction, and retention in the long run.

2. Users feel safe when the experience is transparent

Open communication

"Honesty is the best policy" applies in AI just as much as anywhere else. Chatbots can feel surprisingly human, and because we tend to project emotions and personality onto technology, that realism can be confusing, or even unsettling. This is part of what's known as the uncanny valley, a term coined by Masahiro Mori in 1970. While it originally referred to lifelike robots, it also applies to AI that talks a little too much like a real person. That's why it's so important to be upfront about what the AI is and isn't. Clear communication builds trust and helps users feel grounded in the experience.

Clear AI vs. human roles

When designing AI chat experiences, it's important to make it clear that there's no real person on the other side. Some platforms, like Character.AI, handle this directly by adding a small info label inside the chat window.
Others take a broader approach, making sure the product description and marketing clearly explain what the AI is and what it's not. Either way, setting expectations from the start helps avoid confusion.

[Image: Character.AI disclaimer example]

Be clear about limitations

Another key part of designing a responsible AI experience, especially when it comes to a specialized bot, is being upfront about what it can and can't do. You can do this during onboarding (with pop-ups or welcome messages) or in real time, when a user runs into a limitation.

[Image: examples of limitation disclaimers]

Let's say a user is chatting with a role-play bot. Everything's on track until they ask about current events. In that moment, the bot, or its narrator, should gently explain that it wasn't built for real-world topics, helping the user stay grounded in the experience without breaking the flow.

Respect users' privacy

One of the most important parts of building a chatbot is keeping conversations private. Ideally, chats should be encrypted and not accessible to anyone. But in practice, that's not always the case. Many AI chatbot creators still have full access to user sessions. Why? Because AI is still new territory, and reviewing conversations helps teams better understand and fine-tune the model's behavior.

If your product doesn't support encrypted chats and you plan to access conversations, be upfront about it. Let users know, and give them the choice to opt out, just like Gemini does.

[Image: Gemini privacy disclaimer]

Some chats may contain highly sensitive info, and accessing that without consent can lead to serious legal issues for you and your investors. In the end, transparency isn't just ethical; it's necessary to earn and keep users' trust.

Reasoning & sources

AI hallucinations still happen, just less often than before. A hallucination is when the model gives an answer that sounds right but is actually false, misleading, or entirely made up. These issues usually come from gaps in training data and the fact that AI predicts language without truly understanding it. For users, it can feel unpredictable and unreliable, leading to a general lack of trust in AI systems.

One way to fix that? Transparency. Showing users where the information is coming from, even quoting exact paragraphs from trusted sources, goes a long way in building confidence.

[Image: Gemini reasoning & sources]

Another great addition is real-time reasoning. If the assistant is doing online research, it could show the actual steps it's taking, along with the logos or URLs of the sources it's pulling from. These small touches make the whole experience feel more grounded, trustworthy, and accountable.

Easily discoverable feedback form

When launching an AI product, users tend to give a lot of feedback, especially early on. Most of it falls into two main categories:

- Technical issues: bugs, unexpected behavior, or problems caused by third-party components.
- Feature requests: missing functions or ideas for improving the experience.

[Image: feedback modal]

For example, in one product I worked on, users reported an issue with emoji handling in voice mode. The text-to-speech system struggled with processing emojis, creating an unpleasant noise instead of skipping or interpreting them naturally. This issue never appeared during internal testing, and we only discovered it through user feedback. Fortunately, the fix was relatively simple.

3. Users feel safe when they have control over their data

Let people decide what they want the assistant to remember

One of the biggest strengths of AI is its ability to personalize, offering timely, relevant responses without users having to spell everything out. It can anticipate needs based on past chats, behavior, or context, creating a smoother, smarter experience.

[Image: Gemini memory settings]

But in practice, it's more complicated.
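The control this section argues for, letting the user decide what the assistant remembers, could look something like the sketch below. The class and method names are illustrative assumptions, not any real assistant's API: nothing is saved without explicit approval, every entry can be reviewed or deleted, and an incognito flag disables memory entirely.

```python
# Hypothetical user-controlled memory store for an AI assistant.
# Class and method names are illustrative, not a real product's API.

class AssistantMemory:
    def __init__(self):
        self._entries = {}      # entry_id -> remembered detail
        self._next_id = 1
        self.incognito = False  # when True, nothing is ever stored

    def remember(self, detail, user_approved):
        """Save a detail only with explicit user approval; return its id."""
        if self.incognito or not user_approved:
            return None
        entry_id = self._next_id
        self._entries[entry_id] = detail
        self._next_id += 1
        return entry_id

    def review(self):
        """Let the user see everything the assistant has stored."""
        return list(self._entries.values())

    def forget(self, entry_id):
        """Delete a single memory on the user's request."""
        return self._entries.pop(entry_id, None) is not None
```

The design choice worth noting is that consent is the default: the store refuses to remember anything unless the user has approved it, rather than remembering everything and offering deletion later.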
Personalization is powerful, but when it happens too quickly, or without clear consent, it can feel invasive, especially if sensitive topics are involved.

The real problem? Lack of control. Personalization itself isn't the issue; it's whether the user gets to decide what's remembered. To feel ethical and respectful, that memory should always be something the user can review, edit, or turn off entirely.

The downside of personalization

There's a common belief that some tech companies listen to our conversations to serve us better-targeted ads. While giants like Google and Facebook haven't confirmed this, a few third-party apps have been caught doing exactly that. Sometimes, ads are so specific it feels like your phone must be eavesdropping. But often, it's just highly advanced tracking: using your search history, location, browsing habits, and even subtle online behavior to predict what you might want.

Whether active listening is real or not, this level of personalization can backfire. Instead of feeling smart or helpful, it makes users feel watched. It creates mistrust, raises privacy concerns, and gives people the sense they've lost control over their data.

[Image: ethical and enjoyable AI personalization pattern]

What makes AI personalization feel right

For AI personalization to feel ethical, and actually enjoyable, it needs to be built around the user, not just the data. That means:

- Transparent: people should know exactly what's being collected, how it's used, and why. Clarity builds trust.
- User-controlled: let users decide how much personalization they're comfortable with. Give them the tools to adjust it.
- Context-aware: personalization should grow over time. It should feel natural, not like the AI is watching your every move from the start.

The real challenge isn't how much we can personalize; it's how much users are actually okay with. Give them control, and they'll lean in. Take it away, and even the smartest AI starts to feel creepy.

[Image: adding messages to the memory]

For example, in a therapeutic chatbot, users could:

- Choose what the AI remembers: manually selecting which personal details should be saved.
- Delete specific memories: giving users the ability to forget things, instead of the AI storing everything by default.
- Flag sensitive topics: so the AI can avoid them or respond more gently, giving users a greater sense of safety.
- Switch to incognito mode: allowing users to open up without anything being remembered.

By putting users in charge of what's remembered and how it's handled, the experience becomes empowering, not invasive. It's about personalization with consent, not assumption.

[Image: GPT temporary chat]

Offer users local conversation storage

As I dive deeper into privacy in AI chatbots, one approach keeps standing out: giving users the option to store conversations locally. A few products already do this, but it's still far from the norm.

Storing data on the user's device offers maximum privacy. No one on the app side can access any messages, yet the chatbot stays fully functional. It's a model that puts control back in the user's hands. In many ways, it feels like a near-perfect solution.

While local conversation storage offers strong privacy benefits, it also comes with a few challenges:

- User confusion: less tech-savvy users might not understand why their chat history is missing across devices. Unlike cloud storage, local storage is tied to a single device, which can lead to frustration.
- Storage limits: text is lightweight, but over time, longer chats or AI-generated content (like documents or images) can add up, especially for users who use AI frequently.
- No persistent memory: since the data never leaves the device, the AI can't "remember" past conversations unless the user brings them up manually. One workaround is temporarily re-sending old messages to the bot during a session, but that can increase data usage and slow things down.
- External APIs: if your app uses third-party services, you'll need to double-check that they comply with local data storage policies, especially when sensitive information is involved.

Offer app-specific password protection

One often-overlooked but valuable privacy feature is app-specific PIN protection, similar to what we see in banking apps. Before accessing their account, users are asked to enter a PIN, password, or use face recognition.

Chatbots can hold highly sensitive conversations, so applying the same kind of protection makes sense. Requiring users to verify their identity before opening the app adds an extra layer of security, ensuring that only they can access their chat history.

[Image: Revolut and Wise PIN entry screens]

Conclusion

As we've seen throughout this article, building trust in AI products means putting real thought into safety, transparency, and user control. There's no one-size-fits-all solution; approaches need to be tailored to the market, the regulations, and most importantly, the users themselves.

Strong privacy protections benefit everyone: not just users, but also product teams and investors looking to avoid costly mistakes or damage to reputation. We're still in the early days of AI, and as the technology grows, so will the complexity of the challenges we face.

The future of AI is full of potential, but only if we design with people in mind. By creating systems that respect boundaries and earn trust, we move closer to AI that genuinely supports and enhances the human experience.

References I recommend going through:

- "Growing public concern about the role of artificial intelligence in daily life" by Alec Tyson and Emma Kikuchi, Pew Research Center
- "Some frontline professionals reluctant to use AI tools, research finds" by Susan Allot, Civil Service World
- "Data Privacy Regulations Tighten, Forcing Marketers to Adapt" by Md Minhaj Khan
- "I Asked Chat GPT if I Could Use it as a Teen Self-Harm Resource" by Judy Derby
- "Tay: Microsoft issues apology over racist chatbot fiasco" by Dave Lee, BBC
- "NewtonX research finds reliability is the determining factor when buying AI, but is brand awareness coloring perceptions?" by Winston Ford, NewtonX Senior Product Manager
- "The Creepy Middle Ground: Exploring the Uncanny Valley Phenomenon" by Vibrant Jellyfish
- Chai App's Policy Change (Reddit thread)
- "What are AI hallucinations?" by IBM
- "Understanding Training Data for LLMs: The Fuel for Large Language Models" by Punyakeerthi BL
- "92% of businesses use AI-driven personalization but consumer confidence is divided" by Victor Dey, VentureBeat
- "In Control, in Trust: Understanding How User Control Affects Trust in Online Platforms" by Chisolm Ikezuruora, privacyend.com

"The AI trust dilemma: balancing innovation with user safety" was originally published in UX Collective on Medium.
-
LIFEHACKER.COMWhoop’s Strength Trainer Has Its Flaws, but Is Still Better Than Anything Its Competitors HaveWe may earn a commission from links on this page.Two years ago, screenless fitness tracker Whoop took on a problem that none of its competitors have managed to solve: answering the question “how hard was your weightlifting workout?” Its initial implementation was clunky and finicky. I don’t think I managed to log a single workout correctly. But now, with improvements over the years, it’s become a much more useful feature. The game changer for me? Being able to connect exercises to a workout after you do the workout. This way you can’t mess up the tracking during the workout, but you still get the thing you actually care about—a Strain score accurate enough to power the app’s sleep and recovery recommendations. Read on for more about how to use the Strength Trainer, and what it can and (still) can’t do. What is Whoop’s Strength Trainer? Credit: Beth Skwarecki The Strength Trainer is a way of tracking strength workouts, separate from the way you’d track any other workout with Whoop. It was introduced in 2023, and aims to give you a more appropriate Strain score (reflecting how hard the workout was on your body) compared to tracking it purely by heart rate. To use the Strength Trainer, you need to create (or choose) a workout in the app, telling Whoop exactly what exercises you plan to do, what weight you’re using, and how many sets and reps. You can either have the app follow along with your workout in real time, or connect a workout to an activity after the fact. Why Whoop’s Strength Trainer gives it a huge advantage over other wearables Normally, when you track a workout with Whoop, you simply start an activity, and it measures your heart rate during the activity. This makes perfect sense for cardio activities, like running. The higher your heart rate is, for longer, the higher Strain score you’ll get as a result. 
A high Strain activity is hard on your body, and requires more recovery. A lower Strain score is easier, maybe even restorative.This approach never worked for strength training, though—and that’s a caveat that applies to tracking strength training with any heart-rate-enabled wearable. Your heart rate graph during a weightlifting session will show lots of resting time, and only brief spikes into higher territory. Those heart rate spikes don’t tell the full story of how hard your muscles were working to lift the weight. That’s why I keep saying to ignore heart rate during weight lifting sessions. Before Whoop introduced the Strength Trainer, my weightlifting sessions would always appear in the app as light workouts, equivalent to an easy run or brisk walk—even if I’d had a killer, heavy workout. But with the feature, strength workouts now show an appropriate amount of Strain. And since Strain scores power your recovery recommendations, that’s kind of important. The Strength Trainer turned Whoop from a wearable that only made sense for endurance athletes into one that makes sense for strength athletes, as well, and everybody in between. The best way to use Whoop’s Strength Trainer is after the fact Adding the details of my strength workout brings it from a 9.2 strain (light) to 13.1 (the upper end of moderate—maybe not accurate, but definitely closer to reality). Credit: Beth Skwarecki Below, I’ll explain how you’re supposed to use the Strength Trainer during workouts. But let me skip to my conclusion: Using it during a workout sucks. Using it after a workout is a stroke of genius by the Whoop team, and gives me everything I really need from this feature. All you do is this: Tap “start activity” and select the activity type as Weightlifting, Powerlifting, Functional Fitness, or Box Fitness. Do your strength workout. End the activity and wait for Whoop to process it. 
Tap the activity, ignoring its insultingly low Strain score, and tap the box that invites you to connect a strength workout to calculate muscular load. Choose or create a workout that matches what you did. Wait while Whoop re-processes the workout, and enjoy your new, higher Strain score.I keep track of my workouts in a notebook while I do them, so it’s simple for me to fill in the details afterward. You could use an app if you prefer—Hevy is one of my favorites. And yes, you could follow along with the Whoop app, but that’s an experience so frustrating and error-prone that I can’t recommend it. Still, for the sake of being thorough, let’s dig in.How to use Whoop’s Strength Trainer during a workout (and why I don’t)Before you start using the Strength Trainer during a workout, you’ll need to set up a workout with the specific exercises you’d like to do. You’ll also want to fill in the reps and weights of each exercise, if possible. To start the workout, you go to the plus icon in the corner of the app’s home screen, and instead of selecting Start Workout, select Strength Trainer instead. Choose the workout you created, and hit Start Workout from that screen. The app will start a warmup timer, and you can begin your exercises by tapping Start First Set. Ironically, one of the things that makes the Whoop ideal for weightlifting—that you can wear it on a bicep band to keep your wrists free for wraps, straps, or kettlebell movements—is not kosher here. The app asks if you’ll be wearing your Whoop on your left or right wrist. Those are your only choices. (I wear it on my bicep anyway. I don’t know if this affects the results.)To do the workout, you’ll need to tap a button in the app every time you start a set and every time you finish one. This is awkward if you don’t want to have your phone with you, and double awkward if you do want to use your phone for anything during the workout. 
For example, if I'm videoing a set, I need to start the set, switch apps, start my camera, do the set, stop my camera, switch apps, and stop the set in the Whoop app. Miss a step, and you screw up your workout tracking.

During a workout, you can:

- Add a set
- Remove the last set of an exercise (but not a specific set in the middle)
- Reorder exercises
- Redo a set (if you started it by accident)
- Add an exercise
- Remove an exercise
- Change the weight of an exercise (including one you already did)

You cannot:

- Log a set as having been done in the past (if you did it but forgot to hit the start button)
- Set a timer to alert you when a certain rest time has passed

The ability to edit the workout on the fly and to undo a set are great additions that the Strength Trainer didn't have when it first launched. But there is still no way to address the common problem (for me, anyway) of forgetting to start a set. When I'm filming sets, or using my phone for anything else during the workout—responding to a text, say—I can easily lose track of the Whoop app. I say, "that's enough texting," put down the phone, lift my weight, and then return to the phone and realize my mistake. Drives me nuts. It would help if the Strength Trainer could show a live activity on the lock screen, like it does when I go for a run. Unfortunately, live activities for strength training are only available on Android at the moment. (I use an iPhone.)

Why the Strength Trainer still disappoints me

I still have such mixed feelings about the Strength Trainer. On the pro side: It does give me an appropriate Strain score for my weightlifting, and adding the workout after the fact is convenient and doesn't mess up my workout. (I wish there were a push notification so I couldn't forget, but as long as I remember, it's all good.) No other wearable does anything like this; they all track the effects of strength training as if it were a type of cardio.
But the follow-along version is high-maintenance, like babysitting a toddler during your workout. I'm always making mistakes that there isn't an easy way to fix. It also doesn't want me to use my bicep band (sorry, but I can't use a wrist device for some of my exercises). There's also no way to enter paused exercises (like a squat where you count to three before standing up) or complexes (like clean + front squat + jerk as one rep).

These limitations seem to be tied to the Strength Trainer's origins in Whoop's 2021 acquisition of Push, a company that tracked strength exercises through a wrist-based velocity sensor. Whoop users were excited to see velocity-based training (VBT) come to Whoop, but that never happened. In a VBT workout, a coach (or app) gauges how fast you're moving—say, how fast you can stand up from a squat—and uses that data to tell you whether or not to add weight for your next set. This way, you get customized coaching that responds to how you're actually performing that day. If you're tired and everything feels heavy, you'll move slower and the app will cue you to use less weight. If you're feeling great and even heavy weights move fast, the app will have you push yourself a little harder.

But Whoop never delivered on that promise. (If it has plans, they're still under wraps.) Instead, the company seems to have used some of the underlying technology to train its own algorithms to recognize exercises. If you do a squat while using the Strength Trainer, your Whoop device will, presumably, notice when your rep starts and ends, and record how fast you did the squat. What Whoop does with this data is unclear, though. The company's materials, like the press release from the Strength Trainer's launch, carefully avoid using the word "velocity" anywhere. Instead, they seem to use "intensity" as a substitute, which only leads to confusion.
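To make the VBT idea described above concrete, here is a minimal sketch of how velocity-based cueing works in spirit. The function name and velocity thresholds are my own invented placeholders for illustration, not Whoop's (or Push's) actual algorithm:

```python
# Toy illustration of velocity-based training (VBT) cueing.
# The thresholds below are made up for demonstration purposes;
# real VBT programs tune them per lift and per athlete.

def suggest_load_change(mean_velocity_m_s: float) -> str:
    """Map a rep's mean bar velocity to a coaching cue.

    The core VBT premise: a slower-than-expected bar means the weight
    is relatively heavy for you today; a fast bar means you have room
    to add weight.
    """
    if mean_velocity_m_s < 0.3:   # grinding rep: near-maximal effort
        return "reduce weight"
    if mean_velocity_m_s < 0.5:   # challenging but still moving well
        return "hold weight"
    return "add weight"           # moving fast: push a little harder

print(suggest_load_change(0.25))  # slow, grinding rep -> "reduce weight"
print(suggest_load_change(0.65))  # fast, easy rep -> "add weight"
```

Note that this is the opposite of how Whoop reportedly interprets speed, which is part of the confusion around its use of "intensity."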
In traditional strength training, an intense (heavy, hard on your body) rep would show up in VBT as slow movement. But a Whoop spokesperson said on Reddit that the company assumes you're working harder when you move a weight fast. Unfortunately, since Whoop is so squirrelly in describing its algorithms, it's really hard to know what it's doing, or even what you're missing (if anything) when you log a strength workout after the fact versus following along in the moment. I emailed back and forth with the Whoop team when the Strength Trainer first came out, trying to understand what calculations it was doing and why, but they kept sending me vague statements that explained nothing. There also haven't been any validation studies that I can find comparing the results of the Strength Trainer to, well, anything.

Whoop now says it "estimat[es] maximum volume from your workout history," but I don't know whether that's a change from the initial implementation or not. It also says it "calculates your personal muscular load by taking the highest intensity of each exercise from your profile." Does that mean the heaviest (using the traditional sense of intensity) or the fastest (using intensity as a euphemism for velocity)? Again, Whoop doesn't define its terms.

So, I'm disappointed on many levels. I'm disappointed that Whoop seemed to cannibalize a VBT company to provide something that doesn't even do VBT. I'm disappointed that Whoop doesn't tell you what the Strength Trainer is even doing in there. I'm disappointed that the Strength Trainer is so hard to use in its most full-featured version, and I'm disappointed that I don't even know whether I'm missing out by using the more convenient Log Later function.

Ironically, the part of the Strength Trainer I use most—logging later—probably never needed any heart rate or velocity tracking at all. Just enter your numbers, and let the algorithm see how much and how heavy you were lifting.
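As a toy sketch of that log-it-later idea (my own illustration, not anything Whoop has published), "how much and how heavy" can be estimated from the logged numbers alone, using nothing fancier than tonnage (weight times reps, summed over sets):

```python
# Hypothetical illustration: estimating session load purely from logged
# sets, with no heart rate or velocity data at all. Tonnage is a standard
# strength-training metric; this is NOT Whoop's actual formula.

def session_tonnage(sets: list[tuple[float, int]]) -> float:
    """Total weight moved: sum of (weight * reps) over all logged sets."""
    return sum(weight * reps for weight, reps in sets)

# Example workout: 3 sets of 5 at 100 kg, then 2 sets of 3 at 120 kg
workout = [(100.0, 5), (100.0, 5), (100.0, 5), (120.0, 3), (120.0, 3)]
print(session_tonnage(workout))  # 2220.0 kg of total volume
```

A real load score would presumably also weigh factors like exercise type and your history, but the point stands: the inputs are just the numbers you typed in.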
Whoop didn't need to acquire a company or build out a finicky follow-along feature for that. But here we are. If you find it convenient to follow workouts through the app, great. You are luckier than I. But even with the after-the-fact workout logging, Whoop has still managed to address the fact that strength training is harder on your body than a light cardio workout—something that other wearable companies have not figured out how to do.
LIFEHACKER.COM
Notion Mail Takes You Back to When Gmail Was Good

Notion Mail is finally out in the wild, for anyone who has a Gmail account. And it's quintessential Notion. If you've used the standard Notion app, you really can't confuse it for anything else.

Notion Mail is a minimalist and text-based take on the mail app that isn't trying to do anything revolutionary. There are no AI summaries, and no complicated split views like in Superhuman. It's just your email, sorted in a way that you like.

What does it mean, though, to apply the Notion philosophy to email, and is it good enough for you to make the switch? That is, if you even can. Currently, Notion Mail only works on the web and on Mac, and it only supports Gmail accounts (leaving out Outlook and enterprise emails). Notion Mail's iOS app is on the way, and the Android app will launch in 2025 as well. But there's no app for Windows on the roadmap.

What is Notion?

Notion Mail is the latest product from Notion Labs, which is known for its extremely customizable note-taking app. Every note in Notion starts with a blank page, but can be customized with blocks, tables, images, and more. Some people even turn it into a database, as Notion makes it easy to link one page to another. Notion is free to use for individual users, but charges $10 per month per user for businesses. Plus there's the $10 per month cost for Notion AI, which I'll come back to below. Essentially, Notion Mail aims to take the same minimalist approach of the note-taking app and apply it to email.

Notion users will feel at home

Credit: Khamosh Pathak

Let's start with how Notion Mail looks and works. It has the same unassuming black-and-white design that Notion is known for. The buttons are gray, and there are none of the pastel colors or rainbow gradients usually found in AI apps these days. In other words, it feels like Gmail did 15 years ago, but modernized.

There's a sidebar that shows all your views, and then a list of email.
And that's that, as far as design goes. But because this is Notion, there is also a highly useful command palette (Command+K), so you can compose emails or take actions without leaving your keyboard.

Credit: Khamosh Pathak

There's support for keyboard shortcuts, too, and native Markdown support, which makes formatting long emails a breeze (and is something that's missing from Gmail and every other major email app). Notion AI is also integrated into the compose box, so you can highlight text and improve your writing, or write an email with a prompt. For integrations, you can set up reminders to nudge you to reply to an email, in case you miss it. You can also integrate Notion Calendar to easily display your availability.

It's all about AI Auto Labels

Notion isn't rocking the boat with its mail app, but its selling point is the Auto Label feature, which is coupled with the sidebar's Views feature. Let's talk about the Views first.

When you first click on Views, you'll be prompted to create feeds for email categories like Promotions, Calendar invites, Updates, and more. You might even be prompted to create custom Views based on your inbox. For instance, the app suggested that I make a view for all my GitHub emails, which is slightly confusing because I'm nowhere near a developer.

But you can go in and create a new View at any time. Notion has some templates ready to go from the start, but the easiest way to go about it is to use a prompt and the AI Auto Label feature.

Credit: Khamosh Pathak

Click the Auto Label button in the top toolbar at any time to create a new label. Here, you'll see a simple text box. Enter any prompt to create an auto label. For example, you can enter "Emails from Reddit" or "Emails from Grace" to get started. It can help to get a bit granular: The more detailed or specific you can make the prompt, the better off you'll be. After you enter the prompt, you'll see a toggle switch asking whether you want to separate these emails out from the Inbox.
Notion will also prompt you to "auto label similar" emails as you go about your business.

Don't worry: Notion will ask you to approve any labels before applying them. If it's gotten something wrong, you can remove that email, or add in an email that the system overlooked.

Credit: Khamosh Pathak

After a couple of days of using AI Auto Labels, my experience has been mixed. The first thing to note is that Auto Labels don't go as far back as I would like. So you can't use the feature to sort out all your invoices from Amazon in the past year in one View. For that, you'd still need to use Gmail search, or another AI like Shortwave. You can, though, create a View for all incoming Amazon emails and invoices, so your future emails will at least be all set.

Credit: Khamosh Pathak

While Notion Mail is free, you only get limited access to Notion AI features in the free plan, including the Auto Labels feature. Notion doesn't make it clear what the specific limits are for individuals, but I ran into them pretty quickly in my casual testing, where I created 5–6 Auto Labels and tested out Notion AI's writing capabilities. Business limitations are a bit clearer, as Notion says free AI tokens are limited to 500 responses for a workspace. And the more people you add to a workspace, the more Notion raises the free response limit.

As for me, my Notion AI trial ended after just 10 or so responses. Once that happens, you'll have to either wait for the next month to get more free AI credits, or pay the $10/month for unlimited usage. When you run out of free Notion AI credits, the Auto Labels feature will stop working, and the button will have a red icon on it, too. The same goes for the AI writing features.

A Notion wrapper for Gmail

Notion Mail can serve as a nice alternative for Gmail users who are frustrated with growing bloat, or with having to dodge Gemini sidebars.
For these users, the minimalist, text-heavy, keyboard-first, Markdown-friendly take on Gmail should serve as a faster and simpler alternative. But when it comes to AI, it's still a developing story. AI writing tools are now pretty standard in almost every email app, so whether Notion will appeal to you depends on how much you like to label your email, and how interested you are in some AI help with that. For something more complex, try Shortwave, which has both free and paid plans and offers more robust AI inbox integration. It's less minimal, but also far more powerful.
WWW.ENGADGET.COM
Max implements $8 extra member charges on all subscription plans

Max now requires a fee for extra members who join a plan outside of the household. Each person who joins a subscription plan will cost $8 a head, no matter which access tier the main account holder is on. This type of "extra member" charge is how several streaming services have tried to cut down on password sharing: Netflix introduced this approach in 2023, and Disney+ followed suit in 2024.

These non-household members will be able to stream Max content from their own accounts on one device at a time, and they'll have access to the same plan benefits, such as video quality and downloads. In addition, when an extra member joins a plan, they can import their existing watch list and preferences with Max's new profile transfer option.

The Warner Bros. Discovery-owned platform has at least temporarily allowed live sports and news content to be viewed for free, which is a nice perk for as long as it lasts. Max last raised its subscription prices in 2024, so hopefully viewers will get a reprieve from any more new costs for the rest of this year.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/streaming/max-implements-8-extra-member-charges-on-all-subscription-plans-195228707.html?src=rss