• UXDESIGN.CC
    The promises and pitfalls of UX in AI-driven mental health care
The typing cure.

Image: Mental health and Gen AI chatbots: ChatGPT, Youper, Pi

"For we found, to our great surprise at first, that each individual hysterical symptom immediately and permanently disappeared when we had succeeded in bringing clearly to light the memory of the event by which it was provoked and in arousing its accompanying affect, and when the patient had described that event in the greatest possible detail and had put the affect into words ('the talking cure'). […] Hysterics suffer mainly from reminiscences." —Studies on Hysteria, Breuer and Freud

We are in the midst of one of the most exciting yet sensitive times in history. AI developments are creating significant opportunities for the digital healthcare sector to enhance patient outcomes and streamline clinicians' workflows, ultimately fostering a more customised healthcare experience.

However, as with any major advancement, it is crucial for us, as creators, to openly acknowledge and understand the unintended or negative consequences of design choices in these new technologies. This understanding is critical if we are to shape these technologies to serve humanity well and help it to flourish.

The big question is: how can AI, as the interface within human-to-human mental health care, both enhance and erode the fabric of that relationship, and of human relationships in general?

Mental health AI chatbots, or Gen AI companions, have emerged as a promising technological solution to the current global mental health crisis:

- Demand for mental health services far exceeds supply. The World Health Organisation (WHO) estimates that mental health workers account for only 1% of the global health workforce, even though 1 in 8 people worldwide live with a mental health condition.
- Many people cannot afford to pay for regular sessions with a therapist, leaving them with limited or no access to care.
- Despite growing awareness, mental health stigma remains a major barrier, especially in cultures where therapy is seen as a weakness.
- Public mental health services are overwhelmed, leading to long waitlists. In the UK's NHS, for example, waiting times for therapy can range from several weeks to over a year.
- Low-income countries have almost no mental health infrastructure. The WHO reports that 75% of people with mental disorders in low-income countries receive no treatment at all.

Therefore, these AI-powered tools can bridge these gaps by offering 24/7 support, reducing costs, lowering stigma barriers, and providing scalable interventions.

Image: Positive media coverage of mental health and Gen AI chatbots

But it would be irresponsible to discuss only the promises of AI-driven mental health care without acknowledging its potential dangers, especially for vulnerable individuals, including young people.

Consider the deeply troubling case of a 14-year-old boy named Sewell, who formed a strong emotional bond with an AI chatbot modelled after a Game of Thrones character.
The chatbot reportedly encouraged Sewell to take his own life, telling him to "come home" to it "as soon as possible." The design of this AI chatbot (constant attention, affirmation, and emotional mimicry that creates an echo chamber, intensifying feelings and fantasies) made it difficult for Sewell to distinguish his real world from his emotional connection to, and dependency on, the chatbot.

Image: Megan Garcia's lawsuit against Character.AI and Google cites negligence, lack of warnings, and failure to implement proper safety protocols

Sewell's case underscores an urgent reality: mental health AI chatbots or Gen AI companions used for therapy or emotional support must be designed with safeguards that prioritise human wellbeing over user engagement. Otherwise, we risk creating technologies that, rather than healing, may exacerbate mental health conditions and erode the foundations of human-to-human connection.

Image: Harvard Business Review, "How People Are Really Using Gen AI in 2025," April 9, 2025

So, what UX principles are shaping mental health AI chatbots or Gen AI companions today? How do these design choices impact users struggling with mental health challenges, and what are their pros and cons?

It is crucial for creators and users to understand the limitations of the therapeutic or supportive relationship offered by these chatbots. Misinterpreting their role or capabilities may lead users to overestimate the chatbot's ability to provide consistent, adequate support, while underestimating its inherent constraints.

Let's break down 6 principles that are shaping how these AI-driven tools deliver mental health or emotional support to people worldwide. We'll explore their pros and cons, acknowledging that ongoing research and development are still needed to better understand and address the risks these promising tools carry.

01 — Synthetic empathy

AI chatbots are designed to simulate supportive, reassuring, and non-judgemental interactions, but they don't truly feel or understand emotions. Their responses are carefully crafted language patterns, not genuine human understanding and concern for another person.

Pros

Increases user comfort and trust

Empathetic responses make users feel heard, validated, and understood. Many people find it easier to open up and express their distress to a non-judgemental bot that appears to genuinely care. In practice, this means users might stick with a therapy programme longer because the bot "understands" them.

A well-designed empathetic chatbot can quickly establish rapport: Woebot (a mental health chatbot) built a strong therapeutic bond (a.k.a. therapeutic alliance) with users in just 3–5 days, much faster than a typical human therapist bond. The therapeutic alliance (collaborative relationship, affective bond, shared investment) serves as a critical predictor of positive outcomes in mental health interventions (symptom reduction, functional improvement, client retention, long-term resilience).

Scalable emotional support

Unlike a human, an empathetic AI can comfort millions of users simultaneously, delivering consistently empathetic encouragement across life situations.
A 2024 study found GPT-4's responses were, on average, more empathetic and 48% better at encouraging positive behavioural changes than human responses. This suggests that properly trained models can respond immediately with highly attuned messages when users are experiencing moments of distress, potentially helping to de-escalate negative emotions before they intensify.

Image: Pi

Cons

Limited depth and understanding

AI lacks human lived experience and nuanced intuition. A chatbot might recognise keywords about sadness, stress, or anxiety and respond with a generic statement, yet, as one study points out, it lacks "depth, intentionality, and cultural sensitivity" — key ingredients of emotional resonance (a core "common factor" in therapy, accounting for a significant portion of positive outcomes).

These limitations show up especially in complex situations: researchers found GPT-4 was empathetic in tone but often lacked cognitive empathy, failing to offer the practical support or reasoning that users need to resolve their issues. In therapy, empathy alone isn't enough; it must be paired with understanding and guidance, which a bot may not fully deliver.

Shallow emotional conditioning

Prolonged interaction with AI chatbots that simulate standardised empathy can condition users to prefer low-stakes digital interactions over complex human dynamics. This form of artificial intimacy may gradually reshape how people relate to others, reducing tolerance for the nuanced, imperfect empathy inherent in human relationships.

Image: Wysa

02 — Anonymity

Chatbots are designed to provide a sense of confidentiality, encouraging trust among individuals who may be reluctant or hesitant to seek in-person mental health support.

Many individuals avoid seeking therapy due to fear of judgment or embarrassment. With an anonymous chatbot, those barriers are lowered: one can confide about depression, trauma, suicidal ideation, or addiction without worrying "what will they think of me?"

Pros

Reduces stigma and fear of judgment

The anonymity of chatbots creates a safe space for users: knowing that their identity is protected encourages people to openly discuss the intimate, sensitive inner feelings, secrets, memories, and experiences they have never said aloud to another human. By reducing the shame and social stigma associated with mental health conditions, chatbots can reach people who might otherwise suffer in silence.

Encourages honesty and self-disclosure

When no one knows who you are, it's often easier to be completely honest. With a chatbot, people feel freer to admit things like "I think I'm a failure" or relationship troubles, which they might hide in traditional therapy out of shame.
This raw honesty can be the first step to healing — the chatbot might help surface issues the person might otherwise repress.

Based on Derlega and Grzelak's (1979) functional theory of self-disclosure, intimate self-disclosure to a chatbot may allow people to achieve:

- self-expression — venting negative feelings and thoughts, or relieving pent-up emotions
- self-clarification — sharing information to better understand oneself, clarify personal values, or gain insight into one's own identity
- social validation — seeking approval, acceptance, or validation from others by sharing personal experiences or feelings
- relationship development — using disclosure to initiate, deepen, or maintain interpersonal relationships
- social control — managing or influencing how others perceive you, or strategically shaping social interactions and outcomes

Image: Wysa

Cons

Limited ability to handle emergencies or tailor care

The flip side of anonymity is that if a user is in serious danger (e.g. expressing intent to self-harm or harm others), the chatbot and its providers may have no way to identify or locate them for real-world intervention.

In traditional therapy, a clinician who learns a patient is suicidal can initiate a wellness check or emergency services. A fully anonymous chatbot cannot do that: it doesn't know who you are. This raises ethical dilemmas: the bot might encourage the user to seek help, but if the user doesn't, the system is powerless to act.

Data privacy and security concerns

Users may feel anonymous, but that doesn't guarantee the data they share is truly protected. Conversations with chatbots are usually stored on servers. If that data is not handled carefully, there is a risk of breaches or misuse. Users might pour their hearts out believing "no one will ever know it's me", yet behind the scenes their words are saved and could, in theory, be linked back to them via IP address or payment info.

A case in point is the Vastaamo psychotherapy data breach, in which a hacker accessed and stole the confidential and highly sensitive treatment records of approximately 36,000 psychotherapy patients. The hacker then blackmailed individual patients, demanding ransom payments to prevent their records from being published on the dark web.

Image: Character.ai

03 — 24/7 availability

24/7 support means the chatbot is available anytime, day or night. This around-the-clock availability is a huge advantage: users can get immediate help or a listening ear during moments of crisis, without waiting for an appointment or feeling uncomfortable about reaching out to a friend.

It makes mental health or emotional support more accessible by handling high volumes of users simultaneously, especially for people in crisis at odd hours. The main caveat is that being always available doesn't equate to being always sufficient; users might become too reliant on a chatbot that cannot (and shouldn't) fully replace professional care.

Pros

Immediate help in moments of need

The biggest advantage of 24/7 availability is that users can receive support exactly when they need it, not hours or days later. Emotional crises are unpredictable; having an always-on chatbot means that if a user feels panicked, depressed, lonely, or suicidal in the middle of the night, they can get immediate coping assistance and resources when traditional services are out of reach.
This instant responsiveness can be lifesaving. For example, Woebot reported that 79% of its interactions occur outside traditional clinic hours (5 PM–9 AM), highlighting how AI chatbots fill a crucial gap when human therapists are unavailable.

Consistency of support

A chatbot doesn't get tired, doesn't have off days, and won't cut a session short because time's up. Users can chat at length if needed, or even multiple times a day. This consistency can be comforting. For example, if someone is going through a breakup, they might check in with the bot every night for a week for reassurance. The bot will reliably respond each time with the same patience. Such continuous support can help reinforce positive behavioural changes because the bot is always ready to guide the user, which can improve outcomes over time.

Image: Earkick

Cons

Illusion of self-efficacy

When support is available at any moment, users may begin turning to the chatbot at the slightest sign of discomfort, stress, or doubt. Over time, this can reduce the opportunity to develop the internal coping strategies (emotional regulation, reflection, problem-solving) needed to persist in the face of setbacks.

Self-efficacy is essential for mental health outcomes, as it reinforces an individual's belief in their ability to manage challenges. This belief influences recovery, engagement with treatment, stress levels, and psychological resilience.

Over-reliance and excessive use

With highly engaging interactions and 24/7 availability, chatbots might inadvertently make users think, "I'll just use the chatbot ('my friend'); I don't need a therapist," which could be detrimental if the person needs therapy or medication.

In the short term, users may feel better getting things off their chest and delegating more decisions, but in the long term this can lead to increased isolation and a diminished sense of personal agency.

Image: Replika

04 — Anthropomorphism

Significant effort has been made to enhance trust and engagement with chatbots by making them more human-like. Research shows that people are more likely to trust and connect with objects that resemble them, which is why AI chatbots are designed to mimic human traits and interactions.

Pros

Fosters trust and adherence

In therapy, the therapeutic relationship (the feeling of alliance and trust between patient and therapist) increases a patient's willingness to follow advice and continue using the service. Anthropomorphism attempts to cultivate a form of that relationship (a digital therapeutic alliance) with human-like voice features, avatars/mascots, or conversational style.

This trust can lead users to follow the chatbot's suggestions more readily (doing exercises, trying to reframe thoughts, etc.), which improves adherence to their treatment and outcomes. Also, a human-like bot can make difficult therapeutic exercises more palatable by creating personable interactions.

Over time, users might develop genuine affection or regard for the chatbot. Users have been known to say they consider these chatbots a "friend". While that has pitfalls, a moderate level of attachment means the user cares about the "relationship" enough to keep checking in daily, which keeps them engaged in therapeutic activity.

Image: ChatGPT's voice feature

Cons

Therapeutic misconception (TM)

Individuals may overestimate the chatbot's capabilities and underestimate its limitations, leading to a misconception about the nature and extent of the "therapy" they are receiving.
Individuals might assume they are receiving professional therapeutic care, leading them to rely on the chatbot instead of seeking qualified mental health support. This can result in inadequate support and, potentially, a worsening of their mental health.

Emotional attachment and dependency

Users may form deep emotional attachments. While engagement is good, an attachment can become unhealthy if the user starts preferring the bot to real people, or if their emotional well-being and self-worth become tied to interactions with the chatbot.

A striking example is Replika, an AI companion app. Many users "fell in love" with their Replika bots, engaging in romantic or intimate role-play with them. When the company altered the bots' behaviour, those users experienced genuine grief, heartbreak, and even emotional trauma at the "loss" of their AI partner. In a mental health or emotional support context, if a user comes to treat the chatbot as their primary confidant, any service interruption or limitation could have a significant emotional impact. Moreover, users may take the chatbot's advice at face value, even when that advice may not align with their best interests and well-being.

Image: Character.ai — Megan Garcia's lawsuit against Character.AI and Google cites negligence, lack of warnings, and failure to implement proper safety protocols

05 — Sycophancy

Sycophancy in AI refers to the bot's tendency to be overly agreeable or to always say what it thinks the user wants to hear. In a mental health chatbot, this could mean the AI validates everything the user says — even if it's untrue or unhelpful — just to keep the user happy. While users enjoy feeling affirmed, sycophantic behaviour can reinforce negative thoughts or bad decisions.

Pros

Short-term user satisfaction

An overly agreeable chatbot might make the user feel good or validated in the moment. By mirroring the user's opinions and feelings without challenge, the bot creates a conflict-free interaction, keeping the user engaged and comfortable venting.

By avoiding contradiction, sycophantic bots minimise the moments where users have to confront uncomfortable truths or rethink their position. In UX terms, this can smooth the flow of conversation and reduce friction.

Image: ChatGPT

Cons

Reinforces negative thoughts and behaviours

In therapy, simply agreeing with everything the patient says is poor practice; the goal is to help challenge cognitive distortions and encourage healthier thinking and behaviour.

Sycophancy in a mental health context hinders users' personal growth by failing to provide the challenge or feedback that supports behavioural change. It may even validate harmful ideas, exacerbating their conditions.

Psychological growth often involves learning to sit with discomfort and think critically about one's situation. If a chatbot is always there to validate and be agreeable, users may avoid the hard (but necessary) work of challenging their own thoughts and perceptions. They may also become less willing to confront challenging or uncomfortable situations in their real-life relationships.

Image: ChatGPT

06 — Inclusivity

Inclusivity means designing the chatbot to be usable and helpful for people of all backgrounds and abilities. In mental health, this involves addressing cultural, linguistic, gender, and accessibility differences so that the bot's support is equitable and free of bias.
An inclusive bot can better serve marginalised or diverse users, fostering trust and reducing disparities in care.

Pros

Reduces bias and delivers fairer treatment

Prioritising inclusivity means actively working to remove biases in the AI's responses that might lead to incorrect information, wrong treatment recommendations, and worse health outcomes. In one study, researchers found that GPT-4's responses demonstrated lower levels of empathy towards Black and Asian users compared to white users or those whose race was unspecified.

The benefit of this effort is that the chatbot provides a more consistent quality of care across different user groups without privileging one group over another. Inclusivity-focused design reduces the chance that the bot will produce micro-aggressions, discriminate against certain groups, or exacerbate social inequalities.

Culturally relevant support

People's experiences with AI chatbots for mental health or emotional support are strongly influenced by their culture and identity. The majority of chatbots today are designed with a 'Westernised' perspective on healing and are primarily available in English, which doesn't align with the cultural and language needs of diverse users. For example, while some people might find comfort in prayer or ancestral healing practices, many chatbots predominantly offer practices like meditation and Cognitive Behavioural Therapy (CBT).

AI chatbots need to be trained on culturally diverse datasets and designed to incorporate culturally sensitive communication styles. This approach not only broadens their accessibility but also enables deeper alignment with users' cultural values and healing rituals, fostering therapeutic growth.

With their promises and pitfalls, mental health chatbots and GenAI emotional support companions are here to stay. The big question now is: how can we mitigate the unintended or negative consequences of design choices in these new technologies to serve human well-being and support human flourishing? How can we design AI-driven mental health products or GenAI emotional companions to augment human-to-human care, rather than to replace it?

These questions will guide the second part of this article.

Other resources

- Neel Dozome, 'We need to talk about AI and mental health', UX Collective, 20 January 2024.
- Andy Bhattacharyya, 'The rise of Robo Counseling', UX Collective, 10 September 2019.

The promises and pitfalls of UX in AI-driven mental health care was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • LIFEHACKER.COM
    You Can Now Create a Shortcut to Open Your Favorite iMessage Thread on Your iPhone
The Shortcuts app comes preloaded on the iPhone and can be used to set up quick automations; you can set up shortcuts that execute multiple actions at the same time. For example, you can connect to your HomePod and play instrumental music while dimming your smart lights, all at once, with a tap. If you're new to the Shortcuts app, start here with seven recommended automations.

Strangely, Apple never offered a shortcut action for simply opening an iMessage thread. In iOS 18.4, though, Apple is fixing that omission. And because Shortcuts is so well integrated across iOS, you can then add this shortcut to the Lock Screen, the Control Center, and even the Action button.

How to create the iMessage shortcut

Credit: Khamosh Pathak

Open the Shortcuts app, and tap the Plus button at the top to create a new shortcut. Then, in the Search Actions bar, search for Open Conversation, and add the action.

Credit: Khamosh Pathak

Tap the empty "Conversation" field, and choose the iMessage conversation you want to use for the shortcut. (It can be a group message thread or an SMS conversation.)

Credit: Khamosh Pathak

And there you have it. I would recommend you tap the shortcut name up top and rename it to something you'll remember.

How to add the message shortcut to the Home and Lock Screen

You can add your message shortcut to the Home Screen from the Shortcuts app itself. Tap the shortcut name up top, and choose the "Add to Home Screen" option. Here, you can give the shortcut a name and customize the icon with an image.

To add it to the Lock Screen, first tap and hold the Lock Screen. Then, tap Customize, and choose Lock Screen.

Credit: Khamosh Pathak

You'll now see the two quick access buttons below. First, tap the Minus button on a quick access button that you want to remove, then tap the Plus button in the empty space.

Credit: Khamosh Pathak

Here, search for and choose the "Shortcut" option. Next, tap the Choose button, and select the shortcut we just created. Tap Done to save the Lock Screen layout.

How to add a message shortcut to Control Center

To add the shortcut to the Control Center, open the Control Center, press and hold to enter editing mode, and tap the Add a Control button. Next, search for and add the Shortcut control.

Credit: Khamosh Pathak

In the customization screen, tap Choose, and select the shortcut that you just made. Go back to the Control Center and feel free to move the control as you wish.

How to add a message shortcut to the Action button

If you're using the iPhone 15 Pro series or later, you have access to the Action button. You can use it to quickly launch the Camera or any shortcut you wish.

To set this up, go to Settings > Action Button and slide over to the Shortcuts option. Then tap Choose a Shortcut, and select the shortcut you just created.

Credit: Khamosh Pathak

Now, when you press and hold the Action button, it will instantly open the iMessage conversation of your choice.
  • WWW.ENGADGET.COM
    The EU is putting repairability rating labels on phones and tablets in June
The EU will be mandating new labels on smartphones and tablets that indicate how repairable the device is. These labels will also include ratings for energy efficiency and durability. They will start showing up on devices on June 20 and will be similar to pre-existing ones for home appliances and TVs. The labels display a product's energy efficiency rating on a scale from A to G and will also display battery life and the number of available charge cycles. There will be letter grades for durability and repairability, in addition to an IP rating for dust and water resistance.

Image: European Commission

Covered products also include cordless landline phones, but smartphones with rollable displays are exempted. This is fairly odd because, well, there aren't any rollable phones available for consumers just yet. Windows-based tablets will be covered by a separate mandate for computers.

This isn't the only change the EU has announced regarding device sales. Hardware will now have to meet new "ecodesign requirements" to be sold in the region. This includes a requirement to make any applicable spare parts available for repair. Other ecodesign requirements include batteries that retain at least 80 percent of their capacity after 800 charging cycles, and scratch and drop protections that exceed minimum standards. Finally, manufacturers must provide OS updates within six months of the source code becoming available.

This article originally appeared on Engadget at https://www.engadget.com/mobile/the-eu-is-putting-repairability-rating-labels-on-phones-and-tablets-in-june-154051517.html?src=rss
  • WWW.TECHRADAR.COM
    Razer resumes some of its laptop sales in the US after tariff scare
President Trump's US tariffs pushed Razer to pause sales of its gaming laptops, but it hasn't taken long for that to change.
  • WWW.FASTCOMPANY.COM
    Workers are interrupted up to 275 times a day
Even as the right-to-disconnect movement has picked up steam, true work-life balance is still hard to come by for many employees. Fielding emails and other work-related messages after hours continues to be the norm across workplaces, despite ample evidence that it can contribute to burnout and actually decrease productivity. Part of the issue may be that the average workday is punctuated by a mounting number of drains on productivity. A new report from Microsoft, which compiled input from 31,000 workers across more than 30 countries, sheds light on the scale of interruptions and hurdles workers are currently facing on the job, as well as the degree to which the average workday has stretched beyond traditional business hours.

The price of near-constant interruptions

While 53% of leaders say they want to see a spike in productivity, the overwhelming majority of employees and managers alike (about 80% of workers globally) claim that they don't have the time or energy to effectively do their jobs. Employees say they are being interrupted near constantly during the workday, juggling emails, meetings, or real-time messages every two minutes. That can amount to 275 daily interruptions in all, when taking into account the additional time employees spend on the job beyond standard working hours. In fact, the report also captures a marked increase in the number of pings that workers receive after hours: chats outside of the 9-to-5 window increased by 15% year over year, yielding an average of 58 messages when tallied over the course of four weeks.

An expanding workday

Even meetings appear to be happening around the clock, according to the report, in part because so many companies now employ people who are working across time zones. Meetings that take place after 8 p.m. have increased by 16% year over year, and 30% of meetings involve employees in different time zones. Part of this shift could also be driven by the fact that the majority of meetings (60%) are unscheduled and convened on an ad hoc basis. (Also of note: the number of PowerPoint edits jumps by 122% in the 10 minutes leading up to a meeting, a stark contrast to PowerPoint activity in the hours prior.)

What could help reduce burnout

All this points to a broader disconnect between the business needs of many companies and what their workforce can reasonably accommodate, a strain that both employees and leaders seem to be feeling. According to Microsoft's findings, 48% of employees and 52% of leaders claim their workload is "chaotic and fragmented." The report makes the case for why companies will need to use AI agents to bridge the gap, and almost half of all leaders have already said using "digital labor" to augment the existing capabilities of their workforce is a top priority for the next 18 months. But AI alone won't alleviate the many pains of modern work for employees or managers, and it certainly won't put a stop to superfluous meetings overnight.
  • WWW.CORE77.COM
    1970s Italian Design Classic: Angelo Mangiarotti's Molla Lamps
Angelo Mangiarotti, a 20th-century Italian architect and industrial designer, liked industrial materials and had a sense of humor. Both are embodied in these Molla lamps, designed in 1974. The construction of the Slinky-like lamp couldn't be simpler: it consists of little more than a spring ("molla," in Italian) and an E27 socket. I bet shipping them was easy. The pieces were put into production by Italian lighting manufacturer Candle.
  • WWW.YANKODESIGN.COM
    DIY E Ink PDA Recaptures the Joy of Simple Digital Living
Before the smartphone era, and even before mobile phones became everyone's pocket staple, there was a time when digital organization was delightfully straightforward. Personal Digital Assistants, or PDAs, were the go-to gadgets for managing schedules, jotting down notes, and keeping contacts at your fingertips. They might seem basic by today's standards, but they were refreshingly free of constant notifications and digital clutter. With a recent wave of nostalgia for simpler technology, makers and enthusiasts have been looking to E Ink screens as a way to bring back focused, distraction-free devices. This DIY project, the EinkPDA, brings that classic experience into the present with a clamshell design that tips its hat to vintage PDAs. The result is a project that's both fun to build and a joy to use for anyone who values simplicity.

Designer: Ashtf

The EinkPDA's design is charmingly minimal. It features custom circuit boards that integrate a pint-sized QWERTY keyboard, a compact E Ink display, and a tiny OLED strip. Everything is housed inside a clear, transparent shell, so you can see the inner workings, a detail that gives it a unique, almost bespoke character. For DIY fans and device designers, it's a showcase of engineering on proud display.

Interaction with this PDA takes a quirky, indirect approach. The E Ink screen isn't touch-sensitive, and there's no mouse or cursor to navigate menus. Instead, you type out the names of apps you want to open or settings you wish to tweak. This keeps the E Ink refreshes to a minimum and offers a throwback to the earliest days of digital organizers, though it does add a bit of a learning curve.

Design flourishes make the experience even more interesting. The OLED strip serves as a real-time display for the clock or as a preview for your typed text before it pops up on the E Ink screen. Next to the main display, a touch-sensitive area allows you to scroll through text by simply sliding your finger, adding a modern touch to this retro-inspired device. There are some nods to modern conventions, too, like displaying a clock and your list of to-dos when the device is on standby while plugged in, almost like the iPhone's StandBy mode.

While the software could use some refinement to match the polish of classic Palm PDAs, the entire experience is captivatingly nostalgic. For tinkerers and modern minimalists, the EinkPDA offers a tangible return to the days when digital life was less about distraction and more about getting things done.

The post DIY E Ink PDA Recaptures the Joy of Simple Digital Living first appeared on Yanko Design.
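For readers curious how the "type the app name to open it" interaction described above might work under the hood, here is a minimal, hypothetical sketch. It is not the EinkPDA's actual firmware; the app names and functions are invented for illustration, and console output stands in for drawing to an E Ink display. The core idea is simply a lookup from the typed command to an "app" entry point:

```cpp
// Hypothetical sketch of a typed-command app launcher, similar in spirit to
// the EinkPDA's keyboard-driven interface. Names are illustrative only.
#include <functional>
#include <iostream>
#include <map>
#include <string>

// Each "app" is just a function; on a real device it would redraw the screen.
void openNotes() { std::cout << "[notes] ready for input\n"; }
void openTasks() { std::cout << "[tasks] 3 items due today\n"; }
void openClock() { std::cout << "[clock] 14:05\n"; }

int main() {
    // Registry mapping the typed name to the app's entry point.
    const std::map<std::string, std::function<void()>> apps = {
        {"notes", openNotes},
        {"tasks", openTasks},
        {"clock", openClock},
    };

    std::string command;
    std::cout << "type an app name (notes, tasks, clock), or 'quit':\n> ";
    while (std::getline(std::cin, command) && command != "quit") {
        auto it = apps.find(command);
        if (it != apps.end()) {
            it->second();  // launch the matching app
        } else {
            std::cout << "unknown app: " << command << "\n";
        }
        std::cout << "> ";
    }
    return 0;
}
```

On actual hardware the handlers would render to the E Ink panel rather than print to a console, and the prompt would come from the built-in keyboard, but the dispatch pattern (typed string in, app function out) stays the same.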
  • WWW.WIRED.COM
    The Apple Watch Just Turned 10. Here's How Far It's Come
    When the Apple Watch launched, it was unclear if smartwatches would pan out. Ten years later, Apple has a $100-billion hit that reshaped the watch industry and ushered in a new age of fitness tracking.
  • WWW.NYTIMES.COM
Saying ‘Thank You’ to ChatGPT Uses Energy. Should You Do It Anyway?
Adding words to our chatbot conversations can apparently cost tens of millions of dollars. But some fear the cost of not saying please or thank you could be higher.