User experience without intentional limitations creates chaos
Apply defined constraints in favor of the user's well-being.

Image by cottonbro studio on Pexels.

Mad Max is an Australian film franchise following a police officer on the brink of societal collapse, where the government has no capacity to protect its citizens in a dystopian wasteland. The main character, Max, is wary of others, struggling to decide whether to help or go his own way. While we are not living on the brink of a civilizational collapse, a bad user experience can make you feel like we are. Applying intentional limitations to the user experience can help reduce bad behavior.

Constraints in design are often technical, resource, legal, and time-based limitations, which can play a big role in designing products. Besides maximizing profits, Corporate Social Responsibility (CSR) has been an integral part of company initiatives for a few decades already: businesses have strategies to make a positive impact on the world and take responsibility towards the society in which they operate.

The responsibilities are often categorized into environmental, ethical, philanthropic, and economic. CSR can be summarized as the three Ps: profit, people, and the planet. Product user responsibility refers to the duties of the person who uses the product, but what about product provider responsibility?

Technology companies are addressing issues of cyberbullying and how to better protect their users with tools, guides, and reporting, but more is still needed.

User and provider responsibilities

When a business is already forging meaningful relationships with customers, is aware of the constraints around design and development, adheres to legalities, has great CSR initiatives, and designs with the user in mind, are there other duties and responsibilities the product team should consider? Employee well-being is often discussed, but how about user well-being?

UX designers walk in the shoes of the personas they are building for, researching the motivations and behavior of the users, but are they also intentionally protecting and supporting what is in the best interest of the users? Yes, but beyond accessibility, problem-solving, ease of use, or enjoyability, the more invisible factors that can impact the experience are also part of the duty: the responsibility to design and develop for the user's well-being.

While we can't know with 100% certainty the real motivations behind one's actions, the provider can still strive to design to protect the user against possible harm, whether physical, mental, financial, or otherwise.

These 4 different profile images were created with Adobe Firefly 3. Image by the author.

Invisible constraints

If we intentionally apply a constraint to the user experience in favor of the user, would this be perceived as negative or positive? A limitation tends to have a negative connotation, but that is not always the case. While adding a constraint on the kind of images a user can upload as their profile picture can sound like a limiting factor, it only reads that way if we do not elaborate on why.

AI has advanced in recent years, which is great, but it brings a lot of attention to how to build for security, avoid fraud, and design the experience around the content we interact with. This is not only about detecting illicit content or picking up on certain words to protect the community, but also about determining whether something was created by AI. However, it is not ethically wrong or against the law to use AI-generated content as a profile picture, nor is it to detect AI-generated content and block its usage, so why would it matter?

Implementing such a limitation may not affect the usual Spotify user listening to music, but it can make a difference on a platform like LinkedIn, where strangers often interact with each other and exchange sensitive data, sharing a CV in the hopes of employment, for example. Context matters, especially when the user's data is at play.

A re-creation of a LinkedIn post/comment dialogue. The profile images were created with Adobe Firefly 3, except the author's profile image. None of the content is real. Image by the author.
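To make the constraint concrete, here is a minimal sketch of how a platform might gate profile-picture uploads on an AI-content check. The detect_ai_probability stub and both thresholds are hypothetical placeholders, not a real detector or any platform's actual policy; whether to block, label, or allow is exactly the product decision discussed above.

```python
from dataclasses import dataclass

def detect_ai_probability(image_bytes: bytes) -> float:
    """Estimated probability (0..1) that the image is AI-generated.

    Hypothetical stub: a real platform would combine a trained
    detector with provenance checks (covered later in this article).
    Returns 0.0 so the sketch runs as-is.
    """
    return 0.0

@dataclass
class UploadDecision:
    allowed: bool
    label: str | None   # e.g. a visible "AI-generated" badge
    reason: str

def review_profile_picture(image_bytes: bytes,
                           block_threshold: float = 0.95,
                           label_threshold: float = 0.70) -> UploadDecision:
    """Apply the intentional constraint: block or label likely AI images.

    The thresholds are assumptions for the sketch; tuning them is a
    product decision that depends on context (LinkedIn vs. Spotify).
    """
    p = detect_ai_probability(image_bytes)
    if p >= block_threshold:
        return UploadDecision(False, None, "blocked: image appears AI-generated")
    if p >= label_threshold:
        return UploadDecision(True, "AI-generated", "allowed with transparency label")
    return UploadDecision(True, None, "allowed")

print(review_profile_picture(b"raw image bytes"))
```

The decision layer, not the detector, is where the intentional constraint lives: a context like LinkedIn may justify stricter thresholds than a music app.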
AI detecting AI can become hard as technology evolves. Online platforms do have trust and safety measures in place, such as verified identities. Such measures can make users feel more confident and trusting when interacting on the platform. However, it is also easy to get past these measures.

Fraud in the US topped $10 billion in 2023, and one of the most commonly reported categories was imposter scams. Digital tools and platforms are making it even easier. Many data protection acts help users keep their data private and their activity untracked, but what about protection around interaction with a product?

The duty and obligation of the business is to prevent wrongdoing as much as possible. One big challenge can be how or when to approach these kinds of initiatives. Should one wait until there is a lot of negative feedback, or would this be similar to designing for edge cases? Could it simply be part of aiming to create the best possible experience?

A combination of manual and automated checks can help tackle AI misuse through AI authentication methods, defined by the Information Technology Industry Council (ITIC) as labeling:

- Watermarks: embedding an invisible or visible signal in text or an image, carrying information. This lets the user know the content was made with AI. Steganography, for example, is a technique that hides information inside the least significant bits of a media file (see the first sketch after this list).
- Provenance tracking and metadata authentication: tracing the history, modification, and quality of a dataset. Content provenance, or Content Credentials, binds provenance information to the media at creation or alteration (see the second sketch after this list).
- Human validation of content to verify whether the content was created by AI or not.

Content Credentials by the Coalition for Content Provenance and Authenticity (C2PA). Source: ITIC. Image recreated by the author.

One technique might not be enough to authenticate AI-generated content, and AI authentication is still developing, but informing users when content is generated by AI is a good policy for promoting consumer transparency around AI-generated content.
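As a toy illustration of the least-significant-bit technique mentioned in the first item above, the sketch below hides a short marker string in the low bits of raw pixel bytes. Real watermarking schemes are far more robust, surviving compression and cropping; this only demonstrates the core idea.

```python
def embed_lsb(pixels: bytearray, message: bytes) -> bytearray:
    """Hide `message` in the least significant bit of each pixel byte.

    `pixels` stands for raw, uncompressed image data. Toy example:
    a single re-encode or resize would destroy this watermark.
    """
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite only the lowest bit
    return out

def extract_lsb(pixels: bytes, length: int) -> bytes:
    """Read `length` bytes back out of the least significant bits."""
    result = bytearray()
    for i in range(length):
        byte = 0
        for j in range(8):
            byte = (byte << 1) | (pixels[i * 8 + j] & 1)
        result.append(byte)
    return bytes(result)

# Embed and recover an "AI-generated" marker.
pixels = bytearray(range(256)) * 4           # stand-in for raw pixel data
marked = embed_lsb(pixels, b"AI-generated")
assert extract_lsb(marked, len(b"AI-generated")) == b"AI-generated"
```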
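Content Credentials themselves follow the C2PA specification, which cryptographically signs a manifest bound to the media. The record below is a heavily simplified stand-in: it hashes the bytes and logs alterations but omits real signing, so treat the field names as illustrative rather than spec-compliant.

```python
import hashlib
import json
from datetime import datetime, timezone

def content_hash(media: bytes) -> str:
    """Bind the record to these exact bytes; any edit changes the hash."""
    return hashlib.sha256(media).hexdigest()

def new_manifest(media: bytes, tool: str, ai_generated: bool) -> dict:
    """Simplified provenance record created alongside the media.
    A real C2PA manifest is signed with the creator's certificate."""
    return {
        "content_sha256": content_hash(media),
        "created_at": datetime.now(timezone.utc).isoformat(),
        "generator": tool,
        "ai_generated": ai_generated,
        "edits": [],
    }

def record_edit(manifest: dict, media_after: bytes, action: str) -> dict:
    """Append an alteration and re-bind the record to the new bytes."""
    manifest["edits"].append({"action": action,
                              "at": datetime.now(timezone.utc).isoformat()})
    manifest["content_sha256"] = content_hash(media_after)
    return manifest

image = b"...raw image bytes..."
manifest = new_manifest(image, tool="Adobe Firefly 3", ai_generated=True)
manifest = record_edit(manifest, image + b"(cropped)", action="crop")
print(json.dumps(manifest, indent=2))
```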
Image by cottonbro studio on Pexels.

I was contacted by a CEO looking for a designer to create a mobile app design for their real-estate project. The LinkedIn profile had a legitimate photo, name, bio, 25k followers, and a premium account badge. However, the profile had no activity and no way to see the number of connections. The company name was listed on his profile, but the company was not accessible on LinkedIn. After researching on Google, I found the company had been making millions in annual revenue a few years before being delisted; yet according to LinkedIn, the company had 2 employees, whose profiles had the same convincing characteristics.

When I visited the same profile three weeks after contact, I noticed that the follower count had dropped to 20k and the company information had been removed. Information can seem legitimate at first glance, which makes digital literacy an important part of the Internet.

For comparison, this is a screenshot of a playlist on Spotify (2024), using the same generative AI image as the user profile in the LinkedIn example. The importance of the information displayed on profiles is very different.

Digital literacy and the responsibility of the product

Digital literacy varies between ages and educational backgrounds. A 2021 Stanford study found that fewer than 0.1% of the high-school students surveyed in 2019 could distinguish an authentic voting video from a fake one, and policies on digital literacy education vary across geographies. Digital literacy is not only for the young: senior citizens are also partaking in the digital transformation, creating a very diverse user group across multiple life stages.

Older adults tend not to be open to learning or using new digital skills, whether because younger family members handle things for them, because of a reluctance to engage with social media, or because of how much technology they accept into daily life.

Not everyone is learning or has learned digital literacy. Would it be the product or service provider's responsibility to educate users, or potential users, on how to make the best of their product?

Image by Vicente Viana Martinez on Pexels.

According to a Eurobarometer survey, 72% of users want to know how their data is processed on social media platforms, and 63% of EU citizens and residents want a secure single digital ID for all online services. With EU Digital Identity Wallets, the holder can verify their identity without revealing their full identity or other personal details, such as date of birth.

User flow of applying for a bank loan with the digital ID wallet. The user selects the required documents and submits them electronically to the bank. Source: The European Commission. Image by the author.

This ID gives the holder control over the details shared with third parties. The ID wallet initiative is currently being tested in real-life scenarios and is designed to be more secure and user-friendly than a password.
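The wallet's key property here is selective disclosure: proving one attribute without handing over the rest. The sketch below uses the salted-hash idea found in credential formats such as SD-JWT, though it is heavily simplified; a plain set of digests stands in for a credential the issuer would actually sign.

```python
import hashlib
import secrets

def digest(salt: str, name: str, value: str) -> str:
    return hashlib.sha256(f"{salt}|{name}|{value}".encode()).hexdigest()

# Issuer: salts every claim and, in a real wallet, signs the digests.
claims = {"name": "Alex Example", "date_of_birth": "1990-05-01", "over_18": "true"}
salts = {k: secrets.token_hex(16) for k in claims}
signed_digests = {digest(salts[k], k, v) for k, v in claims.items()}

# Holder: discloses only the claim the verifier needs, plus its salt.
disclosure = {"name": "over_18", "value": claims["over_18"], "salt": salts["over_18"]}

# Verifier: checks the disclosed claim against the signed digests
# without ever seeing the date of birth itself.
def verify(d: dict, digests: set) -> bool:
    return digest(d["salt"], d["name"], d["value"]) in digests

assert verify(disclosure, signed_digests)
```

A bank processing the loan application above could thus learn that an applicant is of age without ever receiving the underlying birth date.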
Spam filters are an example of setting intentional constraints on the experience. Emails about inheriting millions might be a thing of the past, yet it is still hard to determine for certain whether the person you are talking with online is real, made up, both, or neither. Product and service providers are often the experts in their field, so it should be part of their duty and responsibility to design, develop, and educate for what is best for the user.

Defined constraints

In most modern cars, a chime sounds continuously if your seatbelt is not fastened while the vehicle is moving. It is there to make you fasten the seatbelt and prevent fatal injuries. Phones in Japan are required to make a shutter sound when taking a picture to alert people nearby of photographic activity, part of a privacy protection law against nonconsensual photography.

Defined constraints aim to prevent a problem or limit the usage of features in favor of the user and the people around them.

Defined limitations are often applied to product plans, such as the differences between a Design Seat and a Viewer role in Figma. Perhaps limiting design privileges is sometimes more beneficial and secure than giving everyone full access.

Security constraints are applied when the user types their password wrong multiple times and is locked out for an interval before being able to try again, or is further asked to complete 2-step verification (see the first sketch below).

Product plan types and password constraints are not enforced by law, whereas the shutter sound requirement on mobile phones in Japan is. These types of constraints are the responsibility of the business.

Image by the author.

The UK has become the first country to introduce laws on password security under the Product Security and Telecommunications Infrastructure Act. The law ensures that every new smart device distributed by a manufacturer, importer, or distributor requires a password to be set upon start, and that passwords can't be default or too weak, such as "password" or "admin". This helps consumers protect their data and well-being. Furthermore, providers are required to communicate more transparently around security updates and vulnerabilities. Companies failing to adhere can face fines of up to $12.5 million or 4% of their global revenue, as well as recalls.

Besides security and safety, there are also defined limitations on experience. Financial tools, such as leveraged stock trading, often limit the usage of services based on the experience level of the user. This is not in place to restrict the user's freedom, but rather to protect them from major financial losses due to their lack of experience, as defined by the product (see the second sketch below).
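To ground the two password constraints described above, here is a minimal sketch that rejects the default or too-weak passwords the UK law targets and locks an account after repeated failures. The blocklist, thresholds, and in-memory store are assumptions for illustration; a real system would persist state and verify salted password hashes.

```python
import time

WEAK_PASSWORDS = {"password", "admin", "12345678", "qwerty"}   # assumed blocklist
MAX_ATTEMPTS = 5
LOCKOUT_SECONDS = 300

def is_acceptable_password(candidate: str) -> bool:
    """Reject defaults and obviously weak choices, in the spirit of the PSTI Act."""
    return len(candidate) >= 8 and candidate.lower() not in WEAK_PASSWORDS

failed: dict[str, list] = {}   # user -> [failure_count, locked_until_timestamp]

def check_login(user: str, password_ok: bool) -> str:
    """Apply the lockout constraint. `password_ok` would come from a real
    credential check (verifying a salted hash), omitted here."""
    count, locked_until = failed.get(user, [0, 0.0])
    if time.time() < locked_until:
        return "locked: try again later"
    if password_ok:
        failed.pop(user, None)
        return "ok"
    count += 1
    if count >= MAX_ATTEMPTS:
        failed[user] = [0, time.time() + LOCKOUT_SECONDS]
        return "locked: too many attempts (or escalate to 2-step verification)"
    failed[user] = [count, 0.0]
    return f"wrong password ({MAX_ATTEMPTS - count} attempts left)"

assert not is_acceptable_password("admin")
for _ in range(MAX_ATTEMPTS):
    status = check_login("alex", password_ok=False)
print(status)   # locked after five failures
```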
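And for the experience-based limits on leverage, the product-defined constraint could be as small as a tier table; the tiers and caps below are invented for the example and are not any broker's actual policy.

```python
# Hypothetical tiers: the maximum leverage a user may select.
LEVERAGE_CAPS = {"beginner": 1.0, "intermediate": 2.0, "experienced": 5.0}

def allowed_leverage(requested: float, experience_tier: str) -> float:
    """Clamp the requested leverage to the cap for the user's tier."""
    cap = LEVERAGE_CAPS.get(experience_tier, 1.0)   # unknown tier: safest default
    return min(requested, cap)

print(allowed_leverage(10.0, "beginner"))      # 1.0: protected from outsized losses
print(allowed_leverage(10.0, "experienced"))   # 5.0: still capped by the product
```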
User well-being

Intentional constraints do not mean that you are restricting the user's freedom; rather, they help, support, and guide users to do what is best for them. Limitations are also part of a company or product strategy. Applying limitations may not solve a user problem directly, but it can help to avoid one.

Corporate social responsibility has been making an impact for a long time, and in recent years businesses have become more location-independent, making online well-being perhaps an even greater responsibility in the future.

Dystopia or utopia: regardless of the situation, it is about making the best of the situation we are in.

References and further reading

- Mad Max
- What are Design Constraints?
- What is Corporate Social Responsibility? 4 Types
- Cyberbullying: What Is It and How to Stop It
- As Nationwide Fraud Losses Top $10 Billion in 2023, FTC Steps Up Efforts to Protect the Public
- The Importance of Corporate Social Responsibility in eCommerce
- AI or Not? How to Detect If An Image is AI-Generated
- AI vs. AI: Can AI Detect AI-Generated Images
- How Identity Verification Delivers Enhanced Trust and Safety for Digital Marketplaces
- Is Your Organization Ready to Deal with AI-Generated Fake IDs?
- Authenticating AI-Generated Content
- Information Technology Industry Council
- State Approaches to Digital Literacy Instruction
- Digital Media and Information Literacy for Adults Over 60: Five Insights for Media Development
- Why You Can't Disable the Shutter Sound on Japanese Phones
- What the UK's New Password Laws Mean for Global Cybersecurity
- European Digital Identity
- EU Digital Wallet Pilot Implementation | Shaping Europe's Digital Future
- Content Credentials