Smashing Magazine
Smashing Magazine delivers useful and innovative information to Web designers and developers.
Recent Updates
  • Building A Drupal To Storyblok Migration Tool: An Engineering Perspective
    smashingmagazine.com
This article is sponsored by Storyblok.

Content management is evolving. The traditional monolithic CMS approach is giving way to headless architectures, where content management and presentation are decoupled. This shift brings new challenges, particularly when organizations need to migrate from legacy systems to modern headless platforms.

Our team encountered this scenario when creating a migration path from Drupal to Storyblok. These systems handle content architecture quite differently: Drupal uses an entity-field model integrated with PHP, while Storyblok employs a flexible Stories and Blocks structure designed for headless delivery.

If you just need a script to do a simple yet extensible content migration from Drupal to Storyblok, I have already shared step-by-step instructions on how to download and use it. If you're interested in the process of creating such a script so that you can write your own (possibly better) version, stay here!

We observed that developers sometimes struggle with manual content transfers and custom scripts when migrating between CMSs. This led us to develop and share our migration approach, which we implemented as an open-source tool that others could use as a reference for their migration needs.

Our solution combines two main components: a custom Drush command that handles content mapping and transformation, and a new PHP client for Storyblok's Management API that leverages modern language features for an improved developer experience.

We'll explore the engineering decisions behind this tool's development, examining our architectural choices and how we addressed real-world migration challenges using modern PHP practices.

Note: You can find the complete source code of the migration tool in the Drupal exporter repo.

Planning The Migration Architecture

The journey from Drupal to Storyblok presents unique architectural challenges.
The fundamental difference lies in how these systems conceptualize content: Drupal structures content as entities with fields, while Storyblok uses a component-based approach with Stories and Blocks.

Initial Requirements Analysis

A successful migration tool needs to understand both systems intimately. Drupal's content model relies heavily on its Entity API, storing content as structured field collections within entities. A typical Drupal article might contain fields for the title, body content, images, and taxonomies. Storyblok, on the other hand, structures content as stories that contain blocks: reusable components that can be nested and arranged in a flexible way. It's a subtle difference that shaped our technical requirements, particularly around content mapping and data transformation, but ultimately, it's easy to see the relationships between the two content models.

Technical Constraints

Early in development, we identified several key constraints. Storyblok's Management API enforces rate limits that affect how quickly we can transfer content. Media assets must first be uploaded and then linked. Error recovery becomes essential when migrating hundreds of pieces of content.

The brand-new Management API PHP client handles these constraints through built-in retry mechanisms and response validation, so in writing a migration script, we don't need to worry about them.

Tool Selection

We chose Drush as our command-line interface for several reasons. First, it's deeply integrated with Drupal's bootstrap process, providing direct access to the Entity API and field data.
Second, Drupal developers are already familiar with its conventions, making our tool more accessible.

The decision to develop a new Management API client came from our experience with the evolution of PHP since we developed the first PHP client, and from our goal to provide developers with a dedicated tool for this specific API that offers an improved DX and a tailored set of features.

This groundwork shaped how we approached the migration workflow.

The Building Blocks: A New Management API Client

A content migration tool interacts heavily with Storyblok's Management API: creating stories, uploading assets, and managing tags. Each operation needs to be reliable and predictable. Our brand-new client simplifies these interactions through intuitive method calls. The client handles authentication, request formatting, and response parsing behind the scenes, letting developers focus on content operations rather than API mechanics.

Design For Reliability

Content migrations often involve hundreds of API calls. Our client includes built-in mechanisms for handling common scenarios like rate limiting and failed requests. The response handling pattern provides clear feedback about operation success. A logger can be injected into the client class, as we did with the Drush logger in our migration script from Drupal.

Improving The Development Experience

Beyond basic API operations, the client reduces cognitive load through predictable patterns. Data objects provide a structured way to prepare content for Storyblok. This pattern validates data early in the process, catching potential issues before they reach the API.

Designing The Migration Workflow

Moving from Drupal's entity-based structure to Storyblok's component model required careful planning of the migration workflow. Our goal was to create a process that would be both reliable and adaptable to different content structures.

Command Structure

The migration leverages Drupal's entity query system to extract content systematically.
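The retry behaviour described under Design For Reliability follows a widely used pattern that is easy to sketch outside the client. The snippet below is a generic JavaScript illustration, not the PHP client's actual API; `withRetry`, the retry count, and the delay values are all hypothetical choices for the example.

```javascript
// Generic retry-with-exponential-backoff wrapper (illustrative only).
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function withRetry(fn, { retries = 3, baseDelayMs = 200 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn(); // success: return the result immediately
    } catch (err) {
      if (attempt >= retries) throw err; // out of attempts: surface the error
      await sleep(baseDelayMs * 2 ** attempt); // back off: 200ms, 400ms, 800ms, ...
    }
  }
}
```

Wrapping each API call this way means a transient rate-limit response costs a short pause instead of a failed migration run.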
By default, access checks were disabled (a reversible business decision) to focus solely on migrating published nodes.

Key Steps And Insights

Text fields required minimal effort: values like value() mapped directly to Storyblok fields. Rich text posed no encoding challenges, enabling straightforward 1:1 transfers.

Handling Images
• Upload: Assets were sent to an AWS S3 bucket.
• Link: Storyblok's Asset API upload() method returned an object_id, simplifying field mapping.
• Assign: The asset ID and filename were attached to the story.

Managing Tags
Tags extracted from Drupal were pre-created via Storyblok's Tag API (optional, but it ensures consistency). When assigning tags to stories, Storyblok automatically creates missing ones, streamlining the process.

Why Staged Workflows Matter

The migration avoids broken references by prioritizing dependencies (assets first, tags next, content last). While pre-creating tags adds control, teams can adapt this logic; for example, letting Storyblok auto-generate tags to save time. Flexibility is key: every decision (access checks, tag workflows) can be adjusted to align with project goals.

Real-World Implementation Challenges

Migrating content between Drupal and Storyblok presents challenges that you, as the implementer, may encounter. For example, when dealing with large datasets, you may find that Drupal sites with thousands of nodes can quickly hit the rate limits enforced by Storyblok's Management API. In such cases, a batching mechanism for your requests is worth considering. Instead of processing every node at once, you can process a subset of records, wait for a short period of time, and then continue. Alternatively, you could use the createBulk method of the Story API in the Management API, which handles multiple story creations with built-in rate-limit handling and retries.
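The batching idea described above can be sketched in a few lines. This is a hedged, generic illustration (in JavaScript rather than the tool's PHP): `createStory` is a placeholder for whatever function performs the actual API request, and the batch size and delay are arbitrary example values, not documented Storyblok limits.

```javascript
// Split the nodes into fixed-size batches and pause between batches
// so the requests stay under the Management API rate limit.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function migrateInBatches(nodes, createStory, batchSize = 5, delayMs = 1000) {
  for (const batch of chunk(nodes, batchSize)) {
    await Promise.all(batch.map((node) => createStory(node))); // one batch at a time
    await sleep(delayMs); // breathe before the next batch
  }
}
```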
Another potential hurdle is the conversion of complex field types, especially when Drupal's nested structures or Paragraph fields need to be mapped to Storyblok's more flexible block-based model. One approach is first to analyze the nesting depth and structure of the Drupal content, then flatten deeply nested elements into reusable Storyblok components while maintaining the correct hierarchy. For example, a paragraph field with embedded media and text can be split into blocks within Storyblok, with each component representing a logical section of content. By structuring data this way before migration, you ensure that content remains editable and properly structured in the new system.

Data consistency is another aspect that you need to manage carefully. When migrating hundreds of records, partial failures are always a risk. One approach to managing this is to log detailed information for each migration operation and implement a retry mechanism for failed operations. For example, wrapping API calls in a try-catch block and logging errors can be a practical way to ensure that no records are silently dropped.

When dealing with fields such as taxonomy terms or tags created on the fly in Storyblok, you may run into duplication issues. A good practice is to perform a check before creating a new tag. This could involve maintaining a local cache of previously created tags and checking against it before sending a create request to the API. The same goes for images; a check could ensure you don't upload the same asset twice.

Lessons Learned And Looking Forward

A dedicated API client for Storyblok streamlined interactions, abstracting backend complexity while improving code maintainability.
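The local-cache check for tags (and, analogously, assets) described above can be sketched as follows. Again, this is a generic JavaScript illustration: `createTag` stands in for the real API call, and the function names are hypothetical.

```javascript
// Remember which tags were already created and skip duplicate requests.
function makeTagCreator(createTag) {
  const seen = new Set();
  return async function ensureTag(name) {
    if (seen.has(name)) return false; // already created: skip the API call
    await createTag(name);
    seen.add(name);
    return true; // a create request was actually sent
  };
}
```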
Early use of structured data objects to prepare content proved critical, enabling pre-emptive error detection and reducing API failures.

We also ran into some challenges and see room for improvement:
• Encoding issues in rich text (e.g., HTML entities) were resolved with a pre-processing step.
• Performance bottlenecks with large text and images required memory optimization and refined request handling.

Enhancements could include support for Drupal Layout Builder, advanced validation layers, or dynamic asset management systems. For deeper dives into our Management API client or migration strategies, reach out via Discord, explore the PHP Client repo, or connect with me on Mastodon. Feedback and contributions are welcome!
  • How To Argue Against AI-First Research
    smashingmagazine.com
With AI upon us, companies have recently been turning their attention to synthetic user testing: AI-driven research that replaces UX research. There, questions are answered by AI-generated customers, and human tasks are performed by AI agents.

However, AI isn't used just for desk research or discovery; it's actual usability testing with AI personas that mimic the behavior of real customers within the actual product. It's like UX research, just, well, without the users.

If this sounds worrying, confusing, and outlandish, it is. But this doesn't stop companies from adopting AI research to drive business decisions, although, unsurprisingly, the undertaking can be dangerous, risky, and expensive, and usually diminishes user value.

This article is part of our ongoing series on UX. You can find more details on design patterns and UX strategy in Smart Interface Design Patterns, with live UX training coming up soon. Free preview.

Fast, Cheap, Easy, And Imaginary

Erika Hall famously noted that design is only as human-centered as the business model allows. If a company is heavily driven by hunches, assumptions, and strong opinions, there will be little to no interest in properly done UX research in the first place.

But unlike UX research, AI research (conveniently called synthetic testing) is fast, cheap, and easy to re-run. It doesn't raise uncomfortable questions, and it doesn't flag wrong assumptions. It doesn't require user recruitment, much time, or long-winded debates.

And: it can manage thousands of AI personas at once. By studying AI-generated output, we can discover common journeys, navigation patterns, and common expectations. We can anticipate how people behave and what they would do.

Well, that's the big promise.
And that's where we start running into big problems.

LLMs Are People Pleasers

Good UX research has roots in what actually happened, not what might have happened or what might happen in the future. By nature, LLMs are trained to provide the most plausible or most likely output based on patterns captured in their training data. These patterns, however, emerge from expected behaviors of statistically average profiles extracted from content on the web. But these people don't exist, and they never have.

By default, user segments are not scoped and not curated. They don't represent the customer base of any product. So to be useful, we must eloquently prompt AI by explaining who users are, what they do, and how they behave. Otherwise, the output won't match user needs and won't apply to our users.

When producing user insights, LLMs can't generate unexpected things beyond what we're already asking about. In comparison, researchers are able to define what's relevant as the process unfolds. In actual user testing, insights can help shift priorities or radically reimagine the problem we're trying to solve, as well as potential business outcomes. Real insights come from unexpected behavior, from reading behavioral clues and emotions, from observing a person doing the opposite of what they said. We can't replicate that with LLMs.

AI User Research Isn't Better Than Nothing

Pavel Samsonov articulates that things that sound like customers might say them are worthless. But things that customers actually have said, done, or experienced carry inherent value (although they could be exaggerated). We just need to interpret them correctly.

AI user research isn't "better than nothing" or more effective. It creates an illusion of customer experiences that never happened and that are, at best, good guesses but, at worst, misleading and non-applicable.
Relying on AI-generated insights alone isn't much different than reading tea leaves.

The Cost Of Mechanical Decisions

We often hear about the breakthrough of automation and knowledge generation with AI. Yet we often forget that automation comes at a cost: the cost of mechanical decisions that are typically indiscriminate, favor uniformity, and erode quality.

As Maria Rosala and Kate Moran write, the problem with AI research is that it most certainly will be misrepresentative, and without real research, you won't catch and correct those inaccuracies. Making decisions without talking to real customers is dangerous, harmful, and expensive.

Beyond that, synthetic testing assumes that people fit in well-defined boxes, which is rarely true. Human behavior is shaped by our experiences, situations, and habits, which can't be replicated by text generation alone. AI strengthens biases, supports hunches, and amplifies stereotypes.

Triangulate Insights Instead Of Verifying Them

Of course, AI can provide useful starting points to explore early in the process. But inherently, it also invites false impressions and unverified conclusions presented with an incredible level of confidence and certainty.

Starting with human research conducted with real customers using a real product is just much more reliable. After doing so, we can still apply AI to see if we perhaps missed something critical in user interviews. AI can enhance, but not replace, UX research.

Also, when we do use AI for desk research, it can be tempting to try to validate AI insights with actual user testing. However, once we plant a seed of insight in our head, it's easy to recognize its signs everywhere, even if it really isn't there.

Instead, we study actual customers, then triangulate data: track clusters or the most heavily trafficked parts of the product. It might be that analytics and AI desk research confirm your hypothesis. That would give you a much stronger standing to move forward in the process.
Wrapping Up

I might sound like a broken record, but I keep wondering why we feel the urgency to replace UX work with automated AI tools. Good design requires a good amount of critical thinking, observation, and planning. To me personally, cleaning up after AI-generated output takes way more time than doing the actual work. There is incredible value in talking to people who actually use your product.

I would always choose one day with a real customer over one hour with 1,000 synthetic users pretending to be humans.

Useful Resources
• Synthetic Users, by Maria Rosala and Kate Moran
• Synthetic Users: The Next Revolution in UX Research?, by Carolina Guimarães
• AI Users Are Neither AI Nor Users, by Debbie Levitt
• Planning Research with Generative AI, by Maria Rosala
• Synthetic Testing, by Stéphanie Walter and Nikki Anderson
• The Dark Side of Synthetic AI Research, by Greg Nudelman
  • Blossoms, Flowers, And The Magic Of Spring (April 2025 Wallpapers Edition)
    smashingmagazine.com
Starting the new month with a little inspiration boost: that's the idea behind our monthly wallpapers series, which has been going on for more than fourteen years already. Each month, the wallpapers are created by the community for the community, and everyone who has an idea for a design is welcome to join in, experienced designers just like aspiring artists. Of course, it wasn't any different this time around.

For this edition, creative folks from all across the globe once again got their ideas flowing to bring some good vibes to your screens. You'll find their wallpapers compiled below, along with a selection of timeless April favorites from our archives that are just too good to be forgotten. A huge thank-you to everyone who shared their designs with us this month; you're smashing!

If you, too, would like to get featured in one of our upcoming wallpapers posts, please don't hesitate to submit your design. We can't wait to see what you'll come up with! Happy April!

You can click on every image to see a larger preview.

We respect and carefully consider the ideas and motivation behind each and every artist's work. This is why we give all artists full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren't in any way influenced by us but rather designed from scratch by the artists themselves.

April Blooms And Easter Joy
April bursts with color, joy, and the magic of new beginnings. As spring awakens, Easter fills the air with wonder: bunnies paint playful masterpieces on eggs, and laughter weaves through cherished traditions. It's a season to embrace warmth, kindness, and the simple beauty of blooming days.
Designed by LibraFire from Serbia.
Preview
With calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440
Without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Walking Among Chimpanzees
It's April, and we're heading to Tanzania with Jane Goodall, her chimpanzees, and her reflection that we are all important: "Every individual matters. Every individual has a role to play. Every individual makes a difference." Designed by Veronica Valenzuela from Spain.
Preview
With calendar: 640x480, 800x480, 1024x768, 1280x720, 1280x800, 1440x900, 1600x1200, 1920x1080, 1920x1440, 2560x1440
Without calendar: 640x480, 800x480, 1024x768, 1280x720, 1280x800, 1440x900, 1600x1200, 1920x1080, 1920x1440, 2560x1440

Eggcited
Designed by Ricardo Gimenes from Spain.
Preview
With calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160
Without calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160

2001
Designed by Ricardo Gimenes from Spain.
Preview
With calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160
Without calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200,
1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160

Swing Into Spring
Our April calendar need not mark any special occasion; April itself is a reason to celebrate. It was a breeze creating this minimal, pastel-colored calendar design with a custom lettering font and plant pattern for the ultimate spring feel. Designed by PopArt Studio from Serbia.
Preview: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Spring Awakens
We all look forward to the awakening of a life that spreads its wings after a dormant winter and opens its petals to greet us. Long live spring, long live life. Designed by LibraFire from Serbia.
Preview: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Inspiring Blossom
"Sweet spring is your time is my time is our time for springtime is lovetime and viva sweet love," wrote E. E. Cummings. And we have a question for you: Is there anything more refreshing, reviving, and recharging than nature in blossom? Let it inspire us all to rise up, hold our heads high, and show the world what we are made of. Designed by PopArt Studio from Serbia.
Preview: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Dreaming
The moment when you just walk and your imagination fills up your mind with thoughts.
Designed by Gal Shir from Israel.
Preview: 340x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Clover Field
Designed by Nathalie Ouederni from France.
Preview: 1024x768, 1280x1024, 1440x900, 1680x1200, 1920x1200, 2560x1440

Rainy Day
Designed by Xenia Latii from Berlin, Germany.
Preview: 320x480, 640x480, 800x480, 800x600, 1024x768, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

A Time For Reflection
"We're all equal before a wave." (Laird Hamilton) Designed by Shawna Armstrong from the United States.
Preview: 1440x900, 1600x1200, 1680x1050, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Purple Rain
This month is International Guitar Month! Time to get out your guitar and play. As a graphic designer/illustrator, I find that all the variations of guitar shapes beg to be used for a fun design. Search the guitar shapes represented and see if you spot one similar to yours, or see if you can identify some of the different styles that some famous guitarists have played (BTW, Prince's guitar is in there, and purple is just a cool color). Designed by Karen Frolo from the United States.
Preview: 1024x768, 1024x1024, 1280x800, 1280x960, 1280x1024, 1366x768, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Wildest Dreams
We love the art direction, story, and overall cinematography of the Wildest Dreams music video by Taylor Swift. It inspired us to create this illustration. Hope it will look good on your desktops.
Designed by Kasra Design from Malaysia.
Preview: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Sakura
Spring is finally here with its sweet Sakura flowers, which remind me of my trip to Japan. Designed by Laurence Vagner from France.
Preview: 1280x800, 1280x1024, 1680x1050, 1920x1080, 1920x1200, 2560x1440

April Fox
Designed by MasterBundles from the United States.
Preview: 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Fairytale
A tribute to Hans Christian Andersen. Happy Birthday! Designed by Roxi Nastase from Romania.
Preview: 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Coffee Morning
Designed by Ricardo Gimenes from Spain.
Preview: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160

The Loneliest House In The World
March 26 was Solitude Day. To celebrate it, here is a picture of the loneliest house in the world. It is a real house; I found it on YouTube. Designed by Vlad Gerasimov from Georgia.
Preview: 800x480, 800x600, 1024x600, 1024x768, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1440x960, 1600x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 2560x1600, 2880x1800, 3072x1920, 3840x2160, 5120x2880

The Perpetual Circle
Inspired by the Black Forest, which begins right behind our office windows, so we can watch the perpetual circle of nature when we take a look outside.
Designed by Nils Kunath from Germany.
Preview: 320x480, 640x480, 1024x768, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Ready For April
It is very common that it rains in April. This year, I am not sure. But whatever happens, we are prepared! Designed by Verónica Valenzuela from Spain.
Preview: 800x480, 1024x768, 1152x864, 1280x800, 1280x960, 1440x900, 1680x1200, 1920x1080, 2560x1440

Happy Easter
Designed by Tazi Design from Australia.
Preview: 320x480, 640x480, 800x600, 1024x768, 1152x864, 1280x720, 1280x960, 1600x1200, 1920x1080, 1920x1440, 2560x1440

In The River
Spring is here! Crocodiles seek out the heat and stay in the river. Designed by Veronica Valenzuela from Spain.
Preview: 640x480, 800x480, 1024x768, 1280x720, 1280x800, 1440x900, 1600x1200, 1920x1080, 1920x1440, 2560x1440

Springtime Sage
Spring and fresh herbs always feel like they complement each other. Keeping it light and fresh with this wallpaper welcomes a new season! Designed by Susan Chiang from the United States.
Preview: 320x480, 1024x768, 1280x800, 1280x1024, 1400x900, 1680x1200, 1920x1200, 1920x1440

Citrus Passion
Designed by Nathalie Ouederni from France.
Preview: 320x480, 1024x768, 1200x1024, 1440x900, 1600x1200, 1680x1200, 1920x1200, 2560x1440

Walking To The Wizard
We walked to Oz with our friends. The road is long, but we follow the yellow bricks. Are you coming with us? Designed by Veronica Valenzuela from Spain.
Preview: 640x480, 800x480, 1024x768, 1280x720, 1280x800, 1440x900, 1600x1200, 1920x1080, 1920x1440, 2560x1440

Hello!
Designed by Rachel from the United States.
Preview: 640x1136, 1080x1920, 1280x800, 1280x960, 1366x768, 1440x900, 1600x900, 1680x1200, 1920x1080, 1920x1200, 2048x2048, 2560x1440

Oceanic Wonders
Celebrate National Dolphin Day on April 14th by acknowledging the captivating beauty and importance of dolphins in our oceans!
Designed by PopArt Studio from Serbia.
Preview: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Playful Alien
Everything would be more fun if a little alien had the controllers. Designed by Maria Keller from Mexico.
Preview: 320x480, 640x480, 640x1136, 750x1334, 800x600, 1024x768, 1024x1024, 1152x864, 1242x2208, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 2880x1800

Good Day
Some pretty flowers and springtime always make for a good day. Designed by Amalia Van Bloom from the United States.
Preview: 640x1136, 1024x768, 1280x800, 1280x1024, 1440x900, 1920x1200, 2560x1440

April Showers
Designed by Ricardo Gimenes from Spain.
Preview: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Fusion
Designed by Rio Creativo from Poland.
Preview: 1280x800, 1680x1050, 1920x1080, 1920x1200, 2560x1440

Do Doodling
Designed by Design Studio from India.
Preview: 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Ipoh Hor Fun
Missing my hometown's delicious Kai See Hor Fun (in Cantonese), which literally translates to Shredded Chicken Flat Rice Noodles. It is served in a clear chicken and prawn soup with chicken shreds, prawns, spring onions, and noodles. Designed by Lew Su Ann from Brunei.
Preview: 640x480, 800x600, 1024x768, 1152x864, 1280x720, 1280x800, 1280x960, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1920x1080, 1920x1200, 1920x1440
  • Adaptive Video Streaming With Dash.js In React
    smashingmagazine.com
I was recently tasked with creating video reels that needed to play smoothly under a slow network or on low-end devices. I started with the native HTML5 <video> tag but quickly hit a wall: it just doesn't cut it when connections are slow or devices are underpowered.

After some research, I found that adaptive bitrate streaming was the solution I needed. But here's the frustrating part: finding a comprehensive, beginner-friendly guide was difficult. The resources on MDN and other websites were helpful but lacked the end-to-end tutorial I was looking for.

That's why I'm writing this article: to provide you with the step-by-step guide I wish I had found. I'll bridge the gap between writing FFmpeg scripts, encoding video files, and implementing the DASH-compatible video player (Dash.js), with code examples you can follow.

Going Beyond The Native HTML5 <video> Tag

You might be wondering why you can't simply rely on the HTML <video> element. There's a good reason for that. Let's compare the difference between a native <video> element and adaptive video streaming in browsers.

Progressive Download

With progressive downloading, your browser downloads the video file linearly from the server over HTTP and starts playback as soon as it has buffered enough data. This is the default behavior of the <video> element.

<video src="rabbit320.mp4" />

When you play the video, check your browser's network tab, and you'll see multiple requests with the 206 Partial Content status code. It uses HTTP 206 Range Requests to fetch the video file in chunks. The server sends specific byte ranges of the video to your browser. When you seek, the browser makes more range requests asking for new byte ranges (e.g., "Give me bytes 1,000,000 to 2,000,000"). In other words, it doesn't fetch the entire file all at once. Instead, it delivers partial byte ranges from the single MP4 video file on demand.
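The byte-range mechanism described above is something the browser does for you automatically, but it is easy to reproduce by hand to see what is going on. The sketch below builds the same kind of Range header; the URL is a placeholder, and whether you get 206 Partial Content or 200 OK depends on whether the server honors range requests.

```javascript
// Build an HTTP Range header value, e.g. rangeHeader(0, 1023) -> "bytes=0-1023".
function rangeHeader(startByte, endByte) {
  return `bytes=${startByte}-${endByte}`;
}

// Fetch only a slice of a file. A server that supports range requests
// answers 206 Partial Content; one that ignores the header answers
// 200 OK with the full file.
async function fetchByteRange(url, startByte, endByte) {
  return fetch(url, { headers: { Range: rangeHeader(startByte, endByte) } });
}
```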
This is still considered a progressive download because only a single file is fetched over HTTP; there is no bandwidth or quality adaptation. If the server or browser doesn't support range requests, the entire video file is downloaded in a single request, returning a 200 OK status code. In that case, the video can only begin playing once the entire file has finished downloading.

The problem? If you're on a slow connection trying to watch high-resolution video, you'll be waiting a long time before playback starts.

Adaptive Bitrate Streaming

Instead of serving one single video file, adaptive bitrate (ABR) streaming splits the video into multiple segments at different bitrates and resolutions. During playback, the ABR algorithm automatically selects the highest-quality segment that can be downloaded in time for smooth playback, based on your network connectivity, bandwidth, and other device capabilities. It continues adjusting throughout playback to adapt to changing conditions.

This magic happens through two key browser technologies:

Media Source Extensions (MSE)
It allows passing a MediaSource object to the src attribute in <video>, enabling sending multiple SourceBuffer objects that represent video segments.

<video src="blob:https://example.com/6e31fe2a-a0a8-43f9-b415-73dc02985892" />

Media Capabilities API
It provides information on your device's video decoding and encoding abilities, enabling ABR to make informed decisions about which resolution to deliver.

Together, they enable the core functionality of ABR: serving video chunks optimized for your specific device limitations in real time.

Streaming Protocols: MPEG-DASH Vs. HLS

As mentioned above, to stream media adaptively, a video is split into chunks at different quality levels across various time points. We need to facilitate the process of switching between these segments adaptively in real time. To achieve this, ABR streaming relies on specific protocols.
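To make the selection step above concrete, here is a deliberately simplified, toy version of what an ABR algorithm does: pick the highest-bitrate rendition the measured bandwidth can sustain. Real players such as Dash.js use far more sophisticated heuristics (buffer level, throughput history, and so on); the field names, bitrates, and safety factor here are illustrative only.

```javascript
// Toy ABR selection: the highest bitrate that fits within a safety
// margin of the measured bandwidth; fall back to the lowest rendition
// rather than stalling when nothing fits.
function pickRendition(renditions, bandwidthKbps, safetyFactor = 0.8) {
  const affordable = renditions.filter(
    (r) => r.bitrateKbps <= bandwidthKbps * safetyFactor
  );
  if (affordable.length === 0) {
    return renditions.reduce((a, b) => (a.bitrateKbps < b.bitrateKbps ? a : b));
  }
  return affordable.reduce((a, b) => (a.bitrateKbps > b.bitrateKbps ? a : b));
}
```

A player re-runs this kind of decision for every segment, which is what lets quality climb or drop as conditions change.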
The two most common ABR protocols are:

- MPEG-DASH,
- HTTP Live Streaming (HLS).

Both of these protocols utilize HTTP to send video files, so they are compatible with HTTP web servers. This article focuses on MPEG-DASH. However, it's worth noting that DASH isn't supported by Apple devices or browsers, as mentioned in Mux's article.

MPEG-DASH

MPEG-DASH enables adaptive streaming through:

A Media Presentation Description (MPD) file
This XML manifest file contains information on how to select and manage streams based on adaptive rules.

Segmented Media Files
Video and audio files are divided into segments at different resolutions and durations using MPEG-DASH-compliant codecs and formats.

On the client side, a DASH-compliant video player reads the MPD file and continuously monitors network bandwidth. Based on available bandwidth, the player selects the appropriate bitrate and requests the corresponding video chunk. This process repeats throughout playback, ensuring smooth, optimal quality.

Now that you understand the fundamentals, let's build our adaptive video player!

Steps To Build An Adaptive Bitrate Streaming Video Player

Here's the plan:

1. Transcode the MP4 video into audio and video renditions at different resolutions and bitrates with FFmpeg.
2. Generate an MPD file with FFmpeg.
3. Serve the output files from the server.
4. Build the DASH-compatible video player to play the video.

Install FFmpeg

For macOS users, install FFmpeg using Brew by running the following command in your terminal:

brew install ffmpeg

For other operating systems, please refer to FFmpeg's documentation.

Generate Audio Rendition

Next, run the following script to extract the audio track and encode it in WebM format for DASH compatibility:

ffmpeg -i "input_video.mp4" -vn -acodec libvorbis -ab 128k "audio.webm"

- -i "input_video.mp4": Specifies the input video file.
- -vn: Disables the video stream (audio-only output).
- -acodec libvorbis: Uses the libvorbis codec to encode audio.
- -ab 128k: Sets the audio bitrate to 128 kbps.
- "audio.webm": Specifies the output audio file in WebM format.

Generate Video Renditions

Run this script to create three video renditions with varying resolutions and bitrates. The largest resolution should match the input file size. For example, if the input video is 576×1024 at 30 frames per second (fps), the script generates renditions optimized for vertical video playback.

ffmpeg -i "input_video.mp4" -c:v libvpx-vp9 -keyint_min 150 -g 150 \
-tile-columns 4 -frame-parallel 1 -f webm \
-an -vf scale=576:1024 -b:v 1500k "input_video_576x1024_1500k.webm" \
-an -vf scale=480:854 -b:v 1000k "input_video_480x854_1000k.webm" \
-an -vf scale=360:640 -b:v 750k "input_video_360x640_750k.webm"

- -c:v libvpx-vp9: Uses libvpx-vp9 as the VP9 video encoder for WebM.
- -keyint_min 150 and -g 150: Set a 150-frame keyframe interval (approximately every 5 seconds at 30 fps). This allows bitrate switching every 5 seconds.
- -tile-columns 4 and -frame-parallel 1: Optimize encoding performance through parallel processing.
- -f webm: Specifies the output format as WebM.

In each rendition:

- -an: Excludes audio (video-only output).
- -vf scale=576:1024: Scales the video to a resolution of 576×1024 pixels.
- -b:v 1500k: Sets the video bitrate to 1500 kbps.

WebM is chosen as the output format because the files are smaller in size and optimized, yet widely compatible with most web browsers.

Generate MPD Manifest File

Combine the video renditions and audio track into a DASH-compliant MPD manifest file by running the following script:

ffmpeg \
  -f webm_dash_manifest -i "input_video_576x1024_1500k.webm" \
  -f webm_dash_manifest -i "input_video_480x854_1000k.webm" \
  -f webm_dash_manifest -i "input_video_360x640_750k.webm" \
  -f webm_dash_manifest -i "audio.webm" \
  -c copy \
  -map 0 -map 1 -map 2 -map 3 \
  -f webm_dash_manifest \
  -adaptation_sets "id=0,streams=0,1,2 id=1,streams=3" \
  "input_video_manifest.mpd"

- -f webm_dash_manifest -i: Specifies each input file so that the DASH video player can switch between them dynamically based on network conditions.
- -map 0 -map 1 -map 2 -map 3: Includes all video (0, 1, 2) and audio (3) streams in the final manifest.
- -adaptation_sets: Groups streams into adaptation sets:
  - id=0,streams=0,1,2: Groups the video renditions into a single adaptation set.
  - id=1,streams=3: Assigns the audio track to a separate adaptation set.

The resulting MPD file (input_video_manifest.mpd) describes the streams and enables adaptive bitrate streaming in MPEG-DASH.

<?xml version="1.0" encoding="UTF-8"?>
<MPD
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns="urn:mpeg:DASH:schema:MPD:2011"
  xsi:schemaLocation="urn:mpeg:DASH:schema:MPD:2011"
  type="static"
  mediaPresentationDuration="PT81.166S"
  minBufferTime="PT1S"
  profiles="urn:mpeg:dash:profile:webm-on-demand:2012">
  <Period id="0" start="PT0S" duration="PT81.166S">
    <AdaptationSet id="0" mimeType="video/webm" codecs="vp9" lang="eng" bitstreamSwitching="true" subsegmentAlignment="false" subsegmentStartsWithSAP="1">
      <Representation id="0" bandwidth="1647920" width="576" height="1024">
        <BaseURL>input_video_576x1024_1500k.webm</BaseURL>
        <SegmentBase indexRange="16931581-16931910">
          <Initialization range="0-645" />
        </SegmentBase>
      </Representation>
      <Representation id="1" bandwidth="1126977" width="480" height="854">
        <BaseURL>input_video_480x854_1000k.webm</BaseURL>
        <SegmentBase indexRange="11583599-11583986">
          <Initialization range="0-645" />
        </SegmentBase>
      </Representation>
      <Representation id="2" bandwidth="843267" width="360" height="640">
        <BaseURL>input_video_360x640_750k.webm</BaseURL>
        <SegmentBase indexRange="8668326-8668713">
          <Initialization range="0-645" />
        </SegmentBase>
      </Representation>
    </AdaptationSet>
    <AdaptationSet id="1" mimeType="audio/webm" codecs="vorbis" lang="eng" audioSamplingRate="44100" bitstreamSwitching="true" subsegmentAlignment="true" subsegmentStartsWithSAP="1">
      <Representation id="3" bandwidth="89219">
        <BaseURL>audio.webm</BaseURL>
        <SegmentBase indexRange="921727-922055">
          <Initialization range="0-4889" />
        </SegmentBase>
      </Representation>
    </AdaptationSet>
  </Period>
</MPD>

After completing these steps, you'll have:

- Three video renditions (576x1024, 480x854, 360x640),
- One audio track, and
- An MPD manifest file.

input_video.mp4
audio.webm
input_video_576x1024_1500k.webm
input_video_480x854_1000k.webm
input_video_360x640_750k.webm
input_video_manifest.mpd

The original video input_video.mp4 should also be kept to serve as a fallback video source later.

Serve The Output Files

These output files can now be uploaded to cloud storage (e.g., AWS S3 or Cloudflare R2) for playback. While they can be served directly from a local folder, I highly recommend storing them in cloud storage and leveraging a CDN to cache the assets for better performance. Both AWS and Cloudflare support HTTP range requests out of the box.

Building The DASH-Compatible Video Player In React

There's nothing like a real-world example to help understand how everything works. There are different ways to implement a DASH-compatible video player, but I'll focus on an approach using React.

First, install the Dash.js npm package by running:

npm i dashjs

Next, create a component called <DashVideoPlayer /> and initialize the Dash MediaPlayer instance by pointing it to the MPD file when the component mounts. The ref callback function runs upon the component mounting, and within the callback function, playerRef refers to the actual Dash MediaPlayer instance and is bound with event listeners.
We also include the original MP4 URL in the <source> element as a fallback if the browser doesn't support MPEG-DASH. If you're using the Next.js app router, remember to add the "use client" directive to enable client-side hydration, as the video player is only initialized on the client side.

Here is the full example:

import dashjs from 'dashjs'
import { useCallback, useRef } from 'react'

export const DashVideoPlayer = () => {
  const playerRef = useRef()

  const callbackRef = useCallback((node) => {
    if (node !== null) {
      playerRef.current = dashjs.MediaPlayer().create()
      playerRef.current.initialize(node, "https://example.com/uri/to/input_video_manifest.mpd", false)

      playerRef.current.on('canPlay', () => {
        // the video is playable
      })
      playerRef.current.on('error', (e) => {
        // handle error
      })
      playerRef.current.on('playbackStarted', () => {
        // handle playback started
      })
      playerRef.current.on('playbackPaused', () => {
        // handle playback paused
      })
      playerRef.current.on('playbackWaiting', () => {
        // handle playback buffering
      })
    }
  }, [])

  return (
    <video ref={callbackRef} width={310} height={548} controls>
      <source src="https://example.com/uri/to/input_video.mp4" type="video/mp4" />
      Your browser does not support the video tag.
    </video>
  )
}

Result

Observe the changes in the video file when the network connectivity is adjusted from Fast 4G to 3G using Chrome DevTools. It switches from 480p to 360p, showing how the experience is optimized for more or less available bandwidth.

Conclusion

That's it! We just implemented a working DASH-compatible video player in React to establish video with adaptive bitrate streaming. Again, the benefits of this are rooted in performance. When we adopt ABR streaming, we're requesting the video in smaller chunks, allowing for more immediate playback than we'd get if we needed to fully download the video file first. And we've done it in a way that supports multiple versions of the same video, allowing us to serve the best format for the user's device.

References

- "HTTP Range Request And MP4 Video Play In Browser," Zeng Xu
- "Setting up adaptive streaming media sources" (MDN)
- "DASH Adaptive Streaming for HTML video" (MDN)
  • Previewing Content Changes In Your Work With document.designMode
    smashingmagazine.com
So, you just deployed a change to your website. Congrats! Everything went according to plan, but now that you look at your work in production, you start questioning your change. Perhaps that change was as simple as a new heading and doesn't seem to fit the space. Maybe you added an image, but it just doesn't feel right in that specific context.

What do you do? Do you start deploying more changes? It's not like you need to crack open Illustrator or Figma to mock up a small change like that, but previewing your changes before deploying them would still be helpful.

Enter document.designMode. It's not new. In fact, I just recently came across it for the first time and had one of those "Wait, this exists?" moments because it's a tool we've had forever, even in Internet Explorer 6. But for some reason, I'm only now hearing about it, and it turns out that many of my colleagues are also hearing about it for the first time.

What exactly is document.designMode? Perhaps a little video demonstration can help show how it allows you to make direct edits to a page. At its simplest, document.designMode makes webpages editable, similar to a text editor. I'd say it's like having an edit mode for the web: one can click anywhere on a webpage to modify existing text, move stuff around, and even delete elements. It's like having Apple's Distraction Control feature at your beck and call. I think this is a useful tool for developers, designers, clients, and regular users alike.

You might be wondering if this is just like contentEditable because, at a glance, they both look similar. But no, the two serve different purposes.
contentEditable is focused on making a specific element editable, while document.designMode makes the whole page editable.

How To Enable document.designMode In DevTools

Enabling document.designMode can be done in the browser's developer tools:

1. Right-click anywhere on a webpage and click Inspect.
2. Click the Console tab.
3. Type document.designMode = "on" and press Enter.

To turn it off, refresh the page. That's it.

Another method is to create a bookmark that activates the mode when clicked:

1. Create a new bookmark in your browser. You can name it whatever you like, e.g., EDIT_MODE.
2. Input this code in the URL field:

javascript:(function(){document.designMode = document.designMode === 'on' ? 'off' : 'on';})();

And now you have a switch that toggles document.designMode on and off.

Use Cases

There are many interesting, creative, and useful ways to use this tool.

Basic Content Editing

I dare say this is the core purpose of document.designMode: editing any text element of a webpage for whatever reason. It could be the headings, paragraphs, or even bullet points. Whatever the case, your browser effectively becomes a What You See Is What You Get (WYSIWYG) editor, where you can make and preview changes on the spot.

Landing Page A/B Testing

Let's say we have a product website with existing copy, but then you check out your competitors, and their copy looks more appealing. Naturally, you'd want to test it out. Instead of editing on the back end or taking notes for later, you can use document.designMode to immediately see how that copy variation fits into the landing page layout and then easily compare and contrast the two versions. This could also be useful for copywriters or solo developers.

SEO Title And Meta Description

Everyone wants their website to rank at the top of search results because that means more traffic.
However, as broad as SEO is as a practice, the <title> tag and <meta> description are a website's first impression in search results, both for visitors and search engines, as they can make or break the click-through rate.

The question that arises is: how do you know if certain text gets cut off in search results? I think document.designMode can answer that before you push a change live. With this tool, it's a lot easier to see how different title lengths look when truncated, whether the keywords are instantly visible, and how compelling the result is compared to competitors in the same search results.

Developer Workflows

To be completely honest, developers probably won't want to use document.designMode for actual development work. However, it can still be handy for breaking stuff on a website, moving elements around, repositioning images, deleting UI elements, and undoing what was deleted, all in real time.

This could help if you're skeptical about the position of an element or feel a button might do better at the top than at the bottom. It sure beats rearranging elements in the codebase just to determine if an element positioned differently would look good. But again, most of the time, we're developing in a local environment where these things can be done just as effectively, so your mileage may vary as far as how useful you find document.designMode in your development work.

Client And Team Collaboration

It is a no-brainer that some clients almost always have last-minute change requests, stuff like "Can we remove this button?" or "Let's edit the pricing features in the free tier." To the client, these are just little tweaks, but to you, it could be a hassle to start up your development environment to make those changes. I believe document.designMode can assist in such cases by letting you make those changes in seconds without touching production and share screenshots with the client.

It could also be useful in team meetings when discussing UI changes.
Seeing changes in real time through screen sharing can help facilitate discussion and lead to quicker conclusions.

Live DOM Tutorials

For beginners learning web development, document.designMode can provide a first look at how it feels to manipulate a webpage and immediately see the results, sort of like a pre-web-development stage, even before touching a code editor. As learners experiment with moving things around, an instructor can explain how each change works and affects the flow of the page.

Social Media Content Preview

We can use the same idea to preview social media posts before publishing them! For instance, document.designMode can gauge the effectiveness of different call-to-action phrases or visualize how ad copy would look when users stumble upon it while scrolling. This works on any social media platform.

Memes

I didn't think it'd be fair not to add this. It might seem out of place, but let's be frank: creating memes is probably one of the first things that comes to mind when anyone discovers document.designMode. You can create parody versions of social posts, tweak article headlines, change product prices, and manipulate YouTube views or Reddit comments, just to name a few of the ways you could meme things. Just remember: this shouldn't be used to spread false information or cause actual harm. Please keep it respectful and ethical!

Conclusion

document.designMode = "on" is one of those delightful browser tricks that can be immediately useful when you discover it for the first time. It's a raw and primitive tool, but you can't deny its utility and purpose. So, give it a try, show it to your colleagues, or even edit this article. You never know when it might be exactly what you need.

Further Reading

- "New Front-End Features For Designers In 2025," Cosima Mielke
- "Useful DevTools Tips and Tricks," Patrick Brosset
- "Useful CSS Tips And Techniques," Cosima Mielke
  • Web Components Vs. Framework Components: What's The Difference?
    smashingmagazine.com
It might surprise you that a distinction exists regarding the word "component," especially in front-end development, where "component" is so often associated with front-end frameworks and libraries. A component is code that encapsulates a specific piece of functionality and presentation. Components in front-end applications share a common purpose: building reusable user interfaces. However, their implementations differ.

Web Components, also called framework-agnostic components, are standard web technologies for building reusable, self-contained HTML elements. They consist of Custom Elements, the Shadow DOM, and HTML template elements. Framework components, on the other hand, are reusable UIs explicitly tailored to the framework in which they are created. Unlike Web Components, which can be used in any framework, framework components are useless outside their frameworks.

Some critics question the agnostic nature of Web Components and even go so far as to state that they are not "real" components because they do not conform to the agreed-upon nature of components. This article comprehensively compares Web Components and framework components, examines the arguments regarding Web Components' agnosticism, and considers the performance aspects of both.

What Makes A Component?

Several criteria could be satisfied for a piece of code to be called a component, but only a few are essential:

- Reusability,
- Props and data handling,
- Encapsulation.

Reusability is the primary purpose of a component, as it emphasizes the DRY (don't repeat yourself) principle. A component should be designed to be reused in different parts of an application or across multiple applications. A component should also be able to accept data (in the form of props) from its parent components and optionally pass data back through callbacks or events. Components are regarded as self-contained units; therefore, they should encapsulate their logic, styles, and state.
If there's one thing we are certain of, it's that framework components capture these criteria well. But what about their counterparts, Web Components?

Understanding Web Components

Web Components are a set of web APIs that allow developers to create custom, reusable HTML tags that serve a specific function. Based on existing web standards, they permit developers to extend HTML with new elements, custom behavior, and encapsulated styling.

Web Components are built on three web specifications:

- Custom Elements,
- Shadow DOM,
- HTML templates.

Each specification can exist independently, but when combined, they produce a web component.

Custom Elements

The Custom Elements API makes provision for defining and using new types of DOM elements that can be reused.

// Define a Custom Element
class MyCustomElement extends HTMLElement {
  constructor() {
    super();
  }

  connectedCallback() {
    this.innerHTML = `<p>Hello from MyCustomElement!</p>`;
  }
}

// Register the Custom Element
customElements.define('my-custom-element', MyCustomElement);

Shadow DOM

The Shadow DOM has been around since before the concept of Web Components. Browsers have used a nonstandard version for years for default browser controls that are not regular DOM nodes. It is a part of the DOM that is at least less reachable than typical light DOM elements as far as JavaScript and CSS go; its contents are more encapsulated, like standalone elements.

// Create a Custom Element with Shadow DOM
class MyShadowElement extends HTMLElement {
  constructor() {
    super();
    this.attachShadow({ mode: 'open' });
  }

  connectedCallback() {
    this.shadowRoot.innerHTML = `
      <style>
        p { color: green; }
      </style>
      <p>Content in Shadow DOM</p>
    `;
  }
}

// Register the Custom Element
customElements.define('my-shadow-element', MyShadowElement);

HTML Templates

The HTML Templates API enables developers to write markup templates that are not rendered at page load but can be instantiated at runtime with JavaScript. HTML templates define the structure of Custom Elements in Web Components.
// my-component.js
export class MyComponent extends HTMLElement {
  constructor() {
    super();
    this.attachShadow({ mode: 'open' });
  }

  connectedCallback() {
    this.shadowRoot.innerHTML = `
      <style>
        p { color: red; }
      </style>
      <p>Hello from ES Module!</p>
    `;
  }
}

// Register the Custom Element
customElements.define('my-component', MyComponent);

<!-- Import the ES Module -->
<script type="module">
  import { MyComponent } from './my-component.js';
</script>

Web Components are often described as framework-agnostic because they rely on native browser APIs rather than being tied to any specific JavaScript framework or library. This means that Web Components can be used in any web application, regardless of whether it is built with React, Angular, Vue, or even vanilla JavaScript. Due to this supposed framework-agnostic nature, they can be created and integrated into any modern front-end framework and still function with little to no modification. But are they actually framework-agnostic?

The Reality Of Framework-Agnosticism In Web Components

Framework-agnosticism describes self-sufficient software (an element, in this case) that can be integrated into any framework with minimal or no modifications and still operate efficiently, as expected.

Web Components can be integrated into any framework, but not without changes that range from minimal to complex, especially around styles and HTML arrangement. Integration may also require additional configuration or polyfills for full browser support. This drawback is why some developers do not consider Web Components to be framework-agnostic. Nevertheless, beyond these configurations and edits, Web Components can fit into any front-end framework, including but not limited to React, Angular, and Vue.

Framework Components: Strengths And Limitations

Framework components are framework-specific reusable bits of code.
They are regarded as the building blocks of the frameworks on which they are built and possess several benefits over Web Components, including the following:

- An established ecosystem and community support,
- Developer-friendly integrations and tools,
- Comprehensive documentation and resources,
- Core functionality,
- Tested code,
- Fast development,
- Cross-browser support, and
- Performance optimizations.

Examples of commonly employed front-end framework components include React components, Vue components, and Angular directives. React supports a virtual DOM and one-way data binding, which allows for efficient updates and a component-based model. Vue is a lightweight framework with a flexible and easy-to-learn component system. Angular, unlike React, offers a two-way data binding component model with a TypeScript focus. Other front-end framework components include Svelte components, SolidJS components, and more.

Framework components are designed to operate within a specific JavaScript framework such as React, Vue, or Angular and, therefore, sit on top of that framework's architecture, APIs, and conventions. For instance, React components use JSX and React's state management, while Angular components leverage Angular's template syntax and dependency injection. As for benefits, they offer an excellent developer experience and strong performance; as for drawbacks, they are not flexible or reusable outside the framework.

In addition, a state known as vendor lock-in arises when developers become so reliant on a framework or library that they are unable to switch to another. This is possible with framework components because they are developed to operate only within the framework's environment.

Comparative Analysis

Framework and Web Components have their respective strengths and weaknesses and are appropriate for different scenarios. However, a comparative analysis based on several criteria can help draw the distinction between the two.

Encapsulation And Styling: Scoped Vs. Isolated

Encapsulation is a trademark of components, but Web Components and framework components handle it differently. Web Components provide isolated encapsulation with the Shadow DOM, which creates a separate DOM tree that shields a component's styles and structure from external manipulation. That ensures a Web Component will look and behave the same wherever it is used. However, this isolation can make it difficult for developers who need to customize styles, as external CSS cannot cross the Shadow DOM boundary without explicit workarounds (e.g., CSS custom properties).

Scoped styling is used by most frameworks, which limit CSS to a component using class names, CSS-in-JS, or module systems. While this discourages styles from leaking outwards, it does not entirely prevent external styles from leaking in, with the possibility of conflicts. Libraries like Vue and Svelte support scoped CSS by default, while React often falls back to libraries like styled-components.

Reusability And Interoperability

Web Components are better for reusable components that are intended for multiple frameworks or vanilla JavaScript applications. In addition, they are useful when the encapsulation and isolation of styles and behavior must be strict, or when you want to leverage native browser APIs without relying too heavily on other libraries. Framework components, however, are helpful when you need to leverage the features and optimizations provided by the framework (e.g., React's reconciliation algorithm, Angular's change detection) or take advantage of a mature ecosystem and tools. You can also use framework components if your team is already familiar with the framework and its conventions, since that will make your development process easier.

Performance Considerations

Another critical factor in choosing between Web and framework components is performance.
While both can be extremely performant, there are instances where one will be quicker than the other. For Web Components, native browser implementation can lead to optimized rendering and reduced overhead, but older browsers may require polyfills, which add to the initial load. While React and Angular provide specific optimizations (e.g., virtual DOM, change detection) that improve performance in large, dynamic applications, they add overhead due to the framework runtime and additional libraries.

Developer Experience

Developer experience is another fundamental consideration regarding Web Components versus framework components. Ease of use and the learning curve can play a large role in determining development time and manageability. The availability of tooling and community support can influence developer experience, too. Web Components use native browser APIs and, therefore, feel familiar to developers who know HTML, CSS, and JavaScript, but they have a steeper learning curve due to additional concepts like the Shadow DOM, Custom Elements, and templates. Also, Web Components have a smaller community and less community documentation compared to famous frameworks like React, Angular, and Vue.

Side-by-Side Comparison

Web Components benefits vs. framework components benefits:

- Native browser support can lead to efficient rendering and reduced overhead. / Frameworks like React and Angular provide specific optimizations (e.g., virtual DOM, change detection) that can improve performance for large, dynamic applications.
- Smaller bundle sizes and native browser support can lead to faster load times. / Frameworks often provide tools for optimizing bundle sizes and lazy loading components.
- Leverage native browser APIs, making them accessible to developers familiar with HTML, CSS, and JavaScript. / Extensive documentation makes it easier for developers to get started.
- Native browser support means fewer dependencies and the potential for better performance. / Rich ecosystem with extensive tooling, libraries, and community support.

Web Components drawbacks vs. framework components drawbacks:

- Older browsers may require polyfills, which can add to the initial load time. / Framework-specific components can add overhead due to the framework's runtime and additional libraries.
- Steeper learning curve due to additional concepts like Shadow DOM and Custom Elements. / Requires familiarity with the framework's conventions and APIs.
- Smaller ecosystem and fewer community resources compared to popular frameworks. / Tied to the framework, making it harder to switch to a different framework.

To summarize, the choice between Web Components and framework components depends on the specific needs of your project or team, which can include cross-framework reusability, performance, and developer experience.

Conclusion

Web Components are the standard for agnostic, interoperable, and reusable components. Although their base technologies need further upgrades and modifications to match the convenience of framework components, they are entitled to the title "components." Through a detailed comparative analysis, we've explored the strengths and weaknesses of Web Components and framework components, gaining insight into their differences. Along the way, we also uncovered useful workarounds for integrating Web Components into front-end frameworks for those interested in that approach.

References

- "What are Web Components?" (WebComponents.org)
- "Web Components Specifications" (WebComponents.org)
- "Web Components" (MDN)
- "Using Shadow DOM" (MDN)
- "Web Components Aren't Components," Keith J. Grant
  • How To Prevent WordPress SQL Injection Attacks
    smashingmagazine.com
Did you know that your WordPress site could be a target for hackers right now? That's right! Today, WordPress powers over 43% of all websites on the internet. That kind of market share makes WordPress sites a big target for hackers.

One of the most harmful ways they attack is through an SQL injection. An SQL injection may break your website, steal data, and destroy your content. More than that, it can lock you out of your website! Sounds scary, right? But don't worry, you can protect your site. That is what this article is about.

What Is SQL?

SQL stands for Structured Query Language. It is a way to talk to databases, which store and organize a lot of data, such as user details, posts, or comments on a website. SQL helps us ask the database for information or give it new data to store.

When writing an SQL query, you ask the database a question or give it a task. For example, if you want to see all users on your site, an SQL query can retrieve that list. SQL is powerful and vital since all WordPress sites use databases to store content.

What Is An SQL Injection Attack?

WordPress SQL injection attacks try to gain access to your site's database. An SQL injection (SQLi) lets hackers exploit a vulnerable SQL query to run a query they made. The attack occurs when a hacker tricks a database into running harmful SQL commands.

Hackers can send these commands via input fields on your site, such as those in login forms or search bars. If the website does not check input carefully, a command can grant access to the database. Imagine a hacker typing an SQL command instead of a username. It may fool the database and show private data such as passwords and emails. The attacker could then use that access to change or delete database data.

Your database holds all your user-generated data and content. It stores pages, posts, links, comments, and users. For the bad guys, it is a goldmine of valuable data. SQL injections are dangerous as they let hackers steal data or take control of a website.
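To see why the "SQL command instead of a username" trick works, here is a small language-neutral sketch in JavaScript (in a real WordPress context this would be PHP with prepared statements via $wpdb). The query builders below are toys for illustration, not a real database API: one splices user input straight into the SQL string, the other keeps SQL and values separate, the way parameterized queries do.

```javascript
// UNSAFE: user input becomes part of the SQL text itself.
function buildNaiveQuery(username) {
  return `SELECT * FROM users WHERE name = '${username}'`;
}

// SAFER PATTERN: the SQL stays fixed; the driver sends the value
// separately, so the input is always treated as data, never as code.
function buildSafeQuery(username) {
  return { sql: 'SELECT * FROM users WHERE name = ?', params: [username] };
}

const attack = "' OR '1'='1";
console.log(buildNaiveQuery(attack));
// The injected quotes rewrite the WHERE clause so it matches every row.
console.log(buildSafeQuery(attack));
// Here the payload sits inert inside params and cannot alter the query.
```

This is the core idea behind all the injection types described next; they differ only in how the attacker observes the result.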
A WordPress firewall helps prevent SQL injection attacks, which can compromise a site very quickly.

SQL Injections: Three Main Types

There are three main kinds of SQL injection attacks. Each type works in a different way, but they all try to fool the database. Let's look at each one.

In-Band SQLi

This is perhaps the most common type of attack. The hacker sends a command and receives the result over the same communication channel: they make a request and get the answer right away.

There are two types of in-band SQLi attacks:

- Error-based SQLi
- Union-based SQLi

With error-based SQLi, the hacker causes the database to produce an error message. This message may reveal crucial data, such as the database structure and settings.

What about union-based SQLi attacks? The hacker uses the SQL UNION statement to combine their request with a standard query, which can give them access to other data stored in the database.

Inferential SQLi

With inferential SQLi, the hacker does not see the results at once. Instead, they send queries that produce yes or no answers. By observing how the site responds, hackers can reveal the database structure or its data.

They do that in two common ways:

- Boolean-based SQLi
- Time-based SQLi

Through Boolean-based SQLi, the hacker sends queries that can only be true or false, for example: is this user ID greater than 100? This lets hackers gather more data about the site based on how it reacts.

In time-based SQLi, the hacker sends a query that makes the database take longer to reply if the answer is yes. They can figure out what they need to know from the delay.

Out-of-Band SQLi

Out-of-band SQLi is a less common but equally dangerous type of attack. Hackers use a different channel to get the results, usually by connecting the database to a server they control.

The hacker does not see the results directly. Instead, the data is sent somewhere else, via email or a network connection.
This method is used when the site blocks ordinary SQL injection attempts.

Why Preventing SQL Injection Is Crucial

SQL injections are a giant risk for websites. They can lead to all kinds of harm: stolen data, website damage, legal issues, loss of trust, and more.

Hackers can steal data like usernames, passwords, and emails. They can cause damage by deleting or changing your data, and they can mess up your site's structure, making it unusable.

Has your user data been stolen? You might face legal trouble if your site handles sensitive data. People may lose trust in you if they see that your site has been hacked, and your site's reputation can suffer as a result.

That is why it is so vital to prevent SQL injections before they occur.

11 Ways To Prevent WordPress SQL Injection Attacks

OK, so we know what SQL is and that WordPress relies on it. We also know that attackers take advantage of SQL vulnerabilities. I've collected 11 tips for keeping your WordPress site free of SQL injections. These tips limit your vulnerability and secure your site against SQL injection attacks.

1. Validate User Input

SQL injection attacks usually arrive via forms or input fields on your site: a login form, a search box, a contact form, or a comment section. If a hacker enters malicious SQL commands into one of these fields, they may fool your site into running those commands against your database.

Hence, always sanitize and validate all input data on your site. Users should not be able to submit data that does not follow a specific format. The easiest way to enforce this is to use a plugin like Formidable Forms, an advanced form builder. That said, WordPress also has many built-in functions to sanitize and validate input on your own, including sanitize_text_field(), sanitize_email(), and sanitize_url().

Validation cleans up user input before it is sent to your database. These functions strip out unwanted characters and ensure the data is safe to store.
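As an illustration of the idea, here is a small Python sketch of sanitization and whitelist validation. These helpers are hypothetical analogs of WordPress's sanitize_text_field() and sanitize_email(), not their actual implementations:

```python
import re

def sanitize_text(value):
    # Rough analog of WordPress's sanitize_text_field():
    # strip tags, control characters, and extra whitespace.
    value = re.sub(r"<[^>]*>", "", value)          # remove HTML tags
    value = re.sub(r"[\x00-\x1f\x7f]", "", value)  # remove control characters
    return " ".join(value.split())                 # collapse whitespace

def is_valid_email(value):
    # Whitelist validation: reject anything that does not match the format.
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[a-zA-Z]{2,}", value) is not None

print(sanitize_text("  <script>alert(1)</script> admin  "))  # alert(1) admin
print(is_valid_email("user@example.com"))                    # True
print(is_valid_email("' OR '1'='1"))                         # False
```

The point is the pattern, not the exact regexes: clean every field before it touches the database, and reject input that does not match the format you expect.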
2. Avoid Dynamic SQL

Dynamic SQL lets you build SQL statements on the fly at runtime. Compared to static SQL, it lets you create flexible, general queries adjusted to various conditions. As a trade-off, dynamic SQL is typically slower than static SQL, since it demands runtime parsing.

Dynamic SQL can also be more vulnerable to SQL injection attacks. The attack occurs when a bad actor alters a query by injecting malicious SQL code and the database runs that harmful code. As a result, the attacker can access data, corrupt it, or even take over your entire database.

How do you keep your WordPress site safe? Use prepared statements, stored procedures, or parameterized queries.

3. Regularly Update WordPress, Themes, And Plugins

Keeping WordPress and all plugins updated is the first step in keeping your site safe. Hackers often look for old software versions with known security issues.

WordPress, themes, and plugins receive regular security updates that fix these issues. If you ignore those updates, you leave your site open to attacks.

To stay safe, set up automatic updates for minor WordPress versions, and check for theme and plugin updates often. Only use trusted plugins from the official WordPress directory or well-known developers.

By updating often, you close many of the ways hackers could attack.

4. Add A WordPress Firewall

A firewall is one of the best ways to keep your WordPress website safe. It acts as a shield for your site and a security guard that checks all incoming traffic, deciding who can enter your site and who gets blocked.

There are five main types of WordPress firewalls:

- Plugin-based firewalls
- Web application firewalls
- Cloud-based firewalls
- DNS-level firewalls
- Application-level firewalls

Plugin-based firewalls are installed on your WordPress site and work from within your website to block bad traffic. Web application firewalls filter, check, and block traffic to and from a web service.
They detect and defend against risky security flaws that are most common in web traffic. Cloud-based firewalls work from outside your site and block bad traffic before it even reaches your site. DNS-level firewalls send your site's traffic through their cloud proxy servers and only direct real traffic to your web server. Finally, application-level firewalls check the traffic as it reaches your server, before most of the WordPress scripts are loaded.

Established security plugins like Sucuri and Wordfence can also act as firewalls.

5. Hide Your WordPress Version

Older WordPress versions display the WordPress version in the admin footer. Showing your WordPress version is not always a bad thing, but revealing it does provide virtual ammo to hackers who want to exploit vulnerabilities in outdated WordPress versions.

Are you using an older WordPress version? You can still hide your WordPress version:

- with a security plugin such as Sucuri or Wordfence that clears the version number, or
- by adding a small snippet to your theme's functions.php file:

function hide_wordpress_version() {
  return '';
}
add_filter('the_generator', 'hide_wordpress_version');

This code stops your WordPress version number from showing in the theme's header.php file and RSS feeds. It adds a small but helpful layer of security by making your version more difficult for hackers to detect.

6. Make Custom Database Error Notices

Bad guys can learn how your database is set up from error notices. To stop that, create a custom database error notice for users to see. When you hide error details, hackers find it harder to detect weak spots in your site, and the site stays much safer when less data is shown on the front end.

To do that, copy and paste the code into a new db-error.php file.
Jeff Starr has a classic article on the topic from 2009 with an example:

<?php
// Custom WordPress Database Error Page
header('HTTP/1.1 503 Service Temporarily Unavailable');
header('Status: 503 Service Temporarily Unavailable');
header('Retry-After: 600'); // 10 minutes = 600 seconds

// If you want to send an email to yourself upon an error:
// mail("your@email.com", "Database Error", "There is a problem with the database!", "From: Db Error Watching");
?>
<!DOCTYPE html>
<html>
<head>
  <title>Database Error</title>
  <style>
    body { padding: 50px; background: #04A9EA; color: #fff; font-size: 30px; }
    .box { display: flex; align-items: center; justify-content: center; }
  </style>
</head>
<body>
  <div class="box">
    <h1>Something went wrong</h1>
  </div>
</body>
</html>

Now save the file in the root of your /wp-content/ folder for it to take effect.

7. Set Access And Permission Limits For User Roles

Assign only the permissions each role needs to do its tasks. For example, Editors may not need access to the WordPress database or plugin settings. Improve site security by giving full dashboard access to the admin role only. Limiting features to fewer roles reduces the odds of an SQL injection attack.

8. Enable Two-Factor Authentication

A great way to protect your WordPress site is to use two-factor authentication (2FA). Why? Because it adds an extra layer of security to your login page. Even if a hacker cracks your password, they still won't be able to log in without access to the 2FA code.

Setting up 2FA on WordPress goes like this:

1. Install a two-factor authentication plugin. Google Authenticator by miniOrange, Two-Factor, and WP 2FA by Melapress are good options.
2. Pick your authentication method. The plugins often offer three choices: SMS codes, authentication apps, or security keys.
3. Link your account. Are you using Google Authenticator? Open it and scan the QR code in the plugin settings to connect it.
If you use SMS, enter your phone number and get codes via text.
4. Test it. Log out of WordPress and try to log in again. First, enter your username and password as always. Then complete the 2FA step and type in the code you receive.
5. Enable backup codes (optional). Some plugins let you generate backup codes. Save these in a safe spot in case you lose access to your phone or email.

9. Delete All Unneeded Database Functions

Make sure to delete tables you no longer use and remove junk or unapproved comments. Your database will be more resistant to hackers who try to exploit leftover data.

10. Monitor Your Site For Unusual Activity

Watch for unusual activity on your site, such as many failed login attempts or strange traffic spikes. Security plugins such as Wordfence or Sucuri alert you when something seems odd, which helps you catch issues before they get worse.

11. Back Up Your Site Regularly

Running regular backups is crucial. With a backup, you can quickly restore your site to its previous state if it gets hacked. Do this any time you make a significant update to your site, including theme and plugin updates.

Create a backup plan that suits your needs. For example, if you publish new content every day, it may be a good idea to back up your database and files daily.

Many security plugins offer automated backups, and you can also use dedicated backup plugins like UpdraftPlus or Solid Security. Store backup copies in several locations, such as Dropbox and Google Drive, for extra peace of mind.

How To Remove SQL Injection From Your Site

Let's say you are already under attack and are dealing with an active SQL injection on your site. The preventative measures we've covered won't help much at that point. Here's what you can do to fight back and defend your site:

- Check your database for changes.
Look for strange entries in user accounts, content, or plugin settings.
- Erase malicious code. Scan your site with a security plugin like Wordfence or Sucuri to find and remove harmful code.
- Restore a clean backup. Is the damage vast? Restoring your site from an existing backup could be the best option.
- Change all passwords. Change your passwords for the WordPress admin, the database, and the hosting account.
- Harden your site security. After cleaning your site, take the 11 steps we covered earlier to prevent future attacks.

Conclusion

Hackers love weak sites. They look for easy ways to break in, steal data, and cause harm. One of the tricks they often use is SQL injection. If they find a way in, they can steal private data, alter your content, or even take over your site. That's bad news both for you and your visitors.

But here is the good news: you can stop them! It is possible to block these attacks before they happen by taking the right steps, and you don't need to be a tech wizard.

Many people ignore website security until it's too late. They think, "Why would a hacker target my site?" But hackers don't only attack big sites; they attack any site with weak security. So even small blogs and new websites are at risk. Once a hacker gets in, they can cause a lot of damage. Fixing a hacked site takes time, effort, and money. But stopping an attack before it happens? That's much easier.

Hackers don't sit and wait, so why should you? Thousands of sites get attacked daily; don't let yours be next. Update your site, add a firewall, enable 2FA, and check your security settings. These small steps can help prevent giant issues in the future.

Your site needs protection against the bad guys. You have worked hard to build it. Never neglect to update and protect it, and your site will be safer and sounder.
  • How To Fix Largest Contentful Paint Issues With Subpart Analysis
    smashingmagazine.com
This article is sponsored by DebugBear.

The Largest Contentful Paint (LCP) metric in Core Web Vitals measures how quickly a website loads from a visitor's perspective. It looks at how long after opening a page the largest content element becomes visible. If your website is loading slowly, that's bad for user experience and can also cause your site to rank lower in Google.

When trying to fix LCP issues, it's not always clear what to focus on. Is the server too slow? Are images too big? Is the content not being displayed? Google has been working to address that recently by introducing LCP subparts, which tell you where page load delays are coming from. They've also added this data to the Chrome UX Report, allowing you to see what causes delays for real visitors on your website!

Let's take a look at what the LCP subparts are, what they mean for your website speed, and how you can measure them.

The Four LCP Subparts

LCP subparts split the Largest Contentful Paint metric into four components:

- Time to First Byte (TTFB): how quickly the server responds to the document request.
- Resource Load Delay: time spent before the LCP image starts to download.
- Resource Load Time: time spent downloading the LCP image.
- Element Render Delay: time before the LCP element is displayed.

The resource timings only apply if the largest page element is an image or background image. For text elements, the Load Delay and Load Time components are always zero.

How To Measure LCP Subparts

One way to measure how much each component contributes to the LCP score on your website is to use DebugBear's website speed test. Expand the Largest Contentful Paint metric to see the subparts and other details related to your LCP score.

Here, we can see that TTFB and image Load Duration together account for 78% of the overall LCP score. That tells us that these two components are the most impactful places to start optimizing.

What's happening during each of these stages?
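Before digging into each stage, note that the four subparts simply partition the time from navigation start to the LCP paint. A small Python sketch makes the arithmetic explicit (the timings below are made up for illustration; real values would come from a tool like DebugBear or the browser's performance APIs):

```python
# Hypothetical milestone timings, in milliseconds from navigation start.
ttfb = 600                  # server responds with the HTML
image_request_start = 1400  # LCP image request is issued
image_response_end = 2100   # LCP image finishes downloading
lcp_paint = 2300            # LCP element is rendered

# The four subparts are the gaps between consecutive milestones.
subparts = {
    "TTFB": ttfb,
    "Load Delay": image_request_start - ttfb,
    "Load Time": image_response_end - image_request_start,
    "Render Delay": lcp_paint - image_response_end,
}

for name, ms in subparts.items():
    print(f"{name:>12}: {ms:5d} ms ({ms / lcp_paint:5.1%})")

# The subparts always sum to the LCP value itself.
assert sum(subparts.values()) == lcp_paint
```

Because the subparts sum to the full LCP value, the biggest subpart is, by definition, the biggest optimization opportunity.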
A network request waterfall can help us understand what resources are loading through each stage.

The LCP Image Discovery view filters the waterfall visualization to just the resources that are relevant to displaying the Largest Contentful Paint image. In this case, each of the first three stages contains one request, and the final stage finishes quickly with no new resources loaded. But that depends on your specific website and won't always be the case.

Time To First Byte

The first step to displaying the largest page element is fetching the document HTML. We recently published an article about how to improve the TTFB metric.

In this example, we can see that creating the server connection doesn't take all that long. Most of the time is spent waiting for the server to generate the page HTML. So, to improve the TTFB, we need to speed up that process or cache the HTML so we can skip the HTML generation entirely.

Resource Load Delay

The resource we want to load is the LCP image. Ideally, we just have an <img> tag near the top of the HTML, and the browser finds it right away and starts loading it.

But sometimes we get a Load Delay, as is the case here. Instead of loading the image directly, the page uses lazysizes, an image lazy loading library that only loads the LCP image once it has detected that it will appear in the viewport.

Part of the Load Delay is caused by having to download that JavaScript library. But the browser also needs to complete the page layout and start rendering content before the library will know that the image is in the viewport. After finishing the request, there's a CPU task (in orange) that leads up to the First Contentful Paint milestone, when the page starts rendering. Only then does the library trigger the LCP image request.

How do we optimize this? First of all, instead of using a lazy loading library, you can use the native loading="lazy" image attribute.
That way, loading images no longer depends on first loading JavaScript code.

More importantly, the LCP image should not be lazily loaded at all. That way, the browser can start loading it as soon as the HTML code is ready. According to Google, you should aim to eliminate resource load delay entirely.

Resource Load Duration

The Load Duration subpart is probably the most straightforward: you need to download the LCP image before you can display it!

In this example, the image is loaded from the same domain as the HTML. That's good because the browser doesn't have to connect to a new server.

Other techniques you can use to reduce load duration:

- Use a modern image format that provides better compression.
- Load images at a size that matches the size they are displayed at.
- Deprioritize other resources that might compete with the LCP image.

Element Render Delay

The fourth and final LCP component, Render Delay, is often the most confusing. The resource has loaded, but for some reason, the browser isn't ready to show it to the user yet!

Luckily, in the example we've been looking at so far, the LCP image appears quickly after it's been loaded. One common reason for render delay is that the LCP element is not an image. In that case, the render delay is caused by render-blocking scripts and stylesheets. The text can only appear after these have loaded and the browser has completed the rendering process.

Another reason you might see render delay is when the website preloads the LCP image. Preloading is a good idea, as it practically eliminates any load delay and ensures the image is loaded early.

However, if the image finishes downloading before the page is ready to render, you'll see an increase in render delay on the page. And that's fine!
You've improved your website speed overall, but after optimizing your image, you've uncovered a new bottleneck to focus on.

LCP Subparts In Real User CrUX Data

Looking at the Largest Contentful Paint subparts in lab-based tests can provide a lot of insight into where you can optimize. But all too often, the LCP in the lab doesn't match what's happening for real users!

That's why, in February 2025, Google started including subpart data in the CrUX data report. It's not (yet?) included in PageSpeed Insights, but you can see those metrics in DebugBear's Web Vitals tab.

One super useful bit of info here is the LCP resource type: it tells you how many visitors saw the LCP element as a text element or as an image.

Even for the same page, different visitors will see slightly different content. For example, different elements are visible based on the device size, or some visitors will see a cookie banner while others see the actual page content.

To make the data easier to interpret, Google only reports subpart data for images. If the LCP element is usually text on the page, then the subparts info won't be very helpful, as it won't apply to most of your visitors. But breaking down a text LCP is relatively easy: everything that's not part of the TTFB score is render delay.

Track Subparts On Your Website With Real User Monitoring

Lab data doesn't always match what real users experience. CrUX data is superficial, only reported for high-traffic pages, and takes at least 4 weeks to fully update after a change has been rolled out.

That's why a real-user monitoring tool like DebugBear comes in handy when fixing your LCP scores. You can track scores across all pages on your website over time and get dedicated dashboards for each LCP subpart.

You can also review specific visitor experiences, see what the LCP image was for them, inspect a request waterfall, and check LCP subpart timings.
Sign up for a free trial.

Conclusion

Having more granular metric data available for the Largest Contentful Paint gives web developers a big leg up when making their websites faster. Including subparts in CrUX provides new insight into how real visitors experience your website and can tell you whether the optimizations you're considering would really be impactful.
  • How To Build Confidence In Your UX Work
    smashingmagazine.com
When I start any UX project, there is typically very little confidence in the successful outcome of my UX initiatives. In fact, there is quite a lot of reluctance and hesitation, especially from teams that have been burnt by empty promises and poor delivery in the past.

Good UX has a huge impact on business. But often, we need to build up confidence in our upcoming UX projects. For me, an effective way to do that is to address critical bottlenecks and uncover hidden deficiencies, the ones that affect the people I'll be working with.

Let's take a closer look at what this can look like.

This article is part of our ongoing series on UX. You can find more details on design patterns and UX strategy in Smart Interface Design Patterns, with live UX training coming up soon. Free preview.

UX Doesn't Disrupt, It Solves Problems

Bottlenecks are usually the most disruptive part of any company. Almost every team, every unit, and every department has one. It's often well known by employees, as they complain about it, but it rarely finds its way to senior management, who are detached from daily operations.

The bottleneck can be the only senior developer on the team, a broken legacy tool, or a confusing flow that throws errors left and right. There's always a bottleneck, and it's usually the reason for long waiting times, delayed delivery, and cutting corners in all the wrong places.

We might not be able to fix the bottleneck. But for a smooth flow of work, we need to ensure that non-constraint resources don't produce more than the constraint can handle. All processes and initiatives must be aligned to support and maximize the efficiency of the constraint.

So before doing any UX work, look out for things that slow down the organization. Show that it's not UX work that disrupts work, but internal disruptions that UX can help with.
And once you've delivered even a tiny bit of value, you might be surprised how quickly people will want to see more of what you have in store for them.

The Work Is Never Just The Work

Meetings, reviews, experimentation, pitching, deployment, support, updates, fixes: unplanned work blocks other work from being completed. Exposing the root causes of unplanned work and finding the critical bottlenecks that slow down delivery is not only the first step we need to take when we want to improve existing workflows; it is also a good starting point for showing the value of UX.

To learn more about the points that create friction in people's day-to-day work, set up 1:1s with the team and ask them what slows them down. Find a problem that affects everyone. Perhaps too much work in progress results in late delivery and low quality? Or lengthy meetings steal precious time?

One frequently overlooked detail is that we can't manage work that is invisible. That's why it is so important to visualize the work first. Once we know the bottleneck, we can suggest ways to improve it. For example, we could introduce 20% idle time if the workload is too high, or make meetings slightly shorter to make room for other work.

The Theory Of Constraints

The idea that the work is never just the work is deeply connected to the Theory of Constraints formulated by Dr. Eliyahu M. Goldratt. It shows that any improvement made anywhere besides the bottleneck is an illusion.

Any improvement after the bottleneck is useless because that part of the system will always remain starved, waiting for work from the bottleneck. And any improvement made before the bottleneck only results in more work piling up at the bottleneck.

Wait Time = Busy / Idle

To improve flow, sometimes we need to freeze the work and bring focus to one single project. Just as important as throttling the release of work is managing the handoffs.
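The busy/idle arithmetic behind this section's formula can be sketched in a few lines of Python (the utilization levels are illustrative):

```python
def wait_time(utilization):
    # Wait-time ratio = fraction of time busy / fraction of time idle.
    busy = utilization
    idle = 1.0 - utilization
    return busy / idle

for u in (0.50, 0.90, 0.99):
    print(f"{u:.0%} utilized -> wait time {wait_time(u):.0f}x")
```

The output climbs from 1x at 50% utilization to 9x at 90% and 99x at 99%, which is why a fully booked resource makes everything queued behind it crawl.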
The wait time for a given resource is the percentage of time the resource is busy divided by the percentage of time it's idle. If a resource is 50% utilized, the wait time is 50/50, or 1 unit.

If the resource is 90% utilized, the wait time is 90/10, so 9 times longer. And if it's utilized 99% of the time, it's 99/1, so 99 times longer than if that resource were 50% utilized. The critical part is to make wait times visible so you know when your work spends days sitting in someone's queue. The exact times don't matter, but if a resource is busy 99% of the time, the wait time will explode.

Avoid 100% Occupation

Our goal is to maximize flow: that means exploiting the constraint while creating idle times for non-constraints to optimize system performance.

One surprising finding for me was that any attempt to maximize the utilization of all resources, i.e. 100% occupation across all departments, can actually be counterproductive. As Goldratt noted, "An hour lost at a bottleneck is an hour out of the entire system. An hour saved at a non-bottleneck is worthless."

Recommended Read: The Phoenix Project

I can only wholeheartedly recommend The Phoenix Project, an absolutely incredible book that goes into all the fine details of the Theory of Constraints described above.

It's not a design book, but it is a great book for designers who want to be more strategic about their work. It's a delightful and very real read about the struggles of shipping (albeit on a more technical side).

Wrapping Up

People don't like sudden changes and uncertainty, and UX work often disrupts their usual ways of working. Unsurprisingly, most people tend to block it by default. So before we introduce big changes, we need to get their support for our UX initiatives.

We need to build confidence and show them the value that UX work can have for their day-to-day work. To achieve that, we can work together with them.
We can listen to the pain points they encounter in their workflows and to the things that slow them down.

Once we've uncovered internal disruptions, we can tackle these critical bottlenecks and suggest steps to make existing workflows more efficient. That's the foundation for gaining their trust and showing them that UX work doesn't disrupt; it's here to solve problems.

New: How To Measure UX And Design Impact

Meet Measure UX & Design Impact (8h), a practical guide for designers and UX leads to measure and show your UX impact on business. Watch the free preview or jump to the details.

Video + UX Training: $495.00 $799.00. 25 video lessons (8h) + Live UX Training. 100 days money-back guarantee.
Video only: $250.00 $395.00. 25 video lessons (8h). Updated yearly. Also available as a UX Bundle with 2 video courses.
  • How To Fix Largest Contentful Issues With Subpart Analysis
    smashingmagazine.com
    This article is a sponsored by DebugBearThe Largest Contentful Paint (LCP) in Core Web Vitals measures how quickly a website loads from a visitors perspective. It looks at how long after opening a page the largest content element becomes visible. If your website is loading slowly, thats bad for user experience and can also cause your site to rank lower in Google.When trying to fix LCP issues, its not always clear what to focus on. Is the server too slow? Are images too big? Is the content not being displayed? Google has been working to address that recently by introducing LCP subparts, which tell you where page load delays are coming from. Theyve also added this data to the Chrome UX Report, allowing you to see what causes delays for real visitors on your website!Lets take a look at what the LCP subparts are, what they mean for your website speed, and how you can measure them.The Four LCP SubpartsLCP subparts split the Largest Contentful Paint metric into four different components:Time to First Byte (TTFB): How quickly the server responds to the document request.Resource Load Delay: Time spent before the LCP image starts to download.Resource Load Time: Time spent downloading the LCP image.Element Render Delay: Time before the LCP element is displayed.The resource timings only apply if the largest page element is an image or background image. For text elements, the Load Delay and Load Time components are always zero.How To Measure LCP SubpartsOne way to measure how much each component contributes to the LCP score on your website is to use DebugBears website speed test. Expand the Largest Contentful Paint metric to see subparts and other details related to your LCP score.Here, we can see that TTFB and image Load Duration together account for 78% of the overall LCP score. That tells us that these two components are the most impactful places to start optimizing.Whats happening during each of these stages? 
A network request waterfall can help us understand what resources are loading through each stage.The LCP Image Discovery view filters the waterfall visualization to just the resources that are relevant to displaying the Largest Contentful Paint image. In this case, each of the first three stages contains one request, and the final stage finishes quickly with no new resources loaded. But that depends on your specific website and wont always be the case.Time To First ByteThe first step to display the largest page element is fetching the document HTML. We recently published an article about how to improve the TTFB metric.In this example, we can see that creating the server connection doesnt take all that long. Most of the time is spent waiting for the server to generate the page HTML. So, to improve the TTFB, we need to speed up that process or cache the HTML so we can skip the HTML generation entirely.Resource Load DelayThe resource we want to load is the LCP image. Ideally, we just have an <img> tag near the top of the HTML, and the browser finds it right away and starts loading it.But sometimes, we get a Load Delay, as is the case here. Instead of loading the image directly, the page uses lazysize.js, an image lazy loading library that only loads the LCP image once it has detected that it will appear in the viewport.Part of the Load Delay is caused by having to download that JavaScript library. But the browser also needs to complete the page layout and start rendering content before the library will know that the image is in the viewport. After finishing the request, theres a CPU task (in orange) that leads up to the First Contentful Paint milestone, when the page starts rendering. Only then does the library trigger the LCP image request.How do we optimize this? First of all, instead of using a lazy loading library, you can use the native loading="lazy" image attribute. 
That way, loading images no longer depends on first loading JavaScript code.But more specifically, the LCP image should not be lazily loaded. That way, the browser can start loading it as soon as the HTML code is ready. According to Google, you should aim to eliminate resource load delay entirely.Resources Load DurationThe Load Duration subpart is probably the most straightforward: you need to download the LCP image before you can display it!In this example, the image is loaded from the same domain as the HTML. Thats good because the browser doesnt have to connect to a new server.Other techniques you can use to reduce load delay:Use a modern image format that provides better compression.Load images at a size that matches the size they are displayed at.Deprioritize other resources that might compete with the LCP image.Element Render DelayThe fourth and final LCP component, Render Delay, is often the most confusing. The resource has loaded, but for some reason, the browser isnt ready to show it to the user yet!Luckily, in the example weve been looking at so far, the LCP image appears quickly after its been loaded. One common reason for render delay is that the LCP element is not an image. In that case, the render delay is caused by render-blocking scripts and stylesheets. The text can only appear after these have loaded and the browser has completed the rendering process.Another reason you might see render delay is when the website preloads the LCP image. Preloading is a good idea, as it practically eliminates any load delay and ensures the image is loaded early.However, if the image finishes downloading before the page is ready to render, youll see an increase in render delay on the page. And thats fine! 
You've improved your website speed overall, but after optimizing your image, you've uncovered a new bottleneck to focus on.

LCP Subparts In Real User CrUX Data
Looking at the Largest Contentful Paint subparts in lab-based tests can provide a lot of insight into where you can optimize. But all too often, the LCP in the lab doesn't match what's happening for real users! That's why, in February 2025, Google started including subpart data in the CrUX data report. It's not (yet?) included in PageSpeed Insights, but you can see those metrics in DebugBear's Web Vitals tab.

One super useful bit of info here is the LCP resource type: it tells you how many visitors saw the LCP element as a text element or an image. Even for the same page, different visitors will see slightly different content. For example, different elements are visible based on the device size, or some visitors will see a cookie banner while others see the actual page content.

To make the data easier to interpret, Google only reports subpart data for images. If the LCP element is usually text on the page, then the subparts info won't be very helpful, as it won't apply to most of your visitors. But breaking down text LCP is relatively easy: everything that's not part of the TTFB score is render delay.

Track Subparts On Your Website With Real User Monitoring
Lab data doesn't always match what real users experience. CrUX data is superficial, only reported for high-traffic pages, and takes at least 4 weeks to fully update after a change has been rolled out. That's why a real-user monitoring tool like DebugBear comes in handy when fixing your LCP scores. You can track scores across all pages on your website over time and get dedicated dashboards for each LCP subpart. You can also review specific visitor experiences, see what the LCP image was for them, inspect a request waterfall, and check LCP subpart timings.
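The four subparts always add up to the total LCP time. Here is a small helper sketching that arithmetic, using timing values you could collect from the Navigation Timing and Resource Timing APIs (the function name and shape are illustrative, not a DebugBear or browser API):

```javascript
// Split a total LCP time into Google's four subparts (all values in ms).
// ttfb: the document's responseStart; requestStart/responseEnd: timings of
// the LCP image resource; lcpTime: the LCP entry's startTime.
function lcpSubparts({ ttfb, requestStart, responseEnd, lcpTime }) {
  return {
    timeToFirstByte: ttfb,
    resourceLoadDelay: Math.max(0, requestStart - ttfb),
    resourceLoadDuration: Math.max(0, responseEnd - requestStart),
    elementRenderDelay: Math.max(0, lcpTime - responseEnd),
  };
}

// Example: a slow server (600ms TTFB) plus a lazy-loaded LCP image.
console.log(lcpSubparts({ ttfb: 600, requestStart: 1400, responseEnd: 1900, lcpTime: 2000 }));
// → { timeToFirstByte: 600, resourceLoadDelay: 800, resourceLoadDuration: 500, elementRenderDelay: 100 }
```

In this example, most of the 2-second LCP is load delay, which is exactly the pattern a lazy-loading library produces.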
Sign up for a free trial.

Conclusion
Having more granular metric data available for the Largest Contentful Paint gives web developers a big leg up when making their websites faster. Including subparts in CrUX provides new insight into how real visitors experience your website and can tell you if the optimizations you're considering would really be impactful.
  • The Case For Minimal WordPress Setups: A Contrarian View On Theme Frameworks
    smashingmagazine.com
When it comes to custom WordPress development, theme frameworks like Sage and Genesis have become a go-to solution, particularly for many agencies that rely on frameworks as an efficient starting point for client projects. They promise modern standards, streamlined workflows, and maintainable codebases. At face value, these frameworks seem to be the answer to building high-end, bespoke WordPress websites. However, my years of inheriting these builds as a freelance developer tell a different story, one rooted in the reality of long-term maintenance, scalability, and developer onboarding.

As someone who specializes in working with professional websites, I'm frequently handed projects originally built by agencies using these frameworks. This experience has given me a unique perspective on the real-world implications of these tools over time. While they may look great in an initial pitch, their complexities often create friction for future developers, maintenance teams, and even the businesses they serve. This is not to say frameworks like Sage or Genesis are without merit, but they are far from the universal best practice they're often touted to be.

Below, I'll share the lessons I've learned from inheriting and working with these setups, the challenges I've faced, and why I believe a minimal WordPress approach often provides a better path forward.

Why Agencies Use Frameworks
Frameworks are designed to make WordPress development faster, cleaner, and optimized for current best practices.
Agencies are drawn to these tools for several reasons:

- Current code standards: Frameworks like Sage adopt PSR-2 standards, Composer-based dependency management, and MVC-like abstractions.
- Reusable components: Sage's Blade templating encourages modularity, while Genesis relies on hooks for extensive customization.
- Streamlined design tools: Integration with Tailwind CSS, SCSS, and Webpack (or newer tools like Bud) allows rapid prototyping.
- Optimized performance: Frameworks are typically designed with lightweight, bloat-free themes in mind.
- Team productivity: By creating a standardized approach, these frameworks promise efficiency for larger teams with multiple contributors.

On paper, these benefits make frameworks an enticing choice for agencies. They simplify the initial build process and cater to developers accustomed to working with modern PHP practices and JavaScript-driven tooling. But whenever I inherit these projects years later, the cracks in the foundation begin to show.

The Reality Of Maintaining Framework-Based Builds
While frameworks have their strengths, my firsthand experience reveals recurring issues that arise when it's time to maintain or extend these builds. These challenges aren't theoretical; they are issues I've encountered repeatedly when stepping into an existing framework-based site.

1. Abstraction Creates Friction
One of the selling points of frameworks is their use of abstractions, such as Blade templating and controller-to-view separation. While these patterns make sense in theory, they often lead to unnecessary complexity in practice. For instance, Blade templates abstract PHP logic from WordPress's traditional theme hierarchy. This means errors like syntax issues don't provide clear stack traces pointing to the actual view file; rather, they reference compiled templates. Debugging becomes a scavenger hunt, especially for developers unfamiliar with Sage's structure.

Take puck.news, a popular news outlet with millions of monthly visitors.
When I first inherited their Sage-based theme, I had to bypass their Lando/Docker environment to use my own minimal Nginx localhost setup. The theme was incompatible with standard WordPress workflows, and I had to modify build scripts to support a traditional installation. Once I resolved the environment issues, I realized their build process was incredibly slow, with hot module replacement only partially functional (Blade template changes wouldn't reload). Each save took 45 seconds to compile.

Faced with a decision to either upgrade to Sage 10 or rebuild the critical aspects, I opted for the latter. We drastically improved performance by replacing the Sage build with a simple Laravel Mix process. The new build process was reduced from thousands of lines to 80, significantly improving developer workflow. Any new developer could now understand the setup quickly, and future debugging would be far simpler.

2. Inflexible Patterns
While Sage encourages best practices, these patterns can feel rigid and over-engineered for simple tasks. Customizing basic WordPress features like adding a navigation menu or tweaking a post query requires following the framework's prescribed patterns. This introduces a learning curve for developers who aren't deeply familiar with Sage and slows down progress for minor adjustments. Traditional WordPress theme structures, by contrast, are intuitive and widely understood. Any WordPress developer, regardless of background, can jump into a classic theme and immediately know where to look for templates, logic, and customizations. Sage's abstraction layers, while well-meaning, limit accessibility to a smaller, more niche group of developers.

3. Hosting Compatibility Issues
When working with Sage, issues with hosting environments are inevitable. For example, Sage's use of Laravel Blade compiles templates into cached PHP files, often stored in directories like /wp-content/cache.
Strict file system rules on managed hosting platforms, like WP Engine, can block these writes, leading to white screens or broken templates after deployment. This was precisely the issue I faced with paperlessparts.com, which was running a Sage theme on WP Engine. Every Git deployment resulted in a white screen of death due to PHP errors caused by Blade templates failing to save in the intended cache directory. The solution, recommended by WP Engine support, was to use the system's /tmp directory. While this workaround prevented deployment errors, it undermined the purpose of cached templates, as temporary files are cleared by PHP's garbage collection. Debugging and implementing this solution consumed significant time, time that could have been avoided had the theme been designed with hosting compatibility in mind.

4. Breaking Changes And Upgrade Woes
Upgrading from Sage 9 to Sage 10, or even from older versions of Roots, often feels like a complete rebuild. These breaking changes create friction for businesses that want long-term stability. Clients, understandably, are unwilling to pay for what amounts to refactoring without a visible return on investment. As a result, these sites stagnate, locked into outdated versions of the framework, creating problems with dependency management (e.g., Composer packages, Node.js versions) and documentation mismatches.

One agency subcontract I worked on recently gave me insight into Sage 10's latest approach. Even on small microsites with minimal custom logic, I found the Bud-based build system sluggish, with watch processes taking over three seconds to reload. For developers accustomed to faster workflows, this is unacceptable. Additionally, Sage 10 introduced new patterns and directives that departed significantly from Sage 9, adding a fresh learning curve. While I understand the appeal of mirroring Laravel's structure, I couldn't shake the feeling that this complexity was unnecessary for WordPress.
By sticking to simpler approaches, the footprint could be smaller, the performance faster, and the maintenance much easier.

The Cost Of Over-Engineering
The issues above boil down to one central theme: over-engineering. Frameworks like Sage introduce complexity that, while beneficial in theory, often outweighs the practical benefits for most WordPress projects. When you factor in real-world constraints like tight budgets, frequent developer turnover, and the need for intuitive codebases, the case for a minimal approach becomes clear.

Minimal WordPress setups embrace simplicity:

- No abstraction for abstraction's sake: The traditional WordPress theme hierarchy is straightforward, predictable, and accessible to a broad developer audience.
- Reduced tooling overhead: Avoiding reliance on tools like Webpack or Blade removes potential points of failure and speeds up workflows.
- Future-proofing: A standard theme structure remains compatible with WordPress core updates and developer expectations, even a decade later.

In my experience, minimal setups foster easier collaboration and faster problem-solving. They focus on solving the problem rather than adhering to overly opinionated patterns.

Real-World Example
Like many things, this all sounds great and makes sense in theory, but what does it look like in practice? Seeing is believing, so I've created a minimal theme that exemplifies some of the concepts I've described here. This theme is a work in progress, and there are plenty of areas where it needs work. It provides the top features that custom WordPress developers seem to want most in a theme framework.

View Code in GitHub

Modern Features
Before we dive in, I'll list out some of the key benefits of what's going on in this theme.
Above all of these, working minimally and keeping things simple and easy to understand is by far the largest benefit, in my opinion.

- A watch task that compiles and reloads in under 100ms;
- Sass for CSS preprocessing, coupled with CSS written in BEM syntax;
- Native ES modules;
- Composer package management;
- Twig view templating;
- View-controller pattern;
- Namespaced PHP for isolation;
- Built-in support for the Advanced Custom Fields plugin;
- Global context variables for common WordPress data: site_name, site_description, site_url, theme_dir, theme_url, primary_nav, ACF custom fields, the_title(), the_content().

Templating Language
Twig is included with this theme, and it is used to load a small set of commonly used global context variables such as theme URL, theme directory, site name, site URL, and so on. It also includes some core functions, like the_content(), the_title(), and others you'd routinely use during the process of creating a custom theme. These global context variables and functions are available for all URLs.

While it could be argued that Twig is an unnecessary additional abstraction layer when we're trying to establish a minimal WordPress setup, I chose to include it because this type of abstraction is included in Sage. But it's also for a few other important reasons: Twig is old, dependable, and stable. You won't need to worry about any breaking changes in future versions, and it's widely in use today. All the features I commonly see used in Sage Blade templates can be handled similarly with Twig. There really isn't anything you can do with Blade that isn't possible with Twig.

Blade is a great templating language, but it's best suited for Laravel, in my opinion. BladeOne does provide a good way to use it as a standalone templating engine, but even then, it's still not as performant under pressure as Twig. Twig's added performance, when used with small, efficient contexts, allows us to avoid the complexity that comes with caching view output.
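To make the standalone-Twig idea concrete, here is a hedged sketch of what such a bootstrap could look like; the helper name and the views directory are illustrative, not the theme's actual code, and it assumes twig/twig was installed via Composer:

```php
<?php
// Illustrative minimal Twig bootstrap for a WordPress theme.
// Assumes `composer require twig/twig` and the Composer autoloader.
require_once __DIR__ . '/vendor/autoload.php';

use Twig\Environment;
use Twig\Loader\FilesystemLoader;

function theme_twig(): Environment
{
    static $twig = null;
    if ($twig === null) {
        $loader = new FilesystemLoader(get_template_directory() . '/views');
        // Compile on the fly: no cache directory that a host might block.
        $twig = new Environment($loader, ['cache' => false, 'autoescape' => 'html']);
    }
    return $twig;
}

// A template file acting as a "model" can then hand off a lean context:
// echo theme_twig()->render('front-page.twig', [
//     'site_name' => get_bloginfo('name'),
//     'site_url'  => home_url(),
// ]);
```

Because nothing is written to disk, errors surface with a direct stack trace into the .twig file rather than into a compiled cache.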
Compile-on-the-fly Twig is very close to the same speed as raw PHP in this use case. Most importantly, Twig was built to be portable. It can be installed with Composer and used within the theme with just 55 lines of code. Now, in a real project, this would probably be more than 55 lines, but either way, it is, without a doubt, much easier to understand and work with than Blade. Blade was built for use in Laravel, and it's just not nearly as portable. It will be significantly easier to identify issues, track them down with a direct stack trace, and fix them with Twig.

The view context in this theme is deliberately kept sparse; during a site build, you'll add what you specifically need for a particular site. A lean context for your views helps with performance and workflow.

Models & Controllers
The template hierarchy follows the patterns of good ol' WordPress, and while some developers don't like this, it is undoubtedly the most widely accepted and commonly understood standard. Each standard theme file uses a model where you define your data structures with PHP and hand off the theme as the context to a .twig view file. Developers like the structure of separating server-side logic from a template, and in a classic MVC/MVVM pattern, we have our model, view, and controller. Here, I'm using the standard WordPress theme templates as models.

Currently, template files include some useful basics.
You're likely familiar with these standard templates, but I'll list them here for posterity:

- 404.php: Displays a custom "Page Not Found" message when a visitor tries to access a page that doesn't exist.
- archive.php: Displays a list of posts from a particular archive, such as a category, date, or tag archive.
- author.php: Displays a list of posts by a specific author, along with the author's information.
- category.php: Displays a list of posts from a specific category.
- footer.php: Contains the footer section of the theme, typically including closing HTML tags and widgets or navigation in the footer area.
- front-page.php: The template used for the site's front page, either static or a blog, depending on the site settings.
- functions.php: Adds custom functionality to the theme, such as registering menus and widgets or adding theme support for features like custom logos or post thumbnails.
- header.php: Contains the header section of the theme, typically including the site's title, meta tags, and navigation menu.
- index.php: The fallback template for all WordPress pages, used if no other, more specific template (like category.php or single.php) is available.
- page.php: Displays individual static pages, such as "About" or "Contact" pages.
- screenshot.png: An image of the theme's design, shown in the WordPress theme selector to give users a preview of the theme's appearance.
- search.php: Displays the results of a search query, showing posts or pages that match the search terms entered by the user.
- single.php: Displays individual posts, often used for blog posts or custom post types.
- tag.php: Displays a list of posts associated with a specific tag.

Extremely Fast Build Process For SCSS And JavaScript
The build is curiously different in this theme, but out of the box, you can compile SCSS to CSS, work with native JavaScript modules, and have a live reload watch process with a tiny footprint.
Look inside the bin/*.js files, and you'll see everything that's happening. There are just two commands here, and all web developers should be familiar with them:

- Watch: While developing, it will reload or inject JavaScript and CSS changes into the browser automatically using Browsersync.
- Build: This task compiles all top-level *.scss files efficiently. There's room for improvement, but keep in mind this theme serves as a concept.

Now for a curveball: there is no compile process for JavaScript. File changes will still be injected into the browser with hot module replacement during watch mode, but we don't need to compile anything. WordPress will load theme JavaScript as native ES modules, using WordPress 6.5's support for ES modules. My reasoning is that many sites now pass through Cloudflare, so modern compression is handled for JavaScript automatically. Many specialized WordPress hosts do this as well. When comparing minification to GZIP, it's clear that minification provides trivial gains in file reduction. The vast majority of file reduction is provided by CDN and server compression. Based on this, I believe the benefits of a fast workflow far outweigh the additional overhead of pulling in build steps for webpack, Rollup, or other similar packaging tools. We're fortunate that the web fully supports ES modules today, so there is really no reason why we should need to compile JavaScript at all if we're not using a JavaScript framework like Vue, React, or Svelte.

A Contrarian Approach
My perspective and the ideas I've shared here are undoubtedly contrarian. Like anything alternative, this is bound to ruffle some feathers. Frameworks like Sage are celebrated in developer circles, with strong communities behind them. For certain use cases, like large-scale, enterprise-level projects with dedicated development teams, they may indeed be the right fit. For the vast majority of WordPress projects I encounter, however, the added complexity creates more problems than it solves.
As developers, our goal should be to build solutions that are not only functional and performant but also maintainable and approachable for the next person who inherits them. Simplicity, in my view, is underrated in modern web development. A minimal WordPress setup, tailored to the specific needs of the project without unnecessary abstraction, is often the leaner, more sustainable choice.

Conclusion
Inheriting framework-based projects has taught me invaluable lessons about the real-world impact of theme frameworks. While they may impress in an initial pitch or during development, the long-term consequences of added complexity often outweigh the benefits. By adopting a minimal WordPress approach, we can build sites that are easier to maintain, faster to onboard new developers, and more resilient to change. Modern tools have their place, but minimalism never goes out of style. When you choose simplicity, you choose a codebase that works today, tomorrow, and years down the line. Isn't that what great web development is all about?
  • Sunshine And March Vibes (2025 Wallpapers Edition)
    smashingmagazine.com
With the days getting noticeably longer in the northern hemisphere, the sun coming out, and the flowers blooming, March fuels us with fresh energy. And even if spring is far away in your part of the world, you might feel that 2025 has gained full speed by now, the perfect opportunity to put all those plans you've made and ideas you've been carrying around to action!

To cater for some extra inspiration this March, artists and designers from across the globe once again challenged their creative skills and designed a new batch of desktop wallpapers to accompany you through the month. As every month, you'll find their artworks compiled below, together with some timeless March favorites from our archives that are just too good to be forgotten.

This post wouldn't exist without the kind support of our wonderful community who diligently contributes their designs each month anew to keep the steady stream of wallpapers flowing. So, a huge thank-you to everyone who shared their artwork with us this time around! If you, too, would like to get featured in one of our upcoming wallpapers posts, please don't hesitate to join in. We can't wait to see what you'll come up with! Happy March!

You can click on every image to see a larger preview. We respect and carefully consider the ideas and motivation behind each and every artist's work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren't anyhow influenced by us but rather designed from scratch by the artists themselves.

Submit your wallpaper design! Feeling inspired? We are always looking for creative talent and would love to feature your desktop wallpaper in one of our upcoming posts.
Join in

Bee-utiful Smile
Designed by Doreen Bethge from Germany.
preview
with calendar: 640x480, 800x600, 1024x768, 1152x864, 1280x720, 1280x800, 1280x960, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3200x2000
without calendar: 640x480, 800x600, 1024x768, 1152x864, 1280x720, 1280x800, 1280x960, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3200x2000

Coffee Break
Designed by Ricardo Gimenes from Spain.
preview
with calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160
without calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160

Rosa Parks
March, the month of transition between winter and spring, is dedicated to Rosa Parks and her great phrase: "You must never be fearful about what you are doing when it is right."
Designed by Veronica Valenzuela from Spain.
preview
with calendar: 640x480, 800x480, 1024x768, 1280x720, 1280x800, 1440x900, 1600x1200, 1920x1080, 1920x1440, 2560x1440
without calendar: 640x480, 800x480, 1024x768, 1280x720, 1280x800, 1440x900, 1600x1200, 1920x1080, 1920x1440, 2560x1440

So Tire
Designed by Ricardo Gimenes from Spain.
preview
with calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160
without calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160

Time To Wake Up
Rays of sunlight had cracked into the bear's cave. He slowly opened one eye and caught a glimpse of nature in blossom. Is it spring already? Oh, but he is so sleepy. He doesn't want to wake up, not just yet. So he continues dreaming about those sweet sluggish days while everything around him is blooming. Designed by PopArt Studio from Serbia.
preview
without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Music From The Past
Designed by Ricardo Gimenes from Spain.
preview
without calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160

Northern Lights
Spring is getting closer, and we are waiting for it with open arms. This month, we want to enjoy discovering the northern lights. To do so, we are going to Alaska, where we have the faithful company of our friend White Fang.
Designed by Veronica Valenzuela Jimenez from Spain.
preview
without calendar: 640x480, 800x480, 1024x768, 1280x720, 1280x800, 1440x900, 1600x1200, 1920x1080, 1920x1440, 2560x1440

Queen Bee
Spring is coming! Birds are singing, flowers are blooming, bees are flying. Enjoy this month! Designed by Melissa Bogemans from Belgium.
preview
without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Botanica
Designed by Vlad Gerasimov from Georgia.
preview
without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Let's Spring
After some freezing months, it's time to enjoy the sun and flowers. It's party time, colours are coming, so let's spring! Designed by Colorsfera from Spain.
preview
without calendar: 320x480, 1024x768, 1024x1024, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Spring Bird
Designed by Nathalie Ouederni from France.
preview
without calendar: 1024x768, 1280x1024, 1440x900, 1680x1200, 1920x1200, 2560x1440

Explore The Forest
This month, I want to go to the woods and explore my new world in sunny weather.
Designed by Zi-Cing Hong from Taiwan.
preview
without calendar: 1024x768, 1152x864, 1280x720, 1280x800, 1280x960, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Tacos To The Moon And Back
Designed by Ricardo Gimenes from Spain.
preview
without calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160

Daydreaming
Daydreaming of better things, of lovely things, of saddening things. Designed by Bhabna Basak from India.
preview
without calendar: 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Ballet
A day, even a whole month, isn't enough to show how much a woman should be appreciated. Dear ladies, any day or month are yours if you decide so. Designed by Ana Masnikosa from Belgrade, Serbia.
preview
without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1040, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Awakening
I am the kind of person who prefers the cold, but I do love spring since it's the magical time when flowers and trees come back to life and fill the landscape with beautiful colors. Designed by Maria Keller from Mexico.
preview
without calendar: 320x480, 640x480, 640x1136, 750x1334, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1242x2208, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

MARCHing Forward
If all you want is a little orange dinosaur MARCHing (okay, I think you get the pun) across your monitor, this wallpaper was made just for you! This little guy is my design buddy at the office and sits by (and sometimes on top of) my monitor. This is what happens when you have designer's block and a DSLR.
Designed by Paul Bupe Jr from Statesboro, GA.
preview
without calendar: 1024x768, 1280x1024, 1440x900, 1920x1080, 1920x1200, 2560x1440

Jingzhe
Jīngzhé is the third of the 24 solar terms in the traditional East Asian calendars. The word means the awakening of hibernating insects: 驚 is to startle, and 蟄 means hibernating insects. Traditional Chinese folklore says that during Jingzhe, thunderstorms will wake up the hibernating insects, which implies that the weather is getting warmer. Designed by Sunny Hong from Taiwan.
preview
without calendar: 800x600, 1280x720, 1280x1024, 1366x768, 1400x1050, 1680x1200, 1920x1080, 2560x1440

Fresh Lemons
Designed by Nathalie Ouederni from France.
preview
without calendar: 320x480, 1024x768, 1280x1024, 1440x900, 1600x1200, 1680x1200, 1920x1200, 2560x1440

Pizza Time
Who needs an excuse to look at pizza all month? Designed by James Mitchell from the United Kingdom.
preview
without calendar: 1280x720, 1280x800, 1366x768, 1440x900, 1680x1050, 1920x1080, 1920x1200, 2560x1440, 2880x1800

Questions
Doodles are slowly becoming my trademark, so I just had to use them to express this phrase I'm fond of recently. A bit enigmatic, philosophical. Inspiring, isn't it? Designed by Marta Paderewska from Poland.
preview
without calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

The Unknown
I made a connection between the dark side and the unknown, lighted and catchy area. Designed by Valentin Keleti from Romania.
preview
without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Waiting For Spring
As days are getting longer again and the first few flowers start to bloom, we are all waiting for spring to finally arrive.
Designed by Naioo from Germany.
preview
without calendar: 1280x800, 1366x768, 1440x900, 1680x1050, 1920x1080, 1920x1200

St. Patrick's Day
On the 17th March, raise a glass and toast St. Patrick on St. Patrick's Day, the Patron Saint of Ireland. Designed by Ever Increasing Circles from the United Kingdom.
preview
without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1080x1080, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Spring Is Coming
This March, our calendar design epitomizes the heralds of spring. Soon enough, you'll be waking up to the singing of swallows, in a room full of sunshine, filled with the empowering smell of daffodil, the first springtime flowers. Spring is the time of rebirth and new beginnings, creativity and inspiration, self-awareness, and inner reflection. Have a budding, thriving spring! Designed by PopArt Studio from Serbia.
preview
without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1440x900, 1440x1050, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Happy Birthday Dr. Seuss!
March 2nd marks the birthday of the most creative and extraordinary author ever, Dr. Seuss! I have included an inspirational quote about learning to encourage everyone to continue learning new things every day. Designed by Safia Begum from the United Kingdom.
preview
without calendar: 800x450, 1280x720, 1366x768, 1440x810, 1600x900, 1680x945, 1920x1080, 2560x1440

Wake Up!
Early spring in March is for me the time when the snow melts, everything isn't very colorful. This is what I wanted to show. Everything comes to life slowly, as this bear. Flowers are banal, so instead of a purple crocus we have a purple bird-harbinger.
Designed by Marek Kedzierski from Poland.
preview
without calendar: 320x480, 1024x768, 1280x720, 1280x800, 1280x960, 1400x1050, 1600x1200, 1680x1050, 1920x1080, 1920x1200, 2560x1440

Spring Is Inevitable
Spring is round the corner. And very soon plants will grow on some other planets too. Let's be happy about a new cycle of life. Designed by Igor Izhik from Canada.
preview
without calendar: 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 2560x1600

Traveling To Neverland
This month we become children and we travel with Peter Pan. Let's go to Neverland! Designed by Veronica Valenzuela from Spain.
preview
without calendar: 640x480, 800x480, 1024x768, 1280x720, 1280x800, 1440x900, 1600x1200, 1920x1080, 1920x1440, 2560x1440

Let's Get Outside
Designed by Lívia Lénárt from Hungary.
preview
without calendar: 1024x768, 1280x1024, 1366x768, 1600x1200, 1680x1200, 1920x1080, 1920x1200, 2560x1440
  • The Human Element: Using Research And Psychology To Elevate Data Storytelling
    smashingmagazine.com
Data storytelling is a powerful communication tool that combines data analysis with narrative techniques to create impactful stories. It goes beyond presenting raw numbers by transforming complex data into meaningful insights that can drive decisions, influence behavior, and spark action. When done right, data storytelling simplifies complex information, engages the audience, and compels them to act. Effective data storytelling allows UX professionals to communicate the "why" behind their design choices, advocate for user-centered improvements, and ultimately create more impactful and persuasive presentations. This translates to stronger buy-in for research initiatives, increased alignment across teams, and, ultimately, products and experiences that truly meet user needs.

For instance, The New York Times' "Snow Fall" data story (Figure 1) used data to immerse readers in the tale of a deadly avalanche through interactive visuals and text, while The Guardian's "The Counted" (Figure 2) powerfully illustrated police violence in the U.S. by humanizing data through storytelling. These examples show that effective data storytelling can leave lasting impressions, prompting readers to think differently, act, or make informed decisions.

The importance of data storytelling lies in its ability to:

- Simplify complexity: It makes data understandable and actionable.
- Engage and persuade: Emotional and cognitive engagement ensures audiences not only understand but also feel compelled to act.
- Bridge gaps: Data storytelling connects the dots between information and human experience, making the data relevant and relatable.

While there are numerous models of data storytelling, here are a few high-level areas of focus UX practitioners should have a grasp on:

Narrative Structures: Traditional storytelling models like the hero's journey (Vogler, 1992) or the Freytag pyramid (Figure 3) provide a backbone for structuring data stories.
These models help create a beginning, rising action, climax, falling action, and resolution, keeping the audience engaged.

Data Visualization: Broadly speaking, these are the tools and techniques for visualizing data in our stories. Interactive charts, maps, and infographics (Cairo, 2016) transform raw data into digestible visuals, making complex information easier to understand and remember.

Narrative Structures For Data

Moving beyond these basic structures, let's explore how more sophisticated narrative techniques can enhance the impact of data stories:

The Three-Act Structure
This approach divides the data story into setup, confrontation, and resolution. It helps build context, present the problem or insight, and offer a solution or conclusion (Few, 2005).

The Hero's Journey (Data Edition)
We can frame a data set as a problem that needs a hero to overcome. In this case, the hero is often the audience or the decision-maker who needs to use the data to solve a problem. The data itself becomes the journey, revealing challenges, insights, and, ultimately, a path to resolution.

Example: Presenting data on declining user engagement could follow the hero's journey. The call to adventure is the declining engagement. The challenges are revealed through data points showing where users are dropping off. The insights are uncovered through further analysis, revealing the root causes. The resolution is the proposed solution, supported by data, that the audience (the hero) can implement.

Problems With Widely Used Data Storytelling Models

Many data storytelling models follow a traditional, linear structure: data selection, audience tailoring, storyboarding with visuals, and a call to action. While these models aim to make data more accessible, they often fail to engage the audience on a deeper level, leading to missed opportunities.
This happens because they prioritize the presentation of data over the experience of the audience, neglecting how different individuals perceive and process information. While existing data storytelling models adhere to a structured and technically correct approach to data creation, they often fall short of fully analyzing and understanding their audience. This gap weakens their overall effectiveness and impact.

Cognitive Overload
Presenting too much data without context or a clear narrative overwhelms the audience. Instead of enlightenment, they experience confusion and disengagement. It's like trying to drink from a firehose; the sheer volume becomes counterproductive. This overload can be particularly challenging for individuals with cognitive differences who may require information to be presented in smaller, more digestible chunks.

Emotional Disconnect
Data-heavy presentations often fail to establish an emotional connection, which is crucial for driving audience engagement and action. People are more likely to remember and act upon information that resonates with their feelings and values.

Lack of Personalization
Many data stories adopt a one-size-fits-all approach. Without tailoring the narrative to specific audience segments, the impact is diluted. A message that resonates with a CEO might not land with frontline employees.

Over-Reliance on Visuals
While visuals are essential for simplifying data, they are insufficient without a cohesive narrative to provide context and meaning, and they may not be accessible to all audience members.

These shortcomings reveal a critical flaw: while current models successfully follow a structured data creation process, they often neglect the deeper, audience-centered analysis required for actual storytelling effectiveness.
To bridge this gap, data storytelling must evolve beyond simply presenting information; it should prioritize audience understanding, engagement, and accessibility at every stage.

Improving On Traditional Models

Traditional models can be improved by focusing more on the following two critical components:

Audience understanding: A greater focus can be placed on who the audience is, what they need, and how they perceive information. Traditional models should consider the unique characteristics and needs of specific audiences; a lack of audience understanding can lead to data stories that are irrelevant, confusing, or even misleading. Effective data storytelling requires a deep understanding of the audience's demographics, psychographics, and information needs. This includes understanding their level of knowledge about the topic, their prior beliefs and attitudes, and their motivations for seeking information. By tailoring the data story to a specific audience, storytellers can increase engagement, comprehension, and persuasion.

Psychological principles: These models could be improved with insights from psychology that explain how people process information and make decisions. Without these elements, even the most beautifully designed data story may fall flat.

By incorporating audience understanding and psychological principles into their storytelling process, data storytellers can create more effective and engaging narratives that resonate with their audience and drive desired outcomes.

Persuasion In Data Storytelling

All storytelling involves persuasion. Even if it's a poorly told story and your audience chooses to ignore your message, you've persuaded them to do that.
When your audience feels that you understand them, they are more likely to be persuaded by your message. Data-driven stories that speak to their hearts and minds are more likely to drive action. You can frame your message effectively when you have a deeper understanding of your audience.

Applying Psychological Principles To Data Storytelling

Humans process information based on psychological cues such as cognitive ease, social proof, and emotional appeal. By incorporating these principles, data storytellers can make their narratives more engaging, memorable, and persuasive. Psychological principles help data storytellers tap into how people perceive, interpret, and remember information.

The Theory of Planned Behavior

While there is no single truth when it comes to how human behavior is created or changed, it is important for a data storyteller to use a theoretical framework to ensure they address the appropriate psychological factors of their audience. The Theory of Planned Behavior (TPB) is a commonly cited theory of behavior change in academic psychology research and courses. It's useful for creating a reasonably effective framework to collect audience data and build a data story around it. The TPB (Ajzen, 1991) (Figure 5) aims to predict and explain human behavior. It consists of three key components:

Attitude: This refers to the degree to which a person has a favorable or unfavorable evaluation of the behavior in question. An example of attitudes in the TPB is a person's belief about the importance of regular exercise for good health. If an individual strongly believes that exercise is beneficial, they are likely to have a favorable attitude toward engaging in regular physical activity.

Subjective Norms: These are the perceived social pressures to perform or not perform the behavior. Keeping with the exercise example, this would be how a person thinks their family, peers, community, social media, and others perceive the importance of regular exercise for good health.
Perceived Behavioral Control: This component reflects the perceived ease or difficulty of performing the behavior. For our physical activity example, does the individual believe they have access to exercise in terms of time, equipment, physical capability, and other potential aspects that make them feel more or less capable of engaging in the behavior?

As shown in Figure 5, these three components interact to create behavioral intentions, which serve as a proxy for actual behaviors that we often don't have the resources to measure in real time with research participants (Ajzen, 1991). UX researchers and data storytellers should develop a working knowledge of the TPB or another suitable psychological theory before moving on to measure the audience's attitudes, norms, and perceived behavioral control. We have included additional resources to support your learning about the TPB in the references section of this article.

How To Understand Your Audience And Apply Psychological Principles

OK, we've covered the importance of audience understanding and psychology. These two principles serve as the foundation of the proposed model of storytelling we're putting forth. Let's explore how to integrate them into your storytelling process.

Introducing The Audience Research Informed Data Storytelling Model (ARIDSM)

At the core of successful data storytelling lies a deep understanding of your audience's psychology. Here's a five-step process to integrate UX research and psychological principles effectively into your data stories:

Step 1: Define Clear Objectives
Before diving into data, it's crucial to establish precisely what you aim to achieve with your story. Do you want to inform, persuade, or inspire action? What specific message do you want your audience to take away?

Why it matters: Defining clear objectives provides a roadmap for your storytelling journey. It ensures that your data, narrative, and visuals are all aligned toward a common goal.
Without this clarity, your story risks becoming unfocused and losing its impact.

How to execute Step 1: Start by asking yourself:

- What is the core message I want to convey?
- What do I want my audience to think, feel, or do after experiencing this story?
- How will I measure the success of my data story?

Frame your objectives using action verbs and quantifiable outcomes. For example, instead of "raise awareness about climate change," aim to "persuade 20% of the audience to adopt one sustainable practice."

Example: Imagine you're creating a data story about employee burnout. Your objective might be to convince management to implement new policies that promote work-life balance, with the goal of reducing reported burnout cases by 15% within six months.

Step 2: Conduct UX Research To Understand Your Audience
This step involves gathering insights about your audience: their demographics, needs, motivations, pain points, and how they prefer to consume information.

Why it matters: Understanding your audience is fundamental to crafting a story that resonates. By knowing their preferences and potential biases, you can tailor your narrative and data presentation to capture their attention and ensure the message is clearly understood.

How to execute Step 2: Employ UX research methods like surveys, interviews, persona development, and testing the message with potential audience members.

Example: If your data story aims to encourage healthy eating habits among college students, your research might include a survey of students to determine what attitudes exist toward specific types of healthy foods, so that you can apply that knowledge in your data story.

Step 3: Analyze And Select Relevant Audience Data
This step bridges the gap between raw data and meaningful insights.
It involves exploring your data to identify patterns, trends, and key takeaways that support your objectives and resonate with your audience.

Why it matters: Careful data analysis ensures that your story is grounded in evidence and that you're using the most impactful data points to support your narrative. This step adds credibility and weight to your story, making it more convincing and persuasive.

How to execute Step 3:

- Clean and organize your data. Ensure accuracy and consistency before analysis.
- Identify key variables and metrics. These will be determined by the psychological principle that informed your research.

Using the TPB, we might look closely at how we measured social norms to understand directionally how the audience perceives social norms around the topic of the data story you are sharing, allowing you to frame your call to action in ways that resonate with those norms. You might run a variety of statistics at this point, including factor analysis to create groups based on similar traits, t-tests to determine whether averages on your measurements are significantly different between groups, and correlations to see if there might be an assumed direction between scores on various items.

Example: If your objective is to demonstrate the effectiveness of a new teaching method, you might analyze how your audience perceives their peers' openness to adopting new methods, their belief that they are in control of the decision to use a new teaching method, and their attitude toward the effectiveness of their current teaching methods. This lets you create groups with varying levels of receptivity to trying new methods, so you can later tailor your data story for each group.

Step 4: Apply The Theory Of Planned Behavior Or Your Psychological Principle Of Choice [Done Simultaneously With Step 3]
In this step, you will see that the Theory of Planned Behavior (TPB) provides a robust framework for understanding the factors that drive human behavior.
It posits that our intentions, which are the strongest predictors of our actions, are shaped by three core components: attitudes, subjective norms, and perceived behavioral control. By consciously incorporating these elements into your data story, you can significantly enhance its persuasive power.

Why it matters: The TPB offers valuable insights into how people make decisions. By aligning your narrative with these psychological drivers, you increase the likelihood of influencing your audience's intentions and, ultimately, their behavior. This step adds a layer of strategic persuasion to your data storytelling, making it more impactful and effective.

How to execute Step 4: Here's how to leverage the TPB in your data story:

Influence Attitudes: Present data and evidence that highlight the positive consequences of adopting the desired behavior. Frame the behavior as beneficial, valuable, and aligned with the audience's values and aspirations. This is where having a deep knowledge of the audience is helpful. Let's imagine you are creating a data story on exercise, and your call to action promotes exercising daily. If you know your audience has a highly positive attitude toward exercise, you can capitalize on that and frame your language around the benefits of exercising, increasing exercise, or specific exercises that might be best suited for the audience. It's about framing exercise not just as a physical benefit but as a holistic improvement to their life. You can also tie it to their identity, positioning exercise as an integral part of living the kind of life they aspire to.

Shape Subjective Norms: Demonstrate that the desired behavior is widely accepted and practiced by others, especially those the audience admires or identifies with. Knowing ahead of time whether your audience thinks daily exercise is something their peers approve of or engage in will allow you to shape your messaging accordingly.
Highlight testimonials, success stories, or case studies from individuals who mirror the audience's values. If you were to find that the audience does not consider exercise to be normative amongst peers, you would look for examples of similar groups of people who do exercise. For example, if your audience is in a certain age group, you might focus on what data you have that supports a large percentage of those in their age group engaging in exercise.

Enhance Perceived Behavioral Control: Address any perceived barriers to adopting the desired behavior and provide practical solutions. For instance, when promoting daily exercise, it's important to acknowledge the common obstacles people face (lack of time, resources, or physical capability) and demonstrate how these can be overcome.

Step 5: Craft A Balanced And Persuasive Narrative
This is where you synthesize your data, audience insights, psychological principles (including the TPB), and storytelling techniques into a compelling and persuasive narrative. It's about weaving together the logical and emotional elements of your story to create an experience that resonates with your audience and motivates them to act.

Why it matters: A well-crafted narrative transforms data from dry statistics into a meaningful and memorable experience. It ensures that your audience not only understands the information but also feels connected to it on an emotional level, increasing the likelihood of them internalizing the message and acting upon it.

How to execute Step 5: Structure your story strategically. Use a clear narrative arc that guides your audience through the information. Begin by establishing the context and introducing the problem, then present your data-driven insights in a way that supports your objectives and addresses the TPB components.
Conclude with a compelling call to action that aligns with the attitudes, norms, and perceived control you've cultivated throughout the narrative.

Example: In a data story about promoting exercise, you could:

- Determine what stories might be available using the data you have collected or obtained. In this example, let's say you work for a city planning office and have data suggesting people aren't currently biking as frequently as they could, even if they are bike owners.
- Begin with a relatable story about lack of exercise and its impact on people's lives. Then, present data on the benefits of cycling, highlighting its positive impact on health, socializing, and personal feelings of well-being (attitudes).
- Integrate TPB elements: Showcase stories of people who have successfully incorporated cycling into their daily commute (subjective norms). Provide practical tips on bike safety, route planning, and finding affordable bikes (perceived behavioral control).
- Use infographics to compare commute times and costs between driving and cycling. Show maps of bike-friendly routes and visually appealing images of people enjoying cycling.
- Call to action: Encourage the audience to try cycling for a week and provide links to resources like bike share programs, cycling maps, and local cycling communities.

Evaluating The Method

Our next step is to test our hypothesis that incorporating audience research and psychology into creating a data story will lead to more powerful results. We have conducted preliminary research using messages focused on climate change, and our results suggest some support for our assertion. We purposely chose a controversial topic because we believe data storytelling can be a powerful tool. If we want to truly realize the benefits of effective data storytelling, we need to focus on topics that matter.
We also know that academic research suggests it is more difficult to shift opinions or generate behavior around topics that are polarizing (at least in the US), such as climate change. We are not ready to share the full results of our study; we will share those in an academic journal and in conference proceedings. Here is a look at how we set up the study and how you might do something similar when either creating a data story using our method or doing your own research to test our model. You will see that it closely aligns with the model itself, with the added steps of testing the message against a control message and taking measurements of the actions the message(s) are likely to generate.

Step 1: We chose our topic and the data set we wanted to explore. As mentioned, we purposely went with a polarizing topic. My academic background was in messaging around conservation issues, so we explored that. We used data from a publicly available data set that states July 2023 was the hottest month ever recorded.

Step 2: We identified our audience and took basic measurements. We decided our audience would be members of the general public who do not have jobs working directly with climate data or in other fields relevant to climate change science. We wanted a diverse range of ages and backgrounds, so we screened for this in the questions on the survey we used to measure the TPB components. We created a survey to measure the elements of the TPB as they relate to climate change and administered it via a Google Forms link that we shared directly, in social media posts, and on online message boards related to climate change and survey research.

Step 3: We analyzed our data and broke our audience into groups based on key differences. This part required a bit of statistical know-how. Essentially, we entered all of the responses into a spreadsheet and ran a factor analysis to define groups based on shared attributes.
In our case, we found two distinct groups of respondents. We then looked deeper into the individual differences between the groups; e.g., group 1 had a notably higher level of positive attitude towards taking action to remediate climate change.

Step 4 [remember this happens simultaneously with Step 3]: We incorporated aspects of the TPB in how we framed our data analysis. As we created our groups and looked at the responses to the survey, we made sure to note how this might impact the story for our various groups. Using our previous example, a group with a higher positive attitude toward taking action might need less convincing to do something about climate change and more information on what exactly they can do.

Table 1 contains examples of the questions we asked related to the TPB. We used the guidance provided here to generate the survey items to measure the TPB related to climate change activism. Note that even the academic who created the TPB states there are no standardized questions validated to measure the concepts for each individual topic.

Item | Measures | Scale
How beneficial do you believe individual actions are compared to systemic changes (e.g., government policies) in tackling climate change? | Attitude | 1 to 5, with 1 being "not beneficial" and 5 being "extremely beneficial"
How much do you think the people you care about (family, friends, community) expect you to take action against climate change? | Subjective Norms | 1 to 5, with 1 being "they do not expect me to take action" and 5 being "they expect me to take action"
How confident are you in your ability to overcome personal barriers when trying to reduce your environmental impact? | Perceived Behavioral Control | 1 to 5, with 1 being "not at all confident" and 5 being "extremely confident"

Table 1: Examples of questions we used to measure the TPB factors.
We asked multiple questions for each factor and then generated a combined mean score for each component.

Step 5: We created data stories aligned with the groups, plus a control story. We created multiple stories to align with the groups we identified in our audience. We also created a control message that lacked substantial framing in any direction. See below for an example of the control data story (Figure 7) and one of the customized data stories (Figure 8) we created.

Step 6: We released the stories and measured the likelihood of acting. Specific to our study, we asked the participants how likely they were to "Click here to LEARN MORE." Our hypothesis was that individuals would express a notably higher likelihood of wanting to click to learn more on the data story aligned with their grouping, as compared to the competing group's story and the control story.

Step 7: We analyzed the differences between the preexisting groups and their stated likelihood of acting. As mentioned, our findings are still preliminary, and we are looking at ways to increase our response rate so we can present statistically substantiated findings. Our initial findings are that we do see small differences between the responses to the tailored data stories and the control data story, which is directionally what we would expect. If you are going to conduct a similar study or test out your own messages, you would likewise be looking for results suggesting that your ARIDSM-derived message is more likely to generate the expected outcome than a control message or a non-tailored message.

Overall, we feel there is an exciting possibility here, and future research will help us refine exactly what is critical about generating a message that will have a positive impact on your audience.
We also expect there are better models from psychology to use to frame your measurements and message, depending on the audience and topic. For example, you might feel Maslow's hierarchy of needs is more relevant to your data storytelling. You would then take measurements related to these needs from your audience and frame the data story around how a decision might help meet their needs.

Elevate Your Data Storytelling

Traditional models of data storytelling, while valuable, often fall short of effectively engaging and persuading audiences. This is primarily due to their neglect of crucial aspects such as audience understanding and the application of psychological principles. By incorporating these elements into the data storytelling process, we can create more impactful and persuasive narratives.

The five-step framework proposed in this article (defining clear objectives, conducting UX research, analyzing data, applying psychological principles, and crafting a balanced narrative) provides a roadmap for creating data stories that resonate with audiences on both a cognitive and emotional level. This approach ensures that data is not merely presented but is transformed into a meaningful experience that drives action and fosters change. As data storytellers, embracing this human-centric approach allows us to unlock the full potential of data and create narratives that truly inspire and inform.

Effective data storytelling isn't a black box. You can test your data stories for effectiveness using the same research process we are using to test our hypothesis.
While this testing requires an additional investment of time, you will earn it back in the form of a stronger impact on your audience when they encounter your data story, provided the tested story proves significantly more impactful than a control message or other candidate messages that don't incorporate the psychological traits of your audience. Please feel free to use our method and provide any feedback on your experience to the author.
  • Human-Centered Design Through AI-Assisted Usability Testing: Reality Or Fiction?
    smashingmagazine.com
Unmoderated usability testing has been steadily growing more popular with the assistance of online UX research tools. Allowing participants to complete usability testing without a moderator, at their own pace and convenience, can have a number of advantages.

The first is the liberation from a strict schedule and the availability of moderators, meaning that a lot more participants can be recruited on a more cost-effective and quick basis. It also lets your team see how users interact with your solution in their natural environment, with the setup of their own devices. Overcoming the challenges of distance and differences in time zones in order to obtain data from all around the globe also becomes much easier.

However, forgoing the use of moderators also has its drawbacks. The moderator brings flexibility, as well as a human touch, into usability testing. Since they are in the same (virtual) space as the participants, the moderator usually has a good idea of what's going on. They can react in real time depending on what they witness the participant do and say. A moderator can carefully remind the participants to vocalize their thoughts. To the participant, thinking aloud in front of a moderator can also feel more natural than just talking to themselves. When the participant does something interesting, the moderator can prompt them for further comment.

Meanwhile, a traditional unmoderated study lacks such flexibility. In order to complete tasks, participants receive a fixed set of instructions. Once they are done, they can be asked to complete a static questionnaire, and that's it. The feedback that the research & design team receives will be completely dependent on what information the participants provide on their own. Because of this, the phrasing of instructions and questions in unmoderated testing is extremely crucial.
However, even if everything is planned out perfectly, the lack of adaptive questioning means that a lot of information will still remain unsaid, especially with regular people who are not trained in providing user feedback. If the usability test participant misunderstands a question or doesn't answer completely, the moderator can always ask a follow-up to get more information. A question then arises: Could something like that be handled by AI to upgrade unmoderated testing?

Generative AI could present a new, potentially powerful tool for addressing this dilemma once we consider its current capabilities. Large language models (LLMs), in particular, can lead conversations that appear almost humanlike. If LLMs could be incorporated into usability testing to interactively enhance the collection of data by conversing with the participant, they might significantly augment the ability of researchers to obtain detailed personal feedback from great numbers of people. With human participants as the source of the actual feedback, this is an excellent example of human-centered AI, as it keeps humans in the loop.

There are quite a number of gaps in the research of AI in UX. To help fill them, we at UXtweak research have conducted a case study aimed at investigating whether AI could generate follow-up questions that are meaningful and result in valuable answers from the participants. Asking participants follow-up questions to extract more in-depth information is just one portion of the moderator's responsibilities. However, it is a reasonably scoped subproblem for our evaluation, since it encapsulates the ability of the moderator to react to the context of the conversation in real time and to encourage participants to share salient information.

Experiment Spotlight: Testing GPT-4 In Real-Time Feedback

The focus of our study was on the underlying principles rather than any specific commercial AI solution for unmoderated usability testing.
After all, AI models and prompts are being tuned constantly, so findings that are too narrow may become irrelevant a week or two after a new version is released. However, since AI models are also black boxes based on artificial neural networks, the method by which they generate their specific output is not transparent. Our results can show what you should be wary of to verify that an AI solution you use can actually deliver value rather than harm.

For our study, we used GPT-4, which at the time of the experiment was the most up-to-date model by OpenAI, also capable of fulfilling complex prompts (and, in our experience, dealing with some prompts better than the more recent GPT-4o). In our experiment, we conducted a usability test with a prototype of an e-commerce website. The tasks involved the common user flow of purchasing a product.

Note: See our article published in the International Journal of Human-Computer Interaction for more detailed information about the prototype, tasks, questions, and so on.

In this setting, we compared the results across three conditions:

- A regular static questionnaire made up of three pre-defined questions (Q1, Q2, Q3), serving as an AI-free baseline. Q1 was open-ended, asking the participants to narrate their experiences during the task. Q2 and Q3 can be considered non-adaptive follow-ups to Q1, since they asked participants more directly about usability issues and to identify things that they did not like.
- The question Q1, serving as a seed for up to three GPT-4-generated follow-up questions, as the alternative to Q2 and Q3.
- All three pre-defined questions, Q1, Q2, and Q3, each used as a seed for its own GPT-4 follow-up.

The following prompt was used to generate the follow-up questions:

To assess the impact of the AI follow-up questions, we then compared the results on both a quantitative and a qualitative basis.
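To make the setup concrete, here is a rough, hypothetical sketch of how a seed question, the participant's answer, and previously asked follow-ups might be wired into a chat-style LLM request. The message format, function name, and prompt wording are illustrative assumptions, not the prompt used in the study.

```javascript
// Hypothetical sketch: assembling the input for an LLM that generates
// follow-up questions from a seed question and the participant's answer.
function buildFollowUpRequest(seedQuestion, participantAnswer, askedSoFar) {
  return [
    {
      role: "system",
      content:
        "You are a usability test moderator. Ask one short, neutral, " +
        "non-leading follow-up question. Do not repeat topics the " +
        "participant has already covered.",
    },
    {
      role: "user",
      content:
        `Seed question: ${seedQuestion}\n` +
        `Participant answer: ${participantAnswer}\n` +
        `Follow-ups already asked: ${askedSoFar.join(" | ") || "none"}`,
    },
  ];
}

// The resulting messages array could be sent to any chat-completion API.
const messages = buildFollowUpRequest(
  "Please describe what you did during the task.",
  "I added the product to the cart, but the checkout button was hard to find.",
  []
);
```

Note how the already-asked follow-ups are passed along explicitly: as the results below show, giving the model this conversational context is necessary, but not sufficient, to prevent repetitive questions.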
One of the measures that we analyzed was the informativeness rating of responses, based on how useful they were at elucidating new usability issues encountered by the user. As seen in the figure below, informativeness dropped significantly between the seed questions and their AI follow-ups. The follow-ups rarely helped identify a new issue, although they did help elaborate further details.

The emotional reactions of the participants offer another perspective on AI-generated follow-up questions. Our analysis of the prevailing emotional valence, based on the phrasing of answers, revealed that answers started with a neutral sentiment. Afterward, the sentiment shifted toward the negative. In the case of the pre-defined questions Q2 and Q3, this could be seen as natural. While Q1 was open-ended, asking the participants to explain what they did during the task, Q2 and Q3 focused more on negative usability issues and other disliked aspects. Curiously, the follow-up chains generally received even more negative receptions than their seed questions, and not for the same reason.

Frustration was common as participants interacted with the GPT-4-driven follow-up questions. This is rather critical, considering that frustration with the testing process can sidetrack participants from taking usability testing seriously, hinder meaningful feedback, and introduce a negative bias.

A major aspect that participants were frustrated with was redundancy. Repetitiveness, such as re-explaining the same usability issue, was quite common. While pre-defined follow-up questions yielded 27-28% repeated answers (it's likely that participants had already mentioned aspects they disliked during the open-ended Q1), AI-generated questions yielded 21%. That's not much of an improvement, given that the comparison is made to questions that literally could not adapt to prevent repetition at all.
Furthermore, when AI follow-up questions were added to obtain more elaborate answers for every pre-defined question, the repetition ratio rose further to 35%. In the variant with AI, participants also rated the questions as significantly less reasonable. Answers to AI-generated questions contained a lot of statements like "I already said that" and "The obvious AI questions ignored my previous responses." The prevalence of repetition within the same group of questions (the seed question, its follow-up questions, and all of their answers) can be seen as particularly problematic, since the GPT-4 prompt had been provided with all the information available in this context. This demonstrates that a number of the follow-up questions were not sufficiently distinct and lacked the direction that would warrant asking them.

Insights From The Study: Successes And Pitfalls

To summarize the usefulness of AI-generated follow-up questions in usability testing, there are both good and bad points.

Successes:

- Generative AI (GPT-4) excels at refining participant answers with contextual follow-ups.
- The depth of qualitative insights can be enhanced.

Challenges:

- Limited capacity to uncover new issues beyond pre-defined questions.
- Participants can easily grow frustrated with repetitive or generic follow-ups.

While extracting answers that are a bit more elaborate is a benefit, it can easily be overshadowed if the lack of question quality and relevance is too distracting. This can potentially inhibit participants' natural behavior and the relevance of feedback if they're focusing on the AI. Therefore, in the following section, we discuss what to be careful of, whether you are picking an existing AI tool to assist you with unmoderated usability testing or implementing your own AI prompts or even models for a similar purpose.

Recommendations For Practitioners

Context is the end-all and be-all when it comes to the usefulness of follow-up questions.
Most of the issues that we identified with the AI follow-up questions in our study can be tied to the ignorance of proper context in one shape or another. Based on real blunders that GPT-4 made while generating questions in our study, we have collected and organized a list of the types of context that these questions were missing.

Whether you're looking to use an existing AI tool or are implementing your own system to interact with participants in unmoderated studies, you are strongly encouraged to use this list as a high-level checklist. With it as the guideline, you can assess whether the AI models and prompts at your disposal can ask reasonable, context-sensitive follow-up questions before you entrust them with interacting with real participants. Without further ado, these are the relevant types of context:

General Usability Testing Context. The AI should incorporate standard principles of usability testing in its questions. This may appear obvious, and it is. But it needs to be said, given that we encountered issues related to this context in our study. For example, the questions should not be leading, should not ask participants for design suggestions, and should not ask them to predict their future behavior in completely hypothetical scenarios (behavioral research is much more accurate for that).

Usability Testing Goal Context. Different usability tests have different goals depending on the stage of the design, business goals, or features being tested. Each follow-up question and the participant's time used in answering it are valuable resources. They should not be wasted on going off-topic. For example, in our study, we were evaluating a prototype of a website with placeholder photos of a product.
When the AI starts asking participants about their opinion of the displayed fake products, such information is useless to us.

User Task Context. Whether the tasks in your usability testing are goal-driven or open and exploratory, their nature should be properly reflected in follow-up questions. When the participants have freedom, follow-up questions could be useful for understanding their motivations. By contrast, if your AI tool foolishly asks the participants why they did something closely related to the task (e.g., placing the specific item they were supposed to buy into the cart), you will seem just as foolish by association for using it.

Design Context. Detailed information about the tested design (e.g., prototype, mockup, website, app) can be indispensable for making sure that follow-up questions are reasonable. Follow-up questions should require input from the participant; they should not be answerable just by looking at the design. Interesting aspects of the design could also be reflected in the topics to focus on. For example, in our study, the AI would occasionally ask participants why they believed a piece of information that was very prominently displayed in the user interface, making the question irrelevant in context.

Interaction Context. If Design Context tells you what the participant could potentially see and do during the usability test, Interaction Context comprises all their actual actions, including their consequences. This could incorporate the video recording of the usability test, as well as the audio recording of the participant thinking aloud. The inclusion of interaction context would allow follow-up questions to build on the information that the participant already provided and to further clarify their decisions.
For example, if a participant does not successfully complete a task, follow-up questions could be directed at investigating the cause, even as the participant continues to believe they have fulfilled their goal.

Previous Question Context. Even when the questions you ask them are mutually distinct, participants can find logical associations between various aspects of their experience, especially since they don't know what you will ask them next. A skilled moderator may decide to skip a question that a participant has already answered as part of another question, instead focusing on further clarifying the details. AI follow-up questions should be capable of doing the same, to prevent the testing from becoming a repetitive slog.

Question Intent Context. Participants routinely answer questions in a way that misses their original intent, especially if the question is more open-ended. A follow-up can approach the question from another angle to retrieve the intended information. However, if the participant's answer is technically a valid reply, but only to the letter rather than the spirit of the question, the AI can miss this fact. Clarifying the intent could help address this.

When assessing a third-party AI tool, a question to ask is whether the tool allows you to provide all of this contextual information explicitly. If the AI does not have an implicit or explicit source of context, the best it can do is make biased and untransparent guesses that can result in irrelevant, repetitive, and frustrating questions. Even if you can provide the AI tool with the context (or if you are crafting the AI prompt yourself), that does not necessarily mean that the AI will do as you expect, apply the context in practice, and approach its implications correctly.
For example, as demonstrated in our study, when a history of the conversation was provided within the scope of a question group, there was still a considerable amount of repetition. The most straightforward way to test the contextual responsiveness of a specific AI model is simply to converse with it in a way that relies on context. Fortunately, most natural human conversation already depends heavily on context (saying everything explicitly would take too long otherwise), so that should not be too difficult. What is key is focusing on the varied types of context to identify what the AI model can and cannot do.

The seemingly overwhelming number of potential combinations of the various types of context could pose the greatest challenge for AI follow-up questions. For example, human moderators may decide to go against the general rules by asking less open-ended questions to obtain information that is essential for the goals of their research, while also understanding the tradeoffs. In our study, we observed that if the AI asked questions that were too generically open-ended as follow-ups to seed questions that were open-ended themselves, without a significant enough shift in perspective, the result was repetition, irrelevance, and therefore frustration. The fine-tuning of AI models to resolve various types of contextual conflict appropriately could be seen as a reliable metric by which the quality of an AI generator of follow-up questions can be measured.

Researcher control is also key, since tougher decisions that rely on the researcher's vision and understanding should remain firmly in the researcher's hands. Because of this, a combination of static and AI-driven questions with complementary strengths and weaknesses could be the way to unlock richer insights. A focus on contextual sensitivity validation can be seen as even more important when considering the broader social aspects.
Among certain people, trend-chasing and the general overhype of AI by the industry have led to a backlash against AI. AI skeptics have a number of valid concerns, including usefulness, ethics, data privacy, and the environment. Some usability testing participants may be unaccepting or even outwardly hostile toward encounters with AI. Therefore, for the successful incorporation of AI into research, it will be essential to demonstrate it to users as something that is both reasonable and helpful. Principles of ethical research remain as relevant as ever. Data needs to be collected and processed with the participants' consent and must not breach the participants' privacy (e.g., sensitive data should not be used for training AI models without permission).

Conclusion: What's Next For AI In UX?

So, is AI a game-changer that could break down the barrier between moderated and unmoderated usability research? Maybe one day. The potential is certainly there. When AI follow-up questions work as intended, the results are exciting. Participants can become more talkative and clarify potentially essential details. To any UX researcher who's familiar with the feeling of analyzing vaguely phrased feedback and wishing that they could have been there to ask one more question to drive the point home, an automated solution that could do this for them may seem like a dream.

However, we should also exercise caution, since the blind addition of AI without testing and oversight can introduce a slew of biases. This is because the relevance of follow-up questions is dependent on all sorts of context. Humans need to keep holding the reins in order to ensure that the research is based on solid conclusions and intents. The opportunity lies in the synergy that can arise for usability researchers and designers, whose ability to conduct unmoderated usability testing could be significantly augmented.

Humans + AI = Better Insights

The best approach to advocate for is likely a balanced one.
As UX researchers and designers, we should continue to learn how to use AI as a partner in uncovering insights. This article can serve as a jumping-off point, providing a list of the potential weak points of AI-driven techniques to be aware of, to monitor, and to improve on.
  • How OWASP Helps You Secure Your Full-Stack Web Applications
Security can be an intimidating topic for web developers. The vocabulary is rich and full of acronyms. Trends evolve quickly as hackers and analysts play a perpetual cat-and-mouse game. Vulnerabilities stem from little details we cannot afford to spend too much time on during our day-to-day operations. JavaScript developers already have a lot on their plate with the emergence of a new wave of innovative architectures, such as React Server Components, the Next.js App Router, or Astro islands.

So, let's take a focused approach. What we need is to be able to detect and mitigate the most common security issues. A top ten of the most common vulnerabilities would be ideal.

Meet The OWASP Top 10

Guess what: there happens to be such a top ten of the most common vulnerabilities, curated by experts in the field! It is provided by the OWASP Foundation, and it's an extremely valuable resource for getting started with security. OWASP stands for Open Worldwide Application Security Project. It's a nonprofit foundation whose goal is to make software more secure globally. It supports many open-source projects and produces high-quality educational resources, including the OWASP Top 10 vulnerabilities list. We will go through each item of the OWASP Top 10 to understand how to recognize these vulnerabilities in a full-stack application.

Note: I will use Next.js as an example, but this knowledge applies to any similar full-stack architecture, even outside of the JavaScript ecosystem.

Let's start our countdown towards a safer web!

Number 10: Server-Side Request Forgery (SSRF)

You may have heard about Server-Side Rendering, aka SSR. Well, you can consider SSRF to be its evil twin acronym. Server-Side Request Forgery can be summed up as letting an attacker fire requests using your backend server. Besides hosting costs that may rise, the main problem is that the attacker will benefit from your server's level of accreditation.
In a complex architecture, this means being able to target your internal private services using your own corrupted server.

Here is an example: our app lets a user input a URL and summarizes the content of the target page server-side using an AI SDK. A mischievous user passes localhost:3000 as the URL instead of a website they'd like to summarize. Your server will fire a request against itself, or against any other service running on port 3000 in your backend infrastructure. This is a severe SSRF vulnerability! You'll want to be careful when firing requests based on user inputs, especially server-side.

Number 9: Security Logging And Monitoring Failures

I wish we could establish a telepathic connection with our beloved Node.js server running in the backend. Instead, the best thing we have to see what happens in the cloud is a dreadful stream of unstructured pieces of text we call logs. Yet we will have to deal with them, not only for debugging or performance optimization but also because logs are often the only information you'll get to discover and remediate a security issue.

As a starter, you might want to focus on logging the most important transactions of your application, exactly like you would prioritize writing end-to-end tests. In most applications, this means login, signup, payouts, mail sending, and so on. In a bigger company, a more complete telemetry solution is a must-have, such as OpenTelemetry, Sentry, or Datadog. If you are using React Server Components, you may need to set up a proper logging strategy anyway, since it's not possible to debug them directly from the browser as we used to do for client components.

Number 8: Software And Data Integrity Failures

The OWASP Top 10 vulnerabilities tend to have various levels of granularity, and this one is really a big family. I'd like to focus on supply chain attacks, as they have gained a lot of popularity over the years. You may have heard about the Log4j vulnerability.
It was very publicized, very critical, and very exploited by hackers. It's a massive supply chain attack.

In the JavaScript ecosystem, you most probably install your dependencies using npm. Before picking dependencies, you might want to craft yourself a small list of health indicators:

- Is the library maintained and tested, with proper code?
- Does it play a critical role in my application?
- Who is the main contributor?
- Did I spell its name right when installing?

For more serious business, you might want to consider setting up a Software Composition Analysis (SCA) solution; GitHub's Dependabot is a free one, and Snyk and Datadog are other famous actors.

Number 7: Identification And Authentication Failures

Here is a stereotypical vulnerability belonging to this category: your admin password is leaked. A hacker finds it. Boom, game over. Password management procedures are beyond the scope of this article, but in the context of full-stack web development, let's dive deep into how we can prevent brute-force attacks using Next.js edge middleware.

Middlewares are tiny proxies written in JavaScript. They process requests in a way that is supposed to be very, very fast, faster than a normal Node.js endpoint, for example. They are a good fit for handling low-level processing, like blocking malicious IPs or redirecting users towards the correct translation of a page. One interesting use case is rate limiting. You can quickly improve the security of your applications by limiting people's ability to spam your POST endpoints, especially login and signup.

You may go even further by setting up a Web Application Firewall (WAF). A WAF lets developers implement elaborate security rules. This is not something you would set up directly in your application but rather at the host level. For instance, Vercel released its own WAF in 2024.

Number 6: Vulnerable And Outdated Components

We have discussed supply chain attacks earlier.
Outdated components are a variation of this vulnerability where you actually are the person to blame. Sorry about that. Security vulnerabilities are often discovered ahead of time by diligent security analysts before a mean attacker can even start thinking about exploiting them. Thanks, analyst friends! When this happens, they fill out a Common Vulnerabilities and Exposures (CVE) report and store it in a public database. The remedy is the same as for supply chain attacks: set up an SCA solution like Dependabot that will regularly check for the use of vulnerable packages in your application.

Halfway Break

I just want to mention at this point how much progress we have made since the beginning of this article. To sum it up:

- We know how to recognize an SSRF. This is a nasty vulnerability, and it is easy to introduce accidentally while crafting a super cool feature.
- We have identified monitoring and dependency analysis solutions as important pieces of support software for securing applications.
- We have figured out a good use case for Next.js edge middlewares: rate limiting our authentication endpoints to prevent brute-force attacks.

It's a good time to go grab a tea or coffee. But after that, come back with us, because we are going to discover the five most common vulnerabilities affecting web applications!

Number 5: Security Misconfiguration

There are so many configurations that we can mismanage. But let's focus on the most insightful ones for a web developer learning about security: HTTP headers. You can use HTTP response headers to pass on a lot of information to the user's browser about what's possible or not on your website. For example, by narrowing down the Permissions-Policy header, you can declare that your website will never require access to the user's camera. This is an extremely powerful protection mechanism in case of a script injection attack (XSS).
Even if the hacker manages to run a malicious script in the victim's browser, the browser will not allow the script to access the camera. I invite you to inspect the security configuration of any template or boilerplate that you use to craft your own websites. Do you understand it properly? Can you improve it? Answering these questions will inevitably lead you to vastly increase the safety of your websites!

Number 4: Insecure Design

I find this one funny, although a bit insulting for us developers. Bad code is literally the fourth most common cause of vulnerabilities in web applications! You can't just blame your infrastructure team anymore. Design is actually not just about code but about the way we use our programming tools to produce software artifacts. In the context of full-stack JavaScript frameworks, I would recommend learning how to use them idiomatically, the same way you'd want to learn a foreign language. It's not just about translating what you already know word by word; you need to get a grasp of how a native speaker would phrase their thoughts.

Learning idiomatic Next.js is really, really hard. Trust me, I teach this framework to web developers. Next is all about the hybridization of client and server logic, and some patterns may not even transfer to competing frameworks with a different architecture, like Astro or Remix. Fortunately, the Next.js core team has produced many free learning resources, including articles and documentation specifically focused on security. I recommend reading Sebastian Markbåge's famous article "How to Think About Security in Next.js" as a starting point. If you use Next.js in a professional setting, consider organizing proper training sessions before you start working on high-stakes projects.

Number 3: Injection

Injections are the epitome of vulnerabilities, the quintessence of breaches, and the paragon of security issues. SQL injections are typically very famous, but JavaScript injections are also quite common.
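The core defense against injection is treating user input as data, never as markup or code. As a minimal sketch of the principle, here is what escaping user input before interpolating it into HTML looks like; React and most template engines already do this for you automatically.

```javascript
// Minimal sketch of output encoding: escape user input before it is
// interpolated into HTML, so it renders as inert text instead of markup.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, "&amp;") // must run first, before entities are introduced
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// A malicious "comment" trying to smuggle in a script-running image tag:
const comment = '<img src=x onerror="alert(1)">';
const safe = `<p>${escapeHtml(comment)}</p>`;
// `safe` no longer contains a live <img> tag, only harmless escaped text.
```
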
Despite being well-known vulnerabilities, injections are still in the top 3 of the OWASP ranking! Injections are the reason why forcing a React component to render raw HTML is done through an unwelcoming dangerouslySetInnerHTML property. React doesn't want you to include user input that could contain a malicious script.

The screenshot below is a demonstration of an injection using images. It could target a message board, for instance. The attacker misused the image posting system: they passed a URL that points towards an API GET endpoint instead of an actual image. Whenever your website's users see this post in their browser, an authenticated request is fired against your backend, triggering a payment! As a bonus, having a GET endpoint that triggers side effects such as payment also constitutes a risk of Cross-Site Request Forgery (CSRF, which happens to be SSRF's client-side cousin).

Even experienced developers can be caught off guard. Are you aware that dynamic route parameters are user inputs? For instance, [language]/page.jsx in a Next.js or Astro app. I often see clumsy attack attempts when logging them, like "language" being replaced by a path traversal such as ../../../../passwords.txt.

Zod is a very popular library for running server-side validation of user inputs. You can add a transform step to sanitize inputs included in database queries, or inputs that could land in places where they end up being executed as code.

Number 2: Cryptographic Failures

A typical discussion between two developers who are in deep, deep trouble: "We have leaked our database and encryption key. What algorithm was used to encrypt the passwords again? AES-128 or SHA-512?" "I don't know, aren't they the same thing? They transform passwords into gibberish, right?" "Alright. We are in deep, deep trouble."

This vulnerability mostly concerns backend developers who have to deal with sensitive personally identifiable information (PII) or passwords.
To be honest, I don't know much about these algorithms; I studied computer science way too long ago. The one thing I remember is that you need non-reversible algorithms, aka hashing algorithms, to store passwords. The point is that if the hashed passwords are leaked, and the encryption key is also leaked, it will still be very hard to hack an account (you can't just reverse the hashing). In the State of JavaScript survey, we use passwordless authentication with an email magic link and one-way hashed emails, so even as admins, we cannot guess a user's email in our database.

And number 1 is...

Such suspense! We are about to discover that the number one vulnerability in the world of web development is... Broken Access Control! Tada.

Yeah, the name is not super insightful, so let me rephrase it: it's about people being able to access other people's accounts, or people being able to access resources they are not allowed to. That's more impressive when put this way. A while ago, I wrote an article about the fact that checking authorization within a layout may leave page content unprotected in Next.js. It's not a flaw in the framework's design but a consequence of how React Server Components have a different model than their client counterparts, which then affects how layouts work in Next. Here is a demo of how you can implement a paywall in Next.js that doesn't protect anything:

// app/layout.jsx
// Using cookie-based authentication as usual
async function checkPaid() {
  const token = cookies.get("auth_token");
  return await db.hasPayments(token);
}

// Running the payment check in a layout to apply it to all pages.
// Sadly, this is not how Next.js works!
export default async function Layout({ children }) {
  // this won't work as expected!!
  const hasPaid = await checkPaid();
  if (!hasPaid) redirect("/subscribe");
  // then render the underlying page
  return <div>{children}</div>;
}

// This page can be accessed directly
// by adding RSC=1 to the request that fetches it!
export default function Page() {
  return <div>PAID CONTENT</div>;
}

What We Have Learned From The Top 5 Vulnerabilities

The most common vulnerabilities are tightly related to application design issues:

- Copy-pasting configuration without really understanding it.
- Having an improper understanding of the inner workings of the framework we use. Next.js is a complex beast and doesn't make our life easier on this point!
- Picking an algorithm that is not suited for a given task.

These vulnerabilities are tough ones because they confront us with our own limits as web developers. Nobody is perfect, and the most experienced developers will inevitably write vulnerable code at some point in their lives without even noticing. How to prevent that? By not staying alone! When in doubt, ask fellow developers; there is a great chance that someone has faced the same issues and can lead you to the right solutions.

Where To Head Now?

First, I must insist that you have already done a great job of improving the security of your applications by reading this article. Congratulations! Most hackers rely on a volume strategy and are not particularly skilled, so they are really in pain when confronted with educated developers who can spot and fix the most common vulnerabilities. From there, I can suggest a few directions to get even better at securing your web applications:

- Try to apply the OWASP Top 10 to an application you know well, either a personal project, your company's codebase, or an open-source solution.
- Give a shot at some third-party security tools.
They tend to overflow developers with too much information, but keep in mind that most actors in the field of security are aware of this issue and work actively to provide more focused vulnerability alerts. I've added my favorite security-related resources at the end of the article, so you'll have plenty to read!

Thanks for reading, and stay secure!

Resources For Further Learning

- An interactive demo of an SSRF in a Next.js app and how to fix it
- OWASP Top 10
- An SSRF vulnerability that affected the Next.js image optimization system
- Observe React Server Components using OpenTelemetry
- OpenTelemetry, an open-source telemetry standard
- The Log4j vulnerability
- Setting up rate limiting in a middleware using a Redis service
- Vercel WAF announcement
- MITRE CVE database
- An interactive demo of a CSRF vulnerability in a Next.js app and how to fix it
- A super complete guide on authentication specifically targeting web apps
- Server form validation with zod in Next.js (Astro has it built in)
- Sanitization with zod
- Secure statically rendered paid content in Next.js, and how layouts are a bad place to run authentication checks
- Smashing Magazine articles related to security (almost 50 matches at the time of writing!)

This article is inspired by my talk at React Advanced London 2024, "Securing Server-Rendered Applications: Next.js Case," which is available to watch as a replay online.
  • How To Test And Measure Content In UX
Content testing is a simple way to test the clarity and understanding of the content on a page, be it a paragraph of text, a user flow, a dashboard, or anything in between. Our goal is to understand how well users actually perceive the content that we present to them. It's not only about finding pain points and things that cause confusion or hinder users from finding the right answer on a page, but also about whether our content clearly and precisely articulates what we actually want to communicate.

This article is part of our ongoing series on UX. You can find more details on design patterns and UX strategy in Smart Interface Design Patterns, with live UX training coming up soon. Free preview.

Banana Testing

A great way to test how well your design matches a user's mental model is banana testing: we replace all key actions with the word "banana," then ask users to suggest what each action could prompt. Not only does it tell you whether key actions are understood immediately and whether they are in the right place, but also whether your icons are helpful and whether interactive elements such as links or buttons are perceived as such.

Content Heatmapping

One reliable technique to assess content is content heatmapping. The way we would use it is by giving participants a task, then asking them to highlight things that are clear or confusing. We could define other dimensions or style lenses as well: e.g., phrases that inspire more confidence and less confidence. Then we map all highlights into a heatmap to identify patterns and trends. You could run it with print-outs in person, but it could also happen remotely in FigJam or Miro, as long as your tool of choice has a highlighter feature.

Run Moderated Testing Sessions

The little techniques above help you discover content issues, but they don't tell you what is missing in the content and what doubts, concerns, and issues users have with it.
For that, we need to uncover user needs in more detail. Too often, users say that a page is clear and well-organized, but when you ask them specific questions, you notice that their understanding is vastly different from what you were trying to bring into the spotlight. Such insights rarely surface in unmoderated sessions; it's much more effective to observe behavior and ask questions on the spot, be it in person or remote.

Test Concepts, Not Words
Before testing, we need to know what we want to learn. First, write up a plan with goals, customers, questions, and a script. Don't tweak words alone; broader is better. In the session, avoid having users speak aloud, as it's usually not how people consume content. Ask questions and wait silently. After the task is completed, ask users to explain the product, flow, and concepts to you. But: don't ask them what they like, prefer, feel, or think. And whenever possible, avoid the word "content" in testing, as users often perceive it differently.

Choosing The Right Way To Test
There are plenty of different tests that you could use:

- Banana test: Replace key actions with bananas, ask users to explain them.
- Cloze test: Remove words from your copy, ask users to fill in the blanks.
- Reaction cards: Write up emotions on 25 cards, ask users to choose.
- Card sorting: Ask users to group topics into meaningful categories.
- Highlighting: Ask users to highlight helpful or confusing words.
- Competitive testing: Ask users to explain competitors' pages.

When choosing the right way to test, consider the following guidelines:

- Do users understand? Interviews, highlighting, Cloze test.
- Do we match the mental model? Banana testing, Cloze test.
- What word works best? Card sorting, A/B testing, tree testing.
- Why doesn't it work? Interviews, highlighting, walkthroughs.
- Do we know user needs? Competitive testing, process mapping.

Wrapping Up
In many tasks, there is rarely anything more impactful than the careful selection of words on a page.
However, it's not only the words alone that are being used, but the voice and tone that you choose to communicate with customers. Use the techniques above to test and measure how well people perceive content, but also check how they perceive the end-to-end experience on the site. Quite often, the right words used incorrectly on a key page can convey a wrong message or provide a suboptimal experience. Even though the rest of the product might perform remarkably well, if a user is blocked on a critical page, they will be gone before you even blink.

Useful Resources
- Practical Guide To Content Testing, by Intuit
- How To Test Content With Users, by Kate Moran
- Five Fun Ways To Test Words, by John Saito
- A Simple Technique For Evaluating Content, by Pete Gale

New: How To Measure UX And Design Impact
Meet Measure UX & Design Impact (8h), a new practical guide for designers and UX leads to measure and show your UX impact on business. Use the code IMPACT to save 20% off today. Jump to the details.

- Video + UX Training: $495.00 (instead of $799.00). 25 video lessons (8h) + Live UX Training. 100 days money-back guarantee.
- Video only: $250.00 (instead of $395.00). 25 video lessons (8h), updated yearly. Also available as a UX Bundle with 2 video courses.
  • Time To First Byte: Beyond Server Response Time
    smashingmagazine.com
This article is sponsored by DebugBear. Loading your website's HTML quickly has a big impact on visitor experience. After all, no page content can be displayed until after the first chunk of the HTML has been loaded. That's why the Time to First Byte (TTFB) metric is important: it measures how soon after navigation the browser starts receiving the HTML response.

Generating the HTML document quickly plays a big part in minimizing TTFB delays. But actually, there's a lot more to optimizing this metric. In this article, we'll take a look at what else can cause poor TTFB and what you can do to fix it.

What Components Make Up The Time To First Byte Metric?
TTFB stands for Time to First Byte. But where does it measure from? Different tools handle this differently. Some only count the time spent sending the HTTP request and getting a response, ignoring everything else that needs to happen first before the resource can be loaded. However, when looking at Google's Core Web Vitals, TTFB starts from the time when the user starts navigating to a new page. That means TTFB includes:

- Cross-origin redirects,
- Time spent connecting to the server,
- Same-origin redirects, and
- The actual request for the HTML document.

We can see an example of this in this request waterfall visualization. The server response time here is only 183 milliseconds, or about 12% of the overall TTFB metric. Half of the time is instead spent on a cross-origin redirect, a separate HTTP request that returns a redirect response before we can even make the request that returns the website's HTML code.
And when we make that request, most of the time is spent on establishing the server connection. Connecting to a server on the web typically takes three round trips on the network:

- DNS: Looking up the server IP address.
- TCP: Establishing a reliable connection to the server.
- TLS: Creating a secure, encrypted connection.

What Network Latency Means For Time To First Byte
Let's add up all the network round trips in the example above:

- 2 server connections: 6 round trips.
- 2 HTTP requests: 2 round trips.

That means that before we even get the first response byte for our page, we actually have to send data back and forth between the browser and a server eight times! That's where network latency comes in, or network round trip time (RTT) if we look at the time it takes to send data to a server and receive a response in the browser. On a high-latency connection with a 150-millisecond RTT, making those eight round trips will take 1.2 seconds. So, even if the server always responds instantly, we can't get a TTFB lower than that number.

Network latency depends a lot on the geographic distance between the visitor's device and the server the browser is connecting to. You can see the impact of that in practice by running a global TTFB test on a website. Here, I've tested a website that's hosted in Brazil. We get good TTFB scores when testing from Brazil and the US East Coast. However, visitors from Europe, Asia, or Australia wait a while for the website to load.

What Content Delivery Networks Mean For Time To First Byte
One way to speed up your website is by using a Content Delivery Network (CDN). These services provide a network of globally distributed server locations. Instead of each round trip going all the way to where your web application is hosted, browsers instead connect to a nearby CDN server (called an edge node).
That greatly reduces the time spent on establishing the server connection, improving your overall TTFB metric. By default, the actual HTML request still has to be sent to your web app. However, if your content isn't dynamic, you can also cache responses at the CDN edge node. That way, the request can be served entirely through the CDN instead of data traveling all across the world. If we run a TTFB test on a website that uses a CDN, we can see that each server response comes from a regional data center close to where the request was made. In many cases, we get a TTFB of under 200 milliseconds, thanks to the response already being cached at the edge node.

How To Improve Time To First Byte
What you need to do to improve your website's TTFB score depends on what its biggest contributing component is:

- A lot of time is spent establishing the connection: Use a global CDN.
- The server response is slow: Optimize your application code or cache the response.
- Redirects delay TTFB: Avoid chaining redirects and optimize the server returning the redirect response.

Keep in mind that TTFB depends on how visitors are accessing your website. For example, if they are logged into your application, the page content probably can't be served from the cache. You may also see a spike in TTFB when running an ad campaign, as visitors are redirected through a click-tracking server.

Monitor Real User Time To First Byte
If you want to get a breakdown of what TTFB looks like for different visitors on your website, you need real user monitoring. That way, you can break down how visitor location, login status, or the referrer domain impact real user experience. DebugBear can help you collect real user metrics for Time to First Byte, Google Core Web Vitals, and other page speed metrics.
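If you want to compute this kind of breakdown yourself in the browser, the standard PerformanceNavigationTiming entry exposes the relevant timestamps. Here is a minimal sketch; the helper name is mine, and in a real page you would obtain the entry via performance.getEntriesByType('navigation')[0]:

```javascript
// Split a PerformanceNavigationTiming-like entry into TTFB components.
// All timestamps are in milliseconds relative to navigation start.
function ttfbBreakdown(entry) {
  return {
    redirect: entry.redirectEnd - entry.redirectStart,
    dns: entry.domainLookupEnd - entry.domainLookupStart,
    connect: entry.connectEnd - entry.connectStart, // includes TLS
    request: entry.responseStart - entry.requestStart,
    ttfb: entry.responseStart, // time to first byte of the response
  };
}

// Example with made-up timings:
console.log(ttfbBreakdown({
  redirectStart: 0, redirectEnd: 250,
  domainLookupStart: 250, domainLookupEnd: 300,
  connectStart: 300, connectEnd: 450,
  requestStart: 450, responseStart: 500,
}));
// → { redirect: 250, dns: 50, connect: 150, request: 50, ttfb: 500 }
```

Sending these numbers to your analytics backend is what real user monitoring tools do under the hood.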
You can track individual TTFB components, like TCP duration or redirect time, and break down website performance by country, ad campaign, and more.

Conclusion
By looking at everything that's involved in serving the first byte of a website to a visitor, we've seen that just reducing server response time isn't enough and often won't even be the most impactful change you can make on your website. Just because your website is fast in one location doesn't mean it's fast for everyone, as website speed varies based on where the visitor is accessing your site from. Content Delivery Networks are an incredibly powerful way to improve TTFB. Even if you don't use any of their advanced features, just using their global server network saves a lot of time when establishing a server connection.
  • How I Created A Popular WordPress Theme And Coined The Term "Hero Section" (Without Realizing It)
    smashingmagazine.com
I don't know how it is for other designers, but when I start a new project, there's always this moment where I just sit there and stare. Nothing. No idea. Empty. People often think that creativity is some kind of magic that suddenly comes out of nowhere, like a lightning strike from the sky. But I can tell you that's not how it works, at least not for me. I've learned how to hack my creativity. It's no longer random but more like a process. And one part of that process led me to create what we now call the Hero Section.

The Birth Of The Hero Section
If I'm being honest, I don't even know exactly how I came up with the name "Hero". It felt more like an epiphany than a conscious decision. At the time, I was working on the Brooklyn theme, and Bootstrap was gaining popularity. I wasn't a huge fan of Bootstrap, not because it's bad, but because I found it more complicated to work with than writing my own CSS. Ninety-five percent of the CSS and HTML in Brooklyn is custom-written, devoid of any framework. But there was one part of Bootstrap that stuck with me: the Jumbotron class. The name felt a bit odd, but I understood its purpose: to create something big and attention-grabbing. That stuck in my mind, and like lightning, the word "Hero" came to me.

Why "Hero"? A hero is a figure that demands attention. It's bold, strong, and memorable, which is everything I wanted Brooklyn's intro section to be. At first, I envisioned a Hero Button. Still, I realized the concept could be much broader: it could encompass the entire intro section, setting the tone for the website and drawing the visitor's focus to the most important message. The term "Banner" was another option, but it felt generic and uninspired. A Hero, on the other hand, is a force to reckon with. So, I committed to the idea.

From Banner To Hero Section
Back in 2013, most websites called their intro sections a "Banner" or "Header". At best, you'd see a single image with a title, maybe a subtitle, and a button.
Sliders were also popular, cycling through multiple banners with different content. But I wanted Brooklyn's intro to be more than just a banner; it had to make a lasting impression. So, I redefined it:

- HTML Structure: I named the section <section class="hero">. This wasn't just a banner or a slider; it was a Hero Section.
- CSS Customization: Everything within the section followed the Hero concept: .hero-slogan, .hero-title, .hero-description, .hero-btn. I coded it all from scratch, making sure it had a cohesive and distinct identity.
- Marketing Language: I didn't stop at the code. I used the word "Hero" everywhere, including Brooklyn's documentation, the theme description, the landing page, and the featured images.

At the time, Brooklyn was attracting tens of thousands of visitors per day on ThemeForest, which is the storefront I use to make the theme available for sale. It quickly became a top seller, selling like hotcakes. Naturally, people started asking, "What's a Hero Section?" It was a new term, and I loved explaining the concept. The Hero Section had become sort of like a hook that made Brooklyn more alluring, and we sold a lot of copies of the theme because of it.

What I Didn't Know About The Hero's Future
At the time, I intentionally used the term "Hero" in Brooklyn's code and marketing because I wanted it to stand out. I made sure it was everywhere: in the <section> tags, in class names like .hero-title and .hero-description, and on Brooklyn's landing page and product description. But honestly, I didn't realize just how big the term would become. I wasn't thinking about carving it into stone or reserving it as something unique to Brooklyn. That kind of forward thinking wasn't on my radar back then. All I wanted was to grab attention and make Brooklyn stand out. Over time, we kept adding new variations to the Hero Section. For example, we introduced the Hero Video, allowing users to add video backgrounds to their Heroes, something that felt bold and innovative at the time.
We also added the Hero Slider, a simple image slider within the Hero Section, giving users more flexibility to create dynamic intros. Brooklyn even had a small Hero Builder integrated directly into the theme, something I believe is still unique to this day. Looking back, it's clear I missed an opportunity to cement the Hero Section as a signature feature of Brooklyn. Once I saw other authors adopting the term, I stopped emphasizing Brooklyn's role in popularizing it. I thought the concept spoke for itself.

How The Hero Went Mainstream
One of the most fascinating things about the Hero Section is how quickly the term caught on. Brooklyn's popularity gave the Hero Section massive exposure. Designers and developers started noticing it, and soon, other theme authors began adopting the term in their products. Brooklyn wasn't just another theme. It was one of the top sellers on ThemeForest, the world's largest marketplace for digital goods, with millions of users. And I didn't just use the term "Hero" once or twice; I used it everywhere: descriptions, featured images, and documentation. I made sure people saw it. Before long, I noticed that more and more themes used the term to describe large intro sections in their work. Today, the Hero Section is everywhere. It's a standard in web design recognized by designers and developers worldwide. While I can't say I invented the concept, I'm proud to have played a key role in bringing it into the mainstream.

Lessons From Building A Hero
Creating the Hero Section taught me a lot about design, creativity, and marketing. Here are the key takeaways:

- Start Simple: The Hero Section started as a simple idea, a way to focus attention. You don't need a complex plan to create something impactful.
- Commit to Your Ideas: Once I decided on the term "Hero", I committed to it in the code, the design, and the marketing. Consistency made it stick.
- Bold Names Matter: Naming the section "Hero" instead of "Banner" gave it a personality and purpose.
Names can define how users perceive a design.
- Constantly Evolve: Adding features like the Hero Video and Hero Slider kept the concept fresh and adaptable to user needs.
- Don't Ignore Your Role: If you introduce something new, own it. I should have continued promoting Brooklyn as a Hero pioneer to solidify its legacy.

Inspiration Isn't Magic; It's Hard Work
Inspiration often comes from unexpected places. For me, it came from questioning a Bootstrap class name and reimagining it into something new. The Hero Section wasn't just a product of creative brilliance; it was the result of persistence, experimentation, and a bit of luck. What's the one element you've created that you're most proud of? I'd love to hear your stories in the comments below!
  • Taking RWD To The Extreme
    smashingmagazine.com
When Ethan Marcotte conceived RWD, web technologies were far less mature than today. As web developers, we started to grasp how to do things with floats after years of stuffing everything inside table cells. There weren't many possible ways to achieve a responsive site. There were two of them: fluid grids (based on percentages) and media queries, which were a hot new thing back then. What was lacking was a real layout system that would allow us to lay things out on a page instead of improvising with floating content. We had to wait several years for Flexbox to appear. And CSS Grid followed that.

Undoubtedly, new layout systems native to the browser were groundbreaking 10 years ago. They were revolutionary enough to usher in a new era. In her talk "Everything You Know About Web Design Just Changed" at the An Event Apart conference in 2019, Jen Simmons proposed a name for it: Intrinsic Web Design (IWD). Let's disarm that fancy word first. According to the Merriam-Webster dictionary, "intrinsic" means "belonging to the essential nature or constitution of a thing". In other words, IWD is a natural way of doing design for the web. And that boils down to using CSS layout systems for laying out things. That's it.

It does not sound that groundbreaking on its own. But it opens a lot of possibilities that weren't available earlier with float-based or table-based layouts. We got the best things from both worlds: two-dimensional layouts (like tables with their rows and columns) with wrapping abilities (like floating content when there is not enough space for it). And there are even more goodies, like mixing fixed-sized content with fluid-sized content or intentionally overlapping elements. Native layout systems are here to make the browser work for you; don't hesitate to use that to your advantage.

Start With Semantic HTML
HTML is the backbone of the web. It's the language that structures and formats the content for the user.
And it comes with a huge bonus: it loads and displays to the user, even if CSS and JavaScript fail to load for whatever reason. In other words, the website should still make sense to the user even if the CSS that provides the layout and the JavaScript that provides the interactivity are no-shows. A website is a text document, not so different from the one you can create in a text processor, like Word or LibreOffice Writer. Semantic HTML also provides important accessibility features, like headings that are often used by screen-reader users for navigating pages. This is why starting not just with any markup but semantic markup for meaningful structure is a crucial step to embracing native web features.

Use Fluid Type With Fluid Space
We often need to adjust the font size of our content when the screen size changes. Smaller screens mean being able to display less content, and larger screens provide more affordance for additional content. This is why we ought to make content as fluid as possible, by which I mean the content should automatically adjust based on the screen's size. A fluid typographic system optimizes the content's legibility when it's being viewed in different contexts. Nowadays, we can achieve truly fluid type with one line of CSS, thanks to the clamp() function:

font-size: clamp(1rem, calc(1rem + 2.5vw), 6rem);

The maths involved in it goes quite above my head. Thankfully, there is a detailed article on fluid type by Adrian Bece here on Smashing Magazine, and Utopia, a handy tool for doing the maths for us. But beware, there be dragons! Or at least possible accessibility issues. By limiting the maximum font size, we could break the ability to zoom the text content, violating one of the WCAG's requirements (though there are ways to address that). Fortunately, fluid space is much easier to grasp: if gaps (margins) between elements are defined in font-dependent units (like rem or em), they will scale alongside the font size.
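Returning to that clamp() one-liner for a moment: the calculation the browser performs can be sketched in a few lines of JavaScript. This is just an illustration of the formula, assuming the default of 1rem = 16px:

```javascript
// Evaluate font-size: clamp(1rem, calc(1rem + 2.5vw), 6rem)
// for a given viewport width, assuming the default 1rem = 16px.
function fluidFontSizePx(viewportWidthPx, remPx = 16) {
  const preferred = remPx + viewportWidthPx * 0.025; // 1rem + 2.5vw
  return Math.min(Math.max(preferred, 1 * remPx), 6 * remPx);
}

console.log(fluidFontSizePx(320));  // → 24 (still growing with the viewport)
console.log(fluidFontSizePx(4000)); // → 96 (capped at 6rem)
```

The outer bounds are what clamp() contributes; the middle calc() expression is what makes the size fluid in between.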
Yet rest assured, there are also caveats.

Always Bet On Progressive Enhancement
Yes, that's this over-20-year-old technique for creating web pages. And it's still relevant today in 2025. Many interesting features have limited availability, like cross-page view transitions. They won't work for every user, but enabling them is as simple as adding one line of CSS:

@view-transition { navigation: auto; }

It won't work in some browsers, but it also won't break anything. And if some browser catches up with the standard, the code is already there, and view transitions start to work in that browser on your website. It's sort of like opting into the feature when it's ready. That's progressive enhancement at its best: allowing you to make your stairs into an escalator whenever it's possible. It applies to many more things in CSS (an unsupported grid is just a flow layout, an unsupported masonry layout is just a grid, and so on) and other web technologies.

Trust The Browser
Trust it because it knows much more about how safe it is for users to surf the web. Besides, it's a computer program, and computer programs are pretty good at calculating things. So instead of calculating all these breakpoints ourselves, take their helping hand and allow them to do it for you. Just give them some constraints. Make that <main> element no wider than 60 characters and no narrower than 20 characters, and then relax, watching the browser make it 37 characters on some super rare viewport you've never encountered before. It Just Works. But trusting the browser also means trusting the open web. After all, these algorithms responsible for laying things out are all parts of the standards.

Ditch The Physical CSS
That's a bonus point from me. Layout systems introduced the concept of logical CSS. Flexbox does not have a notion of a left or right side; it has a start and an end. And that way of thinking lurked into other areas of CSS, creating the whole CSS Logical Properties and Values module.
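To make the physical-versus-logical distinction concrete, here is a small sketch (the .card selector and values are just examples): logical properties name directions relative to the content flow, so they adapt automatically to right-to-left scripts and vertical writing modes.

```css
/* Physical properties name screen directions; logical ones name
   content-flow directions, so they adapt to writing mode and direction. */
.card {
  inline-size: 60ch;              /* instead of width */
  margin-inline-start: 1rem;      /* instead of margin-left */
  padding-block: 2rem 1rem;       /* instead of padding-top/padding-bottom */
  border-start-start-radius: 4px; /* instead of border-top-left-radius */
}
```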
After working more with layout systems, logical CSS seems much more intuitive than the old physical one. It also has at least one advantage over the old way of doing things: it works far better with internationalized content. And I know that sounds crazy, but it forces a change in thinking about websites. If you don't know the most basic information about your content (the font size), you can't really apply any concrete numbers to your layout. You can only think in ratios. If the font size equals one unit, your heading could equal 2 units, the main column 60, some text input 10, and so on. This way, everything should work out with any font size and, by extension, scale up with any font size.

We've already been doing that with layout systems: we allow them to work on ratios and figure out how big each part of the layout should be. And we've also been doing that with rem and em units for scaling things up depending on font size. The only thing left is to completely forget the 1rem = 16px equation and fully embrace the exciting shores of unknown dimensions. But that sort of mental shift comes with one not-so-straightforward consequence. Not setting the font size and working with the user-provided one instead fully moves the power from the web developer to the browser and, effectively, the user. And the browser can provide us with far more information about user preferences.

Thanks to modern CSS, we can respond to these things. For example, we can switch to dark mode if the user prefers one, we can limit motion if the user requests it, we can make clickable areas bigger if the device has a touch screen, and so on. By having this kind of dialogue with the browser, exchanging information (it gives us data on the user, and we give it hints on how to display our content), we empower the user in the result. The content would be displayed in the way they want. That makes our website far more inclusive and accessible. After all, the users know what they need best.
If they set the default font size to 64 pixels, they would be grateful if we respected that value. We don't know why they did it (maybe they have some kind of vision impairment, or maybe they simply have a screen far away from them); we only know they did it, and we respect that. And that's responsive design for me.
  • Integrations: From Simple Data Transfer To Modern Composable Architectures
    smashingmagazine.com
This article is sponsored by Storyblok. When computers first started talking to each other, the methods were remarkably simple. In the early days of the Internet, systems exchanged files via FTP or communicated via raw TCP/IP sockets. This direct approach worked well for simple use cases but quickly showed its limitations as applications grew more complex.

```python
# Basic socket server example
import socket

server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.bind(('localhost', 12345))
server_socket.listen(1)

while True:
    connection, address = server_socket.accept()
    data = connection.recv(1024)
    # Process data
    connection.send(response)
```

The real breakthrough in enabling complex communication between computers on a network came with the introduction of Remote Procedure Calls (RPC) in the 1980s. RPC allowed developers to call procedures on remote systems as if they were local functions, abstracting away the complexity of network communication. This pattern laid the foundation for many of the modern integration approaches we use today. At its core, RPC implements a client-server model: the client prepares and serializes a procedure call with parameters and sends the message to a remote server; the server deserializes and executes the procedure, then sends the response back to the client. Here's a simplified example using Python's XML-RPC.

```python
# Server
from xmlrpc.server import SimpleXMLRPCServer

def calculate_total(items):
    return sum(items)

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_function(calculate_total)
server.serve_forever()
```

```python
# Client
import xmlrpc.client

proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
try:
    result = proxy.calculate_total([1, 2, 3, 4, 5])
except ConnectionError:
    print("Network error occurred")
```

RPC can operate in both synchronous (blocking) and asynchronous modes. Modern implementations such as gRPC support streaming and bi-directional communication.
In the example below, we define a gRPC service called Calculator with two RPC methods: Calculate, which takes a Numbers message and returns a Result message, and CalculateStream, which sends back a stream of Result messages.

```protobuf
// protobuf
service Calculator {
  rpc Calculate(Numbers) returns (Result);
  rpc CalculateStream(Numbers) returns (stream Result);
}
```

Modern Integrations: The Rise Of Web Services And SOA
The late 1990s and early 2000s saw the emergence of Web Services and Service-Oriented Architecture (SOA). SOAP (Simple Object Access Protocol) became the standard for enterprise integration, introducing a more structured approach to system communication.

```xml
<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Header>
  </soap:Header>
  <soap:Body>
    <m:GetStockPrice xmlns:m="http://www.example.org/stock">
      <m:StockName>IBM</m:StockName>
    </m:GetStockPrice>
  </soap:Body>
</soap:Envelope>
```

While SOAP provided robust enterprise features, its complexity and verbosity led to the development of simpler alternatives, especially the REST APIs that dominate Web services communication today. But REST is not alone. Let's have a look at some modern integration patterns.

RESTful APIs
REST (Representational State Transfer) has become the de facto standard for Web APIs, providing a simple, stateless approach to manipulating resources.
Its simplicity and HTTP-based nature make it ideal for web applications. First defined by Roy Fielding in 2000 as an architectural style on top of the Web's standard protocols, its constraints align perfectly with the goals of the modern Web, such as performance, scalability, reliability, and visibility: client and server separated by an interface and loosely coupled, stateless communication, cacheable responses. In modern applications, the most common implementations of the REST protocol are based on the JSON format, which is used to encode messages for requests and responses.

```javascript
// Request
async function fetchUserData() {
  const response = await fetch('https://api.example.com/users/123');
  const userData = await response.json();
  return userData;
}
```

```json
{
  "id": "123",
  "name": "John Doe",
  "_links": {
    "self": { "href": "/users/123" },
    "orders": { "href": "/users/123/orders" },
    "preferences": { "href": "/users/123/preferences" }
  }
}
```

GraphQL
GraphQL emerged from Facebook's internal development needs in 2012 before being open-sourced in 2015. Born out of the challenges of building complex mobile applications, it addressed limitations in traditional REST APIs, particularly the issues of over-fetching and under-fetching data. At its core, GraphQL is a query language and runtime that provides a type system and declarative data fetching, allowing the client to specify exactly what it wants to fetch from the server.

```graphql
type User {
  id: ID!
  name: String!
  email: String!
  posts: [Post!]!
}

type Post {
  id: ID!
  title: String!
  content: String!
  author: User!
  publishDate: String!
}

query GetUserWithPosts {
  user(id: "123") {
    name
    posts(last: 3) {
      title
      publishDate
    }
  }
}
```

Often used to build complex UIs with nested data structures, mobile applications, or microservices architectures, it has proven effective at handling complex data requirements at scale and offers a growing ecosystem of tools.

Webhooks
Modern applications often require real-time updates.
For example, e-commerce apps need to update inventory levels when a purchase is made, or content management apps need to refresh cached content when a document is edited. Traditional request-response models can struggle to meet these demands because they rely on clients polling servers for updates, which is inefficient and resource-intensive. Webhooks and event-driven architectures address these needs more effectively. Webhooks let servers send real-time notifications to clients or other systems when specific events happen. This reduces the need for continuous polling. Event-driven architectures go further by decoupling application components. Services can publish and subscribe to events asynchronously, and this makes the system more scalable, responsive, and simpler.

```javascript
import fastify from 'fastify';

const server = fastify();

server.post('/webhook', async (request, reply) => {
  const event = request.body;
  if (event.type === 'content.published') {
    await refreshCache();
  }
  return reply.code(200).send();
});
```

This is a simple Node.js snippet that uses Fastify to set up a web server. It responds to the endpoint /webhook, checks the type field of the JSON request body, and refreshes a cache if the event is of type content.published. With all this background information and technical knowledge, it's easier to picture the current state of web application development, where a single, monolithic app is no longer the answer to business needs, but a new paradigm has emerged: Composable Architecture.

Composable Architecture And Headless CMSs
This evolution has led us to the concept of composable architecture, where applications are built by combining specialized services. This is where headless CMS solutions have a clear advantage, serving as the perfect example of how modern integration patterns come together. Headless CMS platforms separate content management from content presentation, allowing you to build specialized frontends relying on a fully-featured content backend.
This decoupling facilitates content reuse, independent scaling, and the flexibility to use a dedicated technology or service for each part of the system. Take Storyblok as an example. Storyblok is a headless CMS designed to help developers build flexible, scalable, and composable applications. Content is exposed via API, REST, or GraphQL; it offers a long list of events that can trigger a webhook. Editors are happy with a great Visual Editor, where they can see changes in real time, and many integrations are available out of the box via a marketplace. Imagine this ContentDeliveryService in your app, where you can interact with Storyblok's REST API using the open-source JS client:

```typescript
import StoryblokClient from "storyblok-js-client";

class ContentDeliveryService {
  constructor(private storyblok: StoryblokClient) {}

  async getPageContent(slug: string) {
    const { data } = await this.storyblok.get(`cdn/stories/${slug}`, {
      version: 'published',
      resolve_relations: 'featured-products.products'
    });
    return data.story;
  }

  async getRelatedContent(tags: string[]) {
    const { data } = await this.storyblok.get('cdn/stories', {
      version: 'published',
      with_tag: tags.join(',')
    });
    return data.stories;
  }
}
```

The last piece of the puzzle is a real example of integration. Again, many are already available in the Storyblok marketplace, and you can easily control them from the dashboard. However, to fully leverage the Composable Architecture, we can use the most powerful tool in the developer's hands: code. Let's imagine a modern e-commerce platform that uses Storyblok as its content hub, Shopify for inventory and orders, Algolia for product search, and Stripe for payments. Once each account is set up and we have our access tokens, we could quickly build a front-end page for our store.
This isn't production-ready code, but just to get a quick idea, let's use React to build the page for a single product that integrates our services. First, we initialize our clients:

import StoryblokClient from "storyblok-js-client";
import { algoliasearch } from "algoliasearch";
import Client from "shopify-buy";

const storyblok = new StoryblokClient({
  accessToken: "your_storyblok_token",
});

const algoliaClient = algoliasearch(
  "your_algolia_app_id",
  "your_algolia_api_key",
);

const shopifyClient = Client.buildClient({
  domain: "your-shopify-store.myshopify.com",
  storefrontAccessToken: "your_storefront_access_token",
});

Given that we created a blok in Storyblok that holds product information such as the product_id, we could write a component that takes the productSlug, fetches the product content from Storyblok, the inventory data from Shopify, and some related products from the Algolia index:

async function fetchProduct() {
  // Get the product from Storyblok
  const { data } = await storyblok.get(`cdn/stories/${productSlug}`);

  // Fetch inventory from Shopify
  const shopifyInventory = await shopifyClient.product.fetch(
    data.story.content.product_id
  );

  // Fetch related products from the Algolia "products" index
  const { hits } = await algoliaClient.searchSingleIndex({
    indexName: "products",
    searchParams: { filters: `category:${data.story.content.category}` },
  });
}

We could then set a simple component state:

const [productData, setProductData] = useState(null);
const [inventory, setInventory] = useState(null);
const [relatedProducts, setRelatedProducts] = useState([]);

useEffect(() => {
  // ...
  // combine fetchProduct() with setState to update the state
  // ...
  fetchProduct();
}, [productSlug]);

And return a template with all our data:

<h1>{productData.content.title}</h1>
<p>{productData.content.description}</p>
<h2>Price: ${inventory.variants[0].price}</h2>
<h3>Related Products</h3>
<ul>
  {relatedProducts.map((product) => (
    <li key={product.objectID}>{product.name}</li>
  ))}
</ul>

We could then use an event-driven approach and create a server that listens to our shop events and processes the checkout with Stripe (credits to Manuel Spigolon for this tutorial):

const stripe = require('stripe')

module.exports = async function plugin (app, opts) {
  const stripeClient = stripe(app.config.STRIPE_PRIVATE_KEY)

  app.post('/create-checkout-session', async (request, reply) => {
    const session = await stripeClient.checkout.sessions.create({
      line_items: [...], // from request.body
      mode: 'payment',
      success_url: "https://your-site.com/success",
      cancel_url: "https://your-site.com/cancel",
    })

    return reply.redirect(303, session.url)
  })
  // ...
}

With this approach, each service is independent of the others, which helps us achieve our business goals (performance, scalability, flexibility) with a good developer experience and a smaller, simpler application that's easier to maintain.

Conclusion

The integration between headless CMSs and modern web services represents the current and future state of high-performance web applications. By using specialized, decoupled services, developers can focus on business logic and user experience.
A composable ecosystem is not only modular but also resilient to the evolving needs of the modern enterprise. These integrations highlight the importance of mastering API-driven architectures and understanding how different tools can fit harmoniously into a larger tech stack.

In today's digital landscape, success lies in choosing tools that offer flexibility and efficiency, adapt to evolving demands, and create applications that are future-proof against the challenges of tomorrow.

If you want to dive deeper into the integrations you can build with Storyblok and other services, check out Storyblok's integrations page. You can also take your projects further by creating your own plugins with Storyblok's plugin development resources.
  • Look Closer, Inspiration Lies Everywhere (February 2025 Wallpapers Edition)
    smashingmagazine.com
As designers, we are always on the lookout for some fresh inspiration, and, well, sometimes the best inspiration lies right in front of us. With that in mind, we embarked on our wallpapers adventure more than thirteen years ago. The idea: to provide you with a new batch of beautiful and inspiring desktop wallpapers every month. This February is no exception, of course.

The wallpapers in this post were designed by artists and designers from across the globe and come in versions with and without a calendar for February 2025. And since so many unique wallpaper designs have seen the light of day since we first started this monthly series, we also added some February oldies but goodies from our archives to the collection, so maybe you'll spot one of your almost-forgotten favorites in here, too?

This wallpapers post wouldn't have been possible without the kind support of our wonderful community, who tickle their creativity each month anew to keep the steady stream of wallpapers flowing. So, a huge thank-you to everyone who shared their designs with us this time around! If you, too, would like to get featured in one of our next wallpapers posts, please don't hesitate to submit your design. We can't wait to see what you'll come up with! Happy February!

You can click on every image to see a larger preview.

We respect and carefully consider the ideas and motivation behind each and every artist's work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren't in any way influenced by us but rather designed from scratch by the artists themselves.

Submit your wallpaper design! Feeling inspired? We are always looking for creative talent and would love to feature your desktop wallpaper in one of our upcoming posts. Join in!

Fall In Love With Yourself
We dedicate February to Frida Kahlo to illuminate the world with color.
Fall in love with yourself, with life and then with whoever you want. Designed by Veronica Valenzuela from Spain.previewwith calendar: 640x480, 800x480, 1024x768, 1280x720, 1280x800, 1440x900, 1600x1200, 1920x1080, 1920x1440, 2560x1440without calendar: 640x480, 800x480, 1024x768, 1280x720, 1280x800, 1440x900, 1600x1200, 1920x1080, 1920x1440, 2560x1440Sweet ValentineEveryone deserves a sweet Valentines Day, no matter their relationship status. Its a day to celebrate love in all its forms self-love, friendship, and the love we share with others. A little kindness or just a little chocolate can make anyone feel special, reminding us that everyone is worthy of love and joy. Designed by LibraFire from Serbia.previewwith calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440without calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440MochiDesigned by Ricardo Gimenes from Spain.previewwith calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160without calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160Cyber VoodooDesigned by Ricardo Gimenes from Spain.previewwith calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160without calendar: 640x480, 800x480, 800x600, 
1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160Pop Into FunBlow the biggest bubbles, chew on the sweetest memories, and let your inner kid shine! Celebrate Bubble Gum Day with us and share the joy of every POP! Designed by PopArt Studio from Serbia.previewwith calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440BelieveBelieve reminds us to trust ourselves and our potential. It fuels faith, even in challenges, and drives us to pursue our dreams. Belief unlocks strength to overcome obstacles and creates possibilities. Its the foundation of success, starting with the courage to believe. Designed by Hitesh Puri from Delhi, India.previewwith calendar: 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440without calendar: 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440PlantsI wanted to draw some very cozy place, both realistic and cartoonish, filled with little details. A space with a slightly unreal atmosphere that some great shops or cafes have. A mix of plants, books, bottles, and shelves seemed like a perfect fit. I must admit, it took longer to draw than most of my other pictures! But it was totally worth it. Watch the making-of. 
Designed by Vlad Gerasimov from Georgia.previewwithout calendar: 800x480, 800x600, 1024x600, 1024x768, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1440x960, 1600x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 2560x1600, 2880x1800, 3072x1920, 3840x2160, 5120x2880Love Is In The PlayForget Lady and the Tramp and their spaghetti kiss, cause Snowflake and Cloudy are enjoying their bliss. The cold and chilly February weather made our kitties knit themselves a sweater. Knitting and playing, the kitties tangled in the yarn and fell in love in your neighbors barn. Designed by PopArt Studio from Serbia.previewwithout calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440Farewell, WinterAlthough I love winter (mostly because of the fun winter sports), there are other great activities ahead. Thanks, winter, and see you next year! 
Designed by Igor Izhik from Canada.previewwithout calendar: 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 2560x1600True LoveDesigned by Ricardo Gimenes from Spain.previewwithout calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160BalloonsDesigned by Xenia Latii from Germany.previewwithout calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440Magic Of MusicDesigned by Vlad Gerasimov from Georgia.previewwithout calendar: 800x480, 800x600, 1024x600, 1024x768, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1440x960, 1600x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 2560x1600, 2880x1800, 3072x1920, 3840x2160, 5120x2880FebpurraryI was doodling pictures of my cat one day and decided I could turn it into a fun wallpaper because a cold, winter night in February is the perfect time for staying in and cuddling with your cat, your significant other, or both! 
Designed by Angelia DiAntonio from Ohio, USA.previewwithout calendar: 320x480, 800x480, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440Dog Year AheadDesigned by PopArt Studio from Serbia.previewwithout calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440Good Times AheadDesigned by Ricardo Gimenes from Spain.previewwithout calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160Romance Beneath The WavesThe 14th of February is just around the corner. And love is in the air, water, and everywhere! Designed by Teodora Vasileva from Bulgaria.previewwithout calendar: 640x480, 800x480, 800x600, 1024x768, 1280x720, 1280x960, 1280x1024, 1400x1050, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440February FernsDesigned by Nathalie Ouederni from France.previewwithout calendar: 320x480, 1024x768, 1280x1024, 1440x900, 1680x1200, 1920x1200, 2560x1440The Great BeyondDesigned by Lars Pauwels from Belgium.previewwithout calendar: 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Its A Cupcake Kind Of DaySprinkles are fun, festive, and filled with love especially when topped on a cupcake! Everyone is creative in their own unique way, so why not try baking some cupcakes and decorating them for your sweetie this month? Something homemade, like a cupcake or DIY craft, is always a sweet gesture. 
Designed by Artsy Cupcake from the United States.previewwithout calendar: 320x480, 640x480, 800x600, 1024x768, 1152x864, 1280x800, 1280x1024, 1366x768, 1440x900, 1600x1200, 1680x1200, 1920x1200, 1920x1440, 2560x1440SnowDesigned by Elise Vanoorbeek from Belgium.previewwithout calendar: 1024x768, 1152x864, 1280x720, 1280x800, 1280x960, 1366x768, 1440x900, 1600x1200, 1680x1050, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 2880x1800Share The Same Orbit!I prepared a simple and chill layout design for February called Share The Same Orbit!, which suggests sharing the love orbit. Designed by Valentin Keleti from Romania.previewwithout calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440Dark TemptationA dark, romantic feel: walking through the city on a dark and rainy night. Designed by Matthew Talebi from the United States.previewwithout calendar: 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440Ice Cream LoveMy inspiration for this wallpaper is the biggest love someone can have in life: the love for ice cream! Designed by Zlatina Petrova from Bulgaria.previewwithout calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440Lovely DayDesigned by Ricardo Gimenes from Spain.previewwithout calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440Time ThiefWho has stolen our time? Maybe the time thief, so be sure to enjoy the other 28 days of February.
Designed by Colorsfera from Spain.previewwithout calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1260x1440, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440In Another Place At The Same TimeFebruary is the month of love par excellence, but also a different month. Perhaps because it is shorter than the rest, or because it is the one that makes way for spring, we consider it a special month. It is a perfect month to make plans because we have already finished the post-Christmas crunch and we notice that spring and summer are coming closer. That is why I like to imagine that maybe in another place someone is also making plans to travel to unknown lands. Designed by Verónica Valenzuela from Spain.previewwithout calendar: 800x480, 1024x768, 1152x864, 1280x800, 1280x960, 1440x900, 1680x1200, 1920x1080, 2560x1440French FriesDesigned by Doreen Bethge from Germany.previewwithout calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440Frozen WorldsA view of two frozen planets, with lots of blue tints. Designed by Rutger Berghmans from Belgium.previewwithout calendar: 1280x800, 1366x768, 1440x900, 1680x1050, 1920x1080, 1920x1200, 2560x1440Out There, There's Someone Like YouI am a true believer that out there in this world there is another person who is just like us; the problem is to find her/him. Designed by Maria Keller from Mexico.previewwithout calendar: 320x480, 640x480, 640x1136, 750x1334, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1242x2208, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 2880x1800Greben IcebreakerThe Danube is Europe's second-largest river, connecting ten different countries.
In these cold days, when ice paralyzes rivers and closes waterways, a small but brave icebreaker called Greben (the Serbian word for reef) seems stronger than winter. It cuts through the ice of the Đerdap gorge (Iron Gate), the longest and biggest gorge in Europe, thus helping the production of electricity in the power plant. This is our way to give thanks to Greben! Designed by PopArt Studio from Serbia.previewwithout calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440SharpI was sick recently, and squinting through my blinds made a neat effect with shapes and colors. Designed by Dylan Baumann from Omaha, NE.previewwithout calendar: 320x480, 640x480, 800x600, 1024x1024, 1280x1024, 1600x1200, 1680x1200, 1920x1080, 1920x1440, 2560x1440On The Light SideDesigned by Ricardo Gimenes from Spain.previewwithout calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160, printable PDFFebreweryI live in Madison, WI, which is famous for its breweries. Wisconsin even named their baseball team The Brewers. If you like beer, brats, and lots of cheese, it's the place for you! Designed by Danny Gugger from the United States.previewwithout calendar: 320x480, 1020x768, 1280x800, 1280x1024, 1136x640, 2560x1440Love Angel VaderValentine's Day is coming? Noooooooooooo! Designed by Ricardo Gimenes from Spain.previewwithout calendar: 320x480, 640x960, 1024x768, 1024x1024, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1050, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 2880x1800Made In JapanSee the beautiful colors, precision, and nature of Japan in one picture.
Designed by Fatih Yilmaz from the Netherlands.previewwithout calendar: 1280x720, 1280x960, 1400x1050, 1440x900, 1600x1200, 1680x1200, 1920x1080, 1920x1440, 2560x1440, 3840x2160GroundhogThe Groundhog emerged from its burrow on February 2. If it is cloudy, then spring will come early, but if it is sunny, the groundhog will see its shadow, will retreat back into its burrow, and the winter weather will continue for six more weeks. Designed by Oscar Marcelo from Portugal.previewwithout calendar: 1280x720, 1280x800, 1280x960, 1280x1024, 1440x900, 1680x1050, 1920x1080, 1920x1200, 2560x1440
  • The Digital Playbook: A Crucial Counterpart To Your Design System
    smashingmagazine.com
I recently wrote for Smashing Magazine about how UX leaders face increasing pressure to deliver more with limited resources. Let me show you how a digital playbook can help meet this challenge by enhancing our work's visibility while boosting efficiency.

While a design system ensures visual coherence, a digital playbook lays out the strategic and operational framework for how digital projects should be executed and managed. Here's why a digital playbook deserves a place in your organization's toolbox and what it should include to drive meaningful impact.

What Is A Digital Playbook?

A digital playbook is essentially your organization's handbook for navigating the complexities of digital work. As a user experience consultant, I often help organizations create tools like this to streamline their processes and improve outcomes. It's a collection of strategies, principles, and processes that provide clarity on how to handle everything from website creation to content management and beyond. Think of it as a how-to guide for all things digital.

Unlike rigid rulebooks that feel constraining, you'll find that a playbook evolves with your organization's unique culture and challenges. You can use it to help stakeholders learn, standardize your work, and help everybody be more effective. Let me show you how a playbook can transform the way your team works.

Why You Need A Digital Playbook

Have you ever faced challenges like these?

- Stakeholders with conflicting expectations of what the digital team should deliver.
- Endless debates over project priorities and workflows that stall progress.
- A patchwork of tools and inconsistent policies that create confusion.
- Uncertainty about best practices, leading to inefficiencies and missed opportunities.

Let me show you how a playbook can help you and your team in four key ways. First, it helps you educate your stakeholders by making digital processes transparent and building trust.
I've found that when you explain best practices clearly, everyone gets on the same page quickly. Second, you'll streamline your processes with clear, standardized workflows, which means less confusion and faster progress on your projects. Third, your digital team gains more credibility as you step into a leadership role, and you'll be able to show your real value to the organization. Best of all, you'll reduce friction in your daily work: when everyone understands the policies, you'll face fewer misunderstandings and conflicts.

A digital playbook isn't just a tool; it's a way to transform challenges into opportunities for greater impact. But, no doubt, you are wondering: what exactly goes into a digital playbook?

Key Components Of A Digital Playbook

Every digital playbook is unique, but if you've ever wondered where to start, here are some key areas to consider. Let's walk through them together.

Engaging With The Digital Team

Have you ever had people come to you too late in the process, or approach you with solutions rather than explaining the underlying problems? A playbook can help mitigate these issues by providing clear guidance on:

- How to request a new website or content update at the right time;
- What information you require to do your job;
- What stakeholders need to consider before requesting your help.

By addressing these common challenges, you're not just reducing your frustrations; you're educating stakeholders and encouraging better collaboration.

Digital Project Lifecycle

Most digital projects can feel overwhelming without a clear structure, especially for stakeholders who may not understand the intricacies of the process. That's why it's essential to communicate the key phases clearly to those requesting your team's help.
For example:

- Discovery: Explain how your team will research goals, user needs, and requirements to ensure the project starts on solid ground.
- Prototyping: Highlight the importance of testing initial concepts to validate ideas before full development.
- Build: Detail the process of developing the final product and incorporating feedback.
- Launch: Set clear expectations for rolling out the project with a structured plan.
- Management: Clarify how the team will optimize and maintain the product over time.
- Retirement: Help stakeholders understand when and how to phase out outdated tools or content effectively.

I've structured the lifecycle this way to help stakeholders understand what to expect. When they know what's happening at each stage, it builds trust and helps the working relationship. Stakeholders will see exactly what role you play and how your team adds value throughout the process.

Publishing Best Practices

Writing for the web isn't the same as traditional writing, and it's critical for your team to help stakeholders understand the differences. Your playbook can include practical advice to guide them, such as:

- Planning and organizing content to align with user needs and business goals.
- Crafting content that's user-friendly, SEO-optimized, and designed for clarity.
- Maintaining accessibility and high-quality standards to ensure inclusivity.

By providing this guidance, you empower stakeholders to create content that's not only effective but also reflects your team's standards.

Understanding Your Users

Helping stakeholders understand your audience is essential for creating user-centered experiences.
Your digital playbook can support this by including:

- Detailed user personas that highlight specific needs and behaviors.
- Recommendations for tools and methods to gather and analyze user data.
- Practical tips for ensuring digital experiences are inclusive and accessible to all.

By sharing this knowledge, your team helps stakeholders make decisions that prioritize users, ultimately leading to more successful outcomes.

Recommended Resources

Stakeholders are often unaware of the wealth of resources that can help them improve their digital deliverables. Your playbook can help by recommending trusted solutions, such as:

- Tools that enable stakeholders to carry out their own user research and testing.
- Analytics tools that allow stakeholders to track the performance of their websites.
- A list of preferred suppliers in case stakeholders need to bring in external experts.

These recommendations ensure stakeholders are equipped with reliable resources that align with your team's processes.

Policies And Governance

Uncertainty about organizational policies can lead to confusion and missteps. Your playbook should provide clarity by outlining:

- Accessibility and inclusivity standards to ensure compliance and user satisfaction.
- Data privacy and security protocols to safeguard user information.
- Clear processes for prioritizing and governing projects to maintain focus and consistency.

By setting these expectations, your team establishes a foundation of trust and accountability that stakeholders can rely on.

Of course, you can have the best digital playbook in the world, but if people don't reference it, then it is a wasted opportunity.

Making Your Digital Playbook Stick

It falls to you and your team to ensure as many stakeholders as possible engage with your playbook. Try the following:

Make It Easy To Find

How often do stakeholders struggle to find important resources? Avoid hosting the playbook in a forgotten corner of your intranet.
Instead, place it front and center on a well-maintained, user-friendly site that's accessible to everyone.

Keep It Engaging

Let's face it: nobody wants to sift through walls of text. Use visuals like infographics, short explainer videos, and clear headings to make your playbook not only digestible but also enjoyable to use. Think of it as creating a resource your stakeholders will actually want to refer back to.

Frame It As A Resource

A common pitfall is presenting the playbook as a rigid set of rules. Instead, position it as a helpful guide designed to make everyone's work easier. Highlight how it can simplify workflows, improve outcomes, and solve real-world problems your stakeholders face daily.

Share At Relevant Moments

Don't wait for stakeholders to find the playbook themselves. Instead, proactively share relevant sections when they're most needed. For example, send the discovery-phase documentation when starting a new project, or share content guidelines when someone is preparing to write for the website. This just-in-time approach ensures the playbook's guidance is applied when it matters most.

Start Small, Then Scale

Creating a digital playbook might sound like a daunting task, but it doesn't have to be. Begin with a few core sections and expand over time. Assign ownership to a specific team or individual to ensure it remains updated and relevant.

In the end, a digital playbook is an investment. It saves time, reduces conflicts, and elevates your organization's digital maturity. Just as a design system is critical for visual harmony, a digital playbook is essential for operational excellence.

Further Reading On SmashingMag

- Design Patterns Are A Better Way To Collaborate On Your Design System, Ben Clemens
- Design Systems: Useful Examples and Resources, Cosima Mielke
- Building Components For Consumption, Not Complexity (Part 1), Luis Ouriach
- Taking The Stress Out Of Design System Management, Masha Shaposhnikova
  • Transitioning Top-Layer Entries And The Display Property In CSS
    smashingmagazine.com
Animating from and to display: none; was something we could previously only achieve with JavaScript, by toggling classes or resorting to other hacks. The reason why we couldn't do this in CSS is explained in the new CSS Transitions Level 2 specification:

"In Level 1 of this specification, transitions can only start during a style change event for elements that have a defined before-change style established by the previous style change event. That means a transition could not be started on an element that was not being rendered for the previous style change event."

In simple terms, this means that we couldn't start a transition on an element that is hidden or that has just been created.

What Does transition-behavior: allow-discrete Do?

allow-discrete is a bit of a strange name for a CSS property value, right? We are going on about transitioning display: none, so why isn't this named transition-behavior: allow-display instead? The reason is that this does a bit more than handle the CSS display property, as there are other discrete properties in CSS. A simple rule of thumb is that discrete properties do not transition but usually flip right away between two states. Other examples of discrete properties are visibility and mix-blend-mode. I'll include an example of these at the end of this article.

To summarise: setting the transition-behavior property to allow-discrete allows us to tell the browser it can swap the values of a discrete property (e.g., display, visibility, and mix-blend-mode) at the 50% mark instead of the 0% mark of a transition.

What Does @starting-style Do?

The @starting-style rule defines the styles of an element right before it is rendered to the page. This is highly needed in combination with transition-behavior, and this is why: when an item is added to the DOM or is initially set to display: none, it needs some sort of starting style from which it can transition.
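As a rough sketch of how the two features combine (the class names here are illustrative assumptions, not from the demos that follow):

```css
.item {
  /* Final, rendered state */
  opacity: 1;
  transition: opacity 0.3s, display 0.3s;
  /* Let the discrete `display` property take part in the transition */
  transition-behavior: allow-discrete;

  /* The state the browser transitions *from* when the element first renders */
  @starting-style {
    opacity: 0;
  }
}

.item.is-hidden {
  opacity: 0;
  display: none;
}
```

With this in place, toggling the hypothetical .is-hidden class fades the element out before display flips to none, and a newly rendered element fades in from the @starting-style values.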
    To take the example further, popover and dialog elements are added to a top layer, which is a layer outside of your document flow; you can think of it as a sibling of the <html> element in your page's structure. Now, when opening a dialog or popover, it gets created inside that top layer, so it doesn't have any styles to start transitioning from, which is why we set @starting-style. Don't worry if all of this sounds a bit confusing. The demos should make it clearer. The important thing to know is that we can give the browser something to start the animation with since it otherwise has nothing to animate from.

    A Note On Browser Support

    At the moment of writing, transition-behavior is available in Chrome, Edge, Safari, and Firefox. It's the same for @starting-style, but Firefox currently does not support animating from display: none. Remember, though, that everything in this article can be perfectly used as a progressive enhancement.

    Now that we have the theory behind us, let's get practical. I'll be covering three use cases in this article:

    1. Animating from and to display: none in the DOM.
    2. Animating dialogs and popovers entering and exiting the top layer.
    3. More discrete properties we can handle.

    Animating From And To display: none In The DOM

    For the first example, let's take a look at @starting-style alone. I created this demo purely to explain the magic. Imagine you want two buttons on a page to add or remove list items inside of an unordered list. This could be your starting HTML:

    <button type="button" class="btn-add">Add item</button>
    <button type="button" class="btn-remove">Remove item</button>
    <ul role="list"></ul>

    Next, we add actions that add or remove those list items.
This can be any method of your choosing, but for demo purposes, I quickly wrote a bit of JavaScript for it:document.addEventListener("DOMContentLoaded", () => { const addButton = document.querySelector(".btn-add"); const removeButton = document.querySelector(".btn-remove"); const list = document.querySelector('ul[role="list"]'); addButton.addEventListener("click", () => { const newItem = document.createElement("li"); list.appendChild(newItem); }); removeButton.addEventListener("click", () => { if (list.lastElementChild) { list.lastElementChild.classList.add("removing"); setTimeout(() => { list.removeChild(list.lastElementChild); }, 200); } });});When clicking the addButton, an empty list item gets created inside of the unordered list. When clicking the removeButton, the last item gets a new .removing class and finally gets taken out of the DOM after 200ms.With this in place, we can write some CSS for our items to animate the removing part:ul { li { transition: opacity 0.2s, transform 0.2s; &.removing { opacity: 0; transform: translate(0, 50%); } } }This is great! Our .removing animation is already looking perfect, but what we were looking for here was a way to animate the entry of items coming inside of our DOM. For this, we will need to define those starting styles, as well as the final state of our list items.First, lets update the CSS to have the final state inside of that list item:ul { li { opacity: 1; transform: translate(0, 0); transition: opacity 0.2s, transform 0.2s; &.removing { opacity: 0; transform: translate(0, 50%); } } }Not much has changed, but now its up to us to let the browser know what the starting styles should be. 
We could set this the same way we did the .removing styles like so:ul { li { opacity: 1; transform: translate(0, 0); transition: opacity 0.2s, transform 0.2s; @starting-style { opacity: 0; transform: translate(0, 50%); } &.removing { opacity: 0; transform: translate(0, 50%); } } }Now weve let the browser know that the @starting-style should include zero opacity and be slightly nudged to the bottom using a transform. The final result is something like this:But we dont need to stop there! We could use different animations for entering and exiting. We could, for example, update our starting style to the following:@starting-style { opacity: 0; transform: translate(0, -50%);}Doing this, the items will enter from the top and exit to the bottom. See the full example in this CodePen:See the Pen @starting-style demo - up-in, down-out [forked] by utilitybend.When To Use transition-behavior: allow-discreteIn the previous example, we added and removed items from our DOM. In the next demo, we will show and hide items using the CSS display property. 
The basic setup is pretty much the same, except we will add eight list items to our DOM with the .hidden class attached to it: <button type="button" class="btn-add"> Show item </button> <button type="button" class="btn-remove"> Hide item </button><ul role="list"> <li class="hidden"></li> <li class="hidden"></li> <li class="hidden"></li> <li class="hidden"></li> <li class="hidden"></li> <li class="hidden"></li> <li class="hidden"></li> <li class="hidden"></li></ul>Once again, for demo purposes, I added a bit of JavaScript that, this time, removes the .hidden class of the next item when clicking the addButton and adds the hidden class back when clicking the removeButton:document.addEventListener("DOMContentLoaded", () => { const addButton = document.querySelector(".btn-add"); const removeButton = document.querySelector(".btn-remove"); const listItems = document.querySelectorAll('ul[role="list"] li'); let activeCount = 0; addButton.addEventListener("click", () => { if (activeCount < listItems.length) { listItems[activeCount].classList.remove("hidden"); activeCount++; } }); removeButton.addEventListener("click", () => { if (activeCount > 0) { activeCount--; listItems[activeCount].classList.add("hidden"); } });});Lets put together everything we learned so far, add a @starting-style to our items, and do the basic setup in CSS:ul { li { display: block; opacity: 1; transform: translate(0, 0); transition: opacity 0.2s, transform 0.2s; @starting-style { opacity: 0; transform: translate(0, -50%); } &.hidden { display: none; opacity: 0; transform: translate(0, 50%); } } }This time, we have added the .hidden class, set it to display: none, and added the same opacity and transform declarations as we previously did with the .removing class in the last example. As you might expect, we get a nice fade-in for our items, but removing them is still very abrupt as we set our items directly to display: none.This is where the transition-behavior property comes into play. 
To break it down a bit more, lets remove the transition property shorthand of our previous CSS and open it up a bit:ul { li { display: block; opacity: 1; transform: translate(0, 0); transition-property: opacity, transform; transition-duration: 0.2s; } }All that is left to do is transition the display property and set the transition-behavior property to allow-discrete:ul { li { display: block; opacity: 1; transform: translate(0, 0); transition-property: opacity, transform, display; transition-duration: 0.2s; transition-behavior: allow-discrete; /* etc. */ } }We are now animating the element from display: none, and the result is exactly as we wanted it:We can use the transition shorthand property to make our code a little less verbose:transition: opacity 0.2s, transform 0.2s, display 0.2s allow-discrete;You can add allow-discrete in there. But if you do, take note that if you declare a shorthand transition after transition-behavior, it will be overruled. So, instead of this:transition-behavior: allow-discrete;transition: opacity 0.2s, transform 0.2s, display 0.2s;we want to declare transition-behavior after the transition shorthand:transition: opacity 0.2s, transform 0.2s, display 0.2s;transition-behavior: allow-discrete;Otherwise, the transition shorthand property overrides transition-behavior.See the Pen @starting-style and transition-behavior: allow-discrete [forked] by utilitybend.Animating Dialogs And Popovers Entering And Exiting The Top LayerLets add a few use cases with dialogs and popovers. Dialogs and popovers are good examples because they get added to the top layer when opening them.What Is That Top Layer?Weve already likened the top layer to a sibling of the <html> element, but you might also think of it as a special layer that sits above everything else on a web page. It's like a transparent sheet that you can place over a drawing. 
    Anything you draw on that sheet will be visible on top of the original drawing. The original drawing, in this example, is the DOM. This means that the top layer is out of the document flow, which provides us with a few benefits. For example, as I stated before, dialogs and popovers are added to this top layer, and that makes perfect sense because they should always be on top of everything else. No more z-index: 9999! But it's more than that:

    - z-index is irrelevant: Elements on the top layer are always on top, regardless of their z-index value.
    - DOM hierarchy doesn't matter: An element's position in the DOM doesn't affect its stacking order on the top layer.
    - Backdrops: We get access to a new ::backdrop pseudo-element that lets us style the area between the top layer and the DOM beneath it.

    Hopefully, you are starting to understand the importance of the top layer and how we can transition elements in and out of it, as we would with popovers and dialogs.

    Transitioning The Dialog Element In The Top Layer

    The following HTML contains a button that opens a <dialog> element, and that <dialog> element contains another button that closes the <dialog>.
    So, we have one button that opens the <dialog> and one that closes it.

    <button class="open-dialog" data-target="my-modal">Show dialog</button>
    <dialog id="my-modal">
      <p>Hi, there!</p>
      <button class="outline close-dialog" data-target="my-modal">close</button>
    </dialog>

    A lot is happening in HTML with invoker commands that will make the following step a bit easier, but for now, let's add a bit of JavaScript to make this modal actually work:

    // Get all open dialog buttons.
    const openButtons = document.querySelectorAll(".open-dialog");
    // Get all close dialog buttons.
    const closeButtons = document.querySelectorAll(".close-dialog");

    // Add click event listeners to open buttons.
    openButtons.forEach((button) => {
      button.addEventListener("click", () => {
        const targetId = button.getAttribute("data-target");
        const dialog = document.getElementById(targetId);
        if (dialog) {
          dialog.showModal();
        }
      });
    });

    // Add click event listeners to close buttons.
    closeButtons.forEach((button) => {
      button.addEventListener("click", () => {
        const targetId = button.getAttribute("data-target");
        const dialog = document.getElementById(targetId);
        if (dialog) {
          dialog.close();
        }
      });
    });

    I'm using the following styles as a starting point. Notice how I'm styling the ::backdrop as an added bonus!

    dialog {
      padding: 30px;
      width: 100%;
      max-width: 600px;
      background: #fff;
      border-radius: 8px;
      border: 0;
      box-shadow: rgba(0, 0, 0, 0.3) 0px 19px 38px, rgba(0, 0, 0, 0.22) 0px 15px 12px;

      &::backdrop {
        background-image: linear-gradient(
          45deg in oklab,
          oklch(80% 0.4 222) 0%,
          oklch(35% 0.5 313) 100%
        );
      }
    }

    This results in a pretty hard transition for the entry, meaning it's not very smooth. Let's add transitions to this dialog element and the backdrop.
Im going a bit faster this time because by now, you likely see the pattern and know whats happening:dialog { opacity: 0; translate: 0 30%; transition-property: opacity, translate, display; transition-duration: 0.8s; transition-behavior: allow-discrete; &[open] { opacity: 1; translate: 0 0; @starting-style { opacity: 0; translate: 0 -30%; } }}When a dialog is open, the browser slaps an open attribute on it:<dialog open> ... </dialog>And thats something else we can target with CSS, like dialog[open]. So, in this case, we need to set a @starting-style for when the dialog is in an open state.Lets add a transition for our backdrop while were at it:dialog { /* etc. */ &::backdrop { opacity: 0; transition-property: opacity; transition-duration: 1s; } &[open] { /* etc. */ &::backdrop { opacity: 0.8; @starting-style { opacity: 0; } } }}Now youre probably thinking: A-ha! But you should have added the display property and the transition-behavior: allow-discrete on the backdrop!But no, that is not the case. Even if I would change my backdrop pseudo-element to the following CSS, the result would stay the same: &::backdrop { opacity: 0; transition-property: opacity, display; transition-duration: 1s; transition-behavior: allow-discrete; }It turns out that we are working with a ::backdrop and when working with a ::backdrop, were implicitly also working with the CSS overlay property, which specifies whether an element appearing in the top layer is currently rendered in the top layer. And overlay just so happens to be another discrete property that we need to include in the transition-property declaration:dialog { /* etc. */&::backdrop { transition-property: opacity, display, overlay; /* etc. */}Unfortunately, this is currently only supported in Chromium browsers, but it can be perfectly used as a progressive enhancement.And, yes, we need to add it to the dialog styles as well:dialog { transition-property: opacity, translate, display, overlay; /* etc. 
*/&::backdrop { transition-property: opacity, display, overlay; /* etc. */}See the Pen Dialog: starting-style, transition-behavior, overlay [forked] by utilitybend.Its pretty much the same thing for a popover instead of a dialog. Im using the same technique, only working with popovers this time:See the Pen Popover transition with @starting-style [forked] by utilitybend.Other Discrete PropertiesThere are a few other discrete properties besides the ones we covered here. If you remember the second demo, where we transitioned some items from and to display: none, the same can be achieved with the visibility property instead. This can be handy for those cases where you want items to preserve space for the elements box, even though it is invisible.So, heres the same example, only using visibility instead of display.See the Pen Transitioning the visibility property [forked] by utilitybend.The CSS mix-blend-mode property is another one that is considered discrete. To be completely honest, I cant find a good use case for a demo. But I went ahead and created a somewhat trite example where two mix-blend-modes switch right in the middle of the transition instead of right away.See the Pen Transitioning mix-blend-mode [forked] by utilitybend.Wrapping UpThats an overview of how we can transition elements in and out of the top layer! In an ideal world, we could get away without needing a completely new property like transition-behavior just to transition otherwise un-transitionable properties, but here we are, and Im glad we have it.But we also got to learn about @starting-style and how it provides browsers with a set of styles that we can apply to the start of a transition for an element thats in the top layer. Otherwise, the element has nothing to transition from at first render, and wed have no way to transition them smoothly in and out of the top layer.
  • Svelte 5 And The Future Of Frameworks: A Chat With Rich Harris
    smashingmagazine.com
    Svelte occupies a curious space within the web development world. It's been around in one form or another for eight years now, and despite being used by the likes of Apple, Spotify, IKEA, and the New York Times, it still feels like something of an upstart, maybe even a black sheep. As creator Rich Harris recently put it: "If React is Taylor Swift, we're more of a Phoebe Bridgers. She's critically acclaimed, and you've heard of her, but you probably can't name that many of her songs." Rich Harris

    This may be why the release of Svelte 5 in October this year felt like such a big deal. It tries to square the circle of convention and innovation. Can it remain one of the best-loved frameworks on the web while shaking off suspicions that it can't quite rub shoulders with React, Vue, and others when it comes to scalability? Whisper it, but they might just have pulled it off. The post-launch reaction has been largely glowing, with weekly npm downloads doubling compared to six months ago. Still, I'm not in the predictions game. The coming months and years will be the ultimate measure of Svelte 5. And why speculate on the most pressing questions when I can just ask Rich Harris myself? He kindly took some time to chat with me about Svelte and the future of web development.

    Not Magic, But Magical

    Svelte 5 is a ground-up rewrite. I don't want to get into the weeds here (key changes are covered nicely in the migration guide), but suffice it to say the big one where day-to-day users are concerned is runes. The at-times magic-feeling $ has given way to the more explicit $state, $derived, and $effect. A lot of the talk around Svelte 5 included the sentiment that it marks the maturation of the framework. To Harris and the Svelte team, it feels like a culmination, with lessons learned combined with aspirations to form something fresh yet familiar. This does sort of feel like a new chapter.
Im trying to build something that you dont feel like you need to get a degree in it before you can be productive in it. And that seems to have been carried through with Svelte 5. Rich HarrisAlthough raw usage numbers arent everything, seeing the uptick in installations has been a welcome signal for Harris and the Svelte team.For us, success is definitely not based around adoption, though seeing the number go up and to the right gives us reassurance that were doing the right thing and were on the right track. Even if its not the goal, it is a useful indication. But success is really people building their apps with this framework and building higher quality, more resilient, more accessible apps. Rich HarrisThe tenets of a Svelte philosophy outlined by Harris earlier this year reinforce the point:The web matters.Optimise for vibes.Dont optimise for adoption.HTML, The Mother Language.Embrace progress.Numbers lie.Magical, not magic.Dream big.No one cares.Design by consensus.Click the link above to hear these expounded upon, but you get the crux. Svelte is very much a qualitative project. Although Svelte performs well in a fair few performance metrics itself, Harris has long been a critic of metrics like Lighthouse being treated as ends in themselves. Fastest doesnt necessarily mean best. At the end of the day, we are all in the business of making quality websites.Frameworks are a means to that end, and Harris sees plenty of work to be done there. Software Is BrokenEvery milestone is a cause for celebration. Its also a natural pause in which to ask, Now what? For the Svelte team, the sights seem firmly set on shoring up the quality of the web. A conclusion that we reached over the course of a recent discussion is that most software in the world is kind of terrible. Things are not good. Half the stuff on my phone just doesnt work. It fails at basic tasks. And the same is true for a lot of websites. 
The number of times Ive had to open DevTools to remove the disabled attribute from a button so that I can submit a form, or been unclear on whether a payment went through or not. Rich HarrisThis certainly meshes with my experience and, doubtless, countless others. Between enshittification, manipulative algorithms, and the seemingly endless influx of AI-generated slop, its hard to shake the feeling that the web is becoming increasingly decadent and depraved. So many pieces of software that we use are just terrible. Theyre just bad software. And its not because software engineers are idiots. Our main priority as toolmakers should be to enable people to build software that isnt broken. As a baseline, people should be able to build software that works. Rich HarrisThis sense of responsibility for the creation and maintenance of good software speaks to the Svelte teams holistic outlook and also looks to influence priorities going forward.Brave New WorldPart of Svelte 5 feels like a new chapter in the sense of fresh foundations. Anyone whos worked in software development or web design will tell you how much of a headache ground-up rewrites are. Rebuilding the foundations is something to celebrate when you pull it off, but it also begs the question: What are the foundations for?Harris has his eyes on the wider ecosystem around frameworks.I dont think theres a lot more to do to solve the problem of taking some changing application state and turning it into DOM, but I think theres a huge amount to be done around the ancillary problems. How do we load the data that we put in those components? Where does that data live? How do we deploy our applications? Rich HarrisIn the short to medium term, this will likely translate into some love for SvelteKit, the web application framework built around Svelte. The framework might start having opinions about authentication and databases, an official component library perhaps, and dev tools in the spirit of the Astro dev toolbar. 
And all these could be precursors to even bigger explorations.I want there to be a Rails or a Laravel for JavaScript. In fact, I want there to be multiple such things. And I think that at least part of Sveltes long-term goal is to be part of that. There are too many things that you need to learn in order to build a full stack application today using JavaScript. Rich HarrisWhy Dont We Have A Laravel For JavaScript? by Theo BrowneWhy We Dont Have a Laravel For JavaScript... Yet by Vince CangerOnwardAlthough Svelte has been ticking along happily for years, the release of version 5 has felt like a new lease of life for the ecosystem around it. Every day brings new and exciting projects to the front page of the /r/sveltejs subreddit, while this years Advent of Svelte has kept up a sense of momentum following the stable release.Below are just a handful of the Svelte-based projects that have caught my eye:webvm: Virtual Machine for the Web number-flow: An animated number component for React, Vue, and Sveltesveltednd: A lightweight, flexible drag and drop library for Svelte 5 applicationsThrelte 8 Despite the turbulence and inescapable sense of existential dread surrounding much tech, this feels like an exciting time for web development. The conditions are ripe for lovely new things to emerge.And as for Svelte 5 itself, what does Rich Harris say to those who might be on the fence?I would say you have nothing to lose but an afternoon if you try it. We have a tutorial that will take you from knowing nothing about Svelte or even existing frameworks. You can go from that to being able to build applications using Svelte in three or four hours. If you just want to learn Svelte basics, then thats an hour. Try it. 
Rich HarrisFurther Reading On SmashingMagHow To Build Server-Side Rendered (SSR) Svelte Apps With SvelteKit, Sriram ThiagarajanWeb Development Is Getting Too Complex, And It May Be Our Fault, Juan Diego RodrguezVanilla JavaScript, Libraries, And The Quest For Stateful DOM Rendering, Frederik DohrThe Hype Around Signals, Atila Fassina
  • Navigating The Challenges Of Modern Open-Source Authoring: Lessons Learned
    smashingmagazine.com
    This article is a sponsored by StoryblokOpen source is the backbone of modern software development. As someone deeply involved in both community-driven and company-driven open source, Ive had the privilege of experiencing its diverse approaches firsthand. This article dives into what modern OSS (Open Source) authoring looks like, focusing on front-end JavaScript libraries such as TresJS and tools Ive contributed to at Storyblok.But let me be clear:Theres no universal playbook for OSS. Every language, framework, and project has its own workflows, rules, and culture and thats okay. These variations are what make open source so adaptable and diverse.The Art Of OSS AuthoringAuthoring an open-source project often begins with scratching your own itch solving a problem you face as a developer. But as your experiment gains traction, the challenge shifts to addressing diverse use cases while maintaining the simplicity and focus of the original idea.Take TresJS as an example. All I wanted was to add 3D to my personal Nuxt portfolio, but at that time, there wasnt a maintained, feature-rich alternative to React Three Fiber in VueJS. So, I decided to create one. Funny enough, after two years after the librarys launch, my portfolio remains unfinished.Community-driven OSS Authoring: Lessons From TresJSContinuing with TresJS as an example of a community-driven OSS project, the community has been an integral part of its growth, offering ideas, filing issues (around 531 in total), and submitting pull requests (around 936 PRs) of which 90% eventually made it to production. As an author, this is the best thing that can happen its probably one of the biggest reasons I fell in love with open source. The continuous collaboration creates an environment where new ideas can evolve into meaningful contributions.However, it also comes with its own challenges. 
The more ideas come in, the harder it becomes to maintain the projects focus on its original purpose.As authors, its our responsibility to keep the vision of the library clear even if that means saying no to great ideas from the community.Over time, some of the most consistent collaborators became part of a core team, helping to share the responsibility of maintaining the library and ensuring it stays aligned with its original goals.Another crucial aspect of scaling a project, especially one like TresJS, which has grown into an ecosystem of packages, is the ability to delegate. The more the project expands, the more essential it becomes to distribute responsibilities among contributors. Delegation helps in reducing the burden of the massive workload and empowers contributors to take ownership of specific areas. As a core author, its equally important to provide the necessary tools, CI workflows, and clear conventions to make the process of contributing as simple and efficient as possible. A well-prepared foundation ensures that new and existing collaborators can focus on what truly matters pushing the project forward.Company-driven OSS Authoring: The Storyblok PerspectiveNow that weve explored the bright spots and challenges of community-driven OSS lets jump into a different realm: company-driven OSS.I had experience with inner-source and open-source in previous companies, so I already had a grasp of how OSS works in the context of a company environment. However, my most meaningful experience would come later, specifically earlier this year, when I switched my role from DevRel to a full-time Developer Experience Engineer, and I say full-time because before taking the role, I was already contributing to Storybloks SDK ecosystem.At Storyblok, open source plays a crucial role in how we engage with developers and how they seamlessly use our product with their favorite framework. 
Our goal is to provide the same developer experience regardless of the flavor, making the experience of using Storyblok as simple, effective, and enjoyable as possible.To achieve this, its crucial to balance the needs of the developer community which often reflect the needs of the clients they work for with the companys broader goals. One of the things I find more challenging is managing expectations. For instance, while the community may want feature requests and bug fixes to be implemented quickly, the companys priorities might dictate focusing on stability, scalability, and often strategic integrations. Clear communication and prioritization are key to maintaining healthy alignment and trust between both sides.One of the unique advantages of company-driven open source is the availability of resources:Dedicated engineering time,Infrastructure (which many OSS authors often cannot afford),Access to knowledge from internal teams like design, QA, and product management.However, this setup often comes with the challenge of dealing with legacy codebases typically written by developers who may not be familiar with OSS principles. This can lead to inconsistencies in structure, testing, and documentation that require significant refactoring before the project can align with open-source best practices.Navigating The Spectrum: Community vs. CompanyI like to think of community-driven OSS as being like jazz musicfreeform, improvised, and deeply collaborative. In contrast, company-driven OSS resembles an orchestra, with a conductor guiding the performance and ensuring all the pieces fit together seamlessly.The truth is that most OSS projects if not the vast majority exist somewhere along this spectrum. For example, TresJS began as a purely community-driven project, but as it matured and gained traction, elements of structured decision-making more typical of company-driven projects became necessary to maintain focus and scalability. 
Together with the core team, we defined a vision and goals for the project to ensure it continued to grow without losing sight of its original purpose.Interestingly, the reverse is also true: Company-driven OSS can benefit significantly from the fast-paced innovation seen in community-driven projects.Many of the improvements Ive introduced to the Storyblok ecosystem since joining were inspired by ideas first explored in TresJS. For instance, migrating the TresJS ecosystem to pnpm workspaces demonstrated how streamlined dependency management could improve development workflows like playgrounds and e2e an approach we gradually adapted later for Storybloks ecosystem.Similarly, transitioning Storyblok testing from Jest to Vitest, with its improved performance and developer experience, was influenced by how testing is approached in community-driven projects. Likewise, our switch from Prettier to ESLints v9 flat configuration with auto-fix helped consolidate linting and formatting into a single workflow, streamlining developer productivity.Even more granular processes, such as modernizing CI workflows, found their way into Storyblok. TresJSs evolution from a single monolithic release action to granular steps for linting, testing, and building provided a blueprint for enhancing our pipelines at Storyblok. We also adopted continuous release practices inspired by pkg.pr.new, enabling faster delivery of incremental changes and testing package releases in real client projects to gather immediate feedback before merging the PRs.That said, TresJS also benefited from my experiences at Storyblok, which had a more mature and battle-tested ecosystem, particularly in adopting automated processes. For example, we integrated Dependabot to keep dependencies up to date and used auto-merge to reduce manual intervention for minor updates, freeing up contributors time for more meaningful work. 
We also implemented an automatic release pipeline using GitHub Actions, inspired by Storybloks workflows, ensuring smoother and more reliable releases for the TresJS ecosystem.The Challenges of Modern OSS AuthoringThroughout this article, weve touched on several modern OSS challenges, but if one deserves the crown, its managing breaking changes and maintaining compatibility. We know how fast the pace of technology is, especially on the web, and users expect libraries and tools to keep up with the latest trends. Im not the first person to say that hype-driven development can be fun, but it is inherently risky and not your best ally when building reliable, high-performance software especially in enterprise contexts.Breaking changes exist. Thats why semantic versioning comes into play to make our lives easier. However, it is equally important to balance innovation with stability. This becomes more crucial when introducing new features or refactoring for better performance, breaking existing APIs. One key lesson Ive learned particularly during my time at Storyblok is the importance of clear communication. Changelogs, migration guides, and deprecation warnings are invaluable tools to smoothen the transition for users.A practical example:My first project as a Developer Experience Engineer was introducing @storyblok/richtext, a library for rich-text processing that (at the time of writing) sees around 172k downloads per month. The library was crafted during my time as a DevRel, but transitioning users to it from the previous rich-text implementation across the ecosystem required careful planning. Since the library would become a dependency of the fundamental JS SDK and from there propagate to all the framework SDKs together with my manager, we planned a multi-month transition with a retro-compatible period before the major release. 
This included communication campaigns, thorough documentation, and gradual adoption to minimize disruption. Despite these efforts, mistakes happened, and that's okay. During the rich-text transition, there were instances where updates didn't arrive on time or where communication and documentation were temporarily out of sync. This led to confusion within the community, which we addressed by providing timely support on GitHub issues and Discord. These moments served as reminders that even with semantic versioning, modular architectures, and meticulous planning, OSS authoring is never perfect. Mistakes are part of the process. And that takes us to the following point.

Conclusion

Open-source authoring is a journey of continuous learning. Each misstep offers a chance to improve, and each success reinforces the value of collaboration and experimentation. There's no perfect way to do OSS, and that's the beauty of it. Every project has its own set of workflows, challenges, and quirks shaped by the community and its contributors. These differences make open source adaptable, dynamic, fun, and, above all, impactful. No matter if you're building something entirely new or contributing to an existing project, remember that progress, not perfection, is the goal. So, keep contributing, experimenting, and sharing your work. Every pull request, issue, and idea you put forward brings value, not just to your project but to the broader ecosystem. Happy coding!
  • An Ode To Side Project Time
    smashingmagazine.com
There seemed to be a hot minute when the tech industry understood the value of idle tinkering and made a point of providing side project time as an explicit working perk. The concept endures (I'm lucky enough to work somewhere that has it), but it seems to have been outpaced in recent years by the endless charge toward efficiency. This seems a shame. We can't optimize our way to quality solutions and original ideas. To try is a self-defeating fantasy. The value of side project time is hard to overstate, and more workplaces should not just provide it but actively encourage it. Here's why.

What Is Side Project Time?

Side project time pops up under different names. At the Guardian, it's "10% time," for example. Whatever the name, it amounts to the same thing: dedicated space and time during working hours for people to work on pet projects, independent learning, and personal development. Google founders Larry Page and Sergey Brin famously highlighted the practice as part of the company's initial public offering in 2004, writing:

"We encourage our employees, in addition to their regular projects, to spend 20% of their time working on what they think will most benefit Google. This empowers them to be more creative and innovative. Many of our significant advances have happened in this manner. For example, AdSense for content and Google News were both prototyped in 20% time. Most risky projects fizzle, often teaching us something. Others succeed and become attractive businesses." (Larry Page and Sergey Brin)

The extent to which Google still supports the practice 20 years on is hazy, and though other tech big hitters talk a good game, it doesn't seem terribly widespread. The concept threatened to become mainstream for a while but has receded.

The Ode

There are countless benefits to side project time, both on an individual and corporate level.
Whether your priorities are personal growth or the bottom line, it ought to be on your radar.

Individuals

On an individual level, side project time frees people up to explore ideas and concepts that interest them. This is good in itself. We all, of course, hope to nurture existing skills and develop new ones in our day-to-day work. Sometimes day-to-day work provides that. Sometimes it doesn't. In either case, side project time opens up new avenues for exploration. It is also a space in which the waters can clear. I've previously written about the lessons of Zen philosophy as they relate to pet project maintenance, with a major aspect being the value of not doing. Getting things done isn't always the same as making things better. The fog of constant activity or productivity can actually keep us from seeing better solutions to problems. Side project time makes for clearer minds to take back with us into the day-to-day grind. Dedicated side project time facilitates personal growth, exploration, and learning. This is obviously good for the individual, but for the project too, because where else are the benefits going to be felt?

Companies

There are a couple of examples of similar company outlooks I'd like to highlight. One is Pixar's philosophy, as outlined by co-founder Ed Catmull, of protecting "ugly babies," i.e., rough, unformed ideas:

"A new thing is hard to define; it's not attractive, and it requires protection. When I was a researcher at DARPA, I had protection for what was ill-defined. Every new idea in any field needs protection. Pixar is set up to protect our director's ugly baby." (Ed Catmull)

He goes on to point out that they must eventually stand on their own two feet if they are to step out of the sandbox, but that formative time is vital to their development. The mention of DARPA (the Defense Advanced Research Projects Agency), a research and development agency, highlights this outlook, with Bell Labs being one of its shining examples.
Its work has received ten Nobel Prizes and five Turing Awards over the years. As journalist Jon Gertner summarised in The Idea Factory: Bell Labs and the Great Age of American Innovation:

"It is now received wisdom that innovation and competitiveness are closely linked. But Bell Labs' history demonstrates that the truth is actually far more complicated: creative environments that foster a rich exchange of ideas are far more important in eliciting new insights than are the forces of competition." (Jon Gertner)

It's a long-term outlook. One Bell employee recalled: "When I first came, there was the philosophy: look, what you're doing might not be important for ten years or twenty years, but that's fine, we'll be there then." The cynic might say side project time is research and development for companies without the budget allocation. Even if there is some truth to that, I think the former speaks to a more entwined culture. It's not innovation over here with these people and business as usual over there with those other people. Side project time is also a cultural statement: you and your interests are valuable here. It encourages autonomy and innovation. If we only did OKRs with proven value, then original thinking would inevitably fade away. And let's be frank: even in purely Machiavellian terms, it benefits employers. You'll be rewarded with happier, more knowledgeable employees and higher retention. You may even wind up with a surprising new product.

Give It A Spin

Side project time is a slow burner but an invaluable thing to cultivate. Any readers in a position to try side project time will reap the benefits in time. Some of the best things in life come from idle tinkering. Let people do their thing. Give their ideas space to grow, and they will. And they might just be brilliant.

Further Reading

- "Side Project Programs Can Have Major Benefits for Employers" by Tammy Xu
- "What made Bell Labs special?" by Andrew Gelman (PDF)
- "Why Bell Labs Was So Important To Innovation In The 20th Century," Forbes
- "Google's '20% rule' shows exactly how much time you should spend learning new skills, and why it works," Dorie Clark
- Creativity, Inc. by Ed Catmull
  • On-Device AI: Building Smarter, Faster, And Private Applications
    smashingmagazine.com
It's not too far-fetched to say AI is a pretty handy tool that we all rely on for everyday tasks. It handles tasks like recognizing faces, understanding or cloning speech, analyzing large data, and creating personalized app experiences, such as music playlists based on your listening habits or workout plans matched to your progress. But here's the catch: where an AI tool actually lives and does its work matters a lot.

Take self-driving cars, for example. These types of cars need AI to process data from cameras, sensors, and other inputs to make split-second decisions, such as detecting obstacles or adjusting speed for sharp turns. Now, if all that processing depends on the cloud, network latency or connection issues could lead to delayed responses or system failures. That's why the AI should operate directly within the car. This ensures the car responds instantly without needing direct access to the internet.

This is what we call On-Device AI (ODAI). Simply put, ODAI means AI does its job right where you are (on your phone, your car, your wearable device, and so on) without a real need to connect to the cloud or internet in some cases. More precisely, this kind of setup is categorized as Embedded AI (EMAI), where the intelligence is embedded into the device itself.

Okay, I mentioned ODAI and then EMAI as a subset that falls under the umbrella of ODAI. However, EMAI is slightly different from other terms you might come across, such as Edge AI, Web AI, and Cloud AI. So, what's the difference? Here's a quick breakdown:

Edge AI: This refers to running AI models directly on devices instead of relying on remote servers or the cloud. A simple example of this is a security camera that can analyze footage right where it is. It processes everything locally and is close to where the data is collected.

Embedded AI: In this case, AI algorithms are built inside the device or hardware itself, so it functions as if the device has its own mini AI brain.
I mentioned self-driving cars earlier; another example is AI-powered drones, which can monitor areas or map terrains. One of the main differences between the two is that EMAI uses dedicated chips integrated with AI models and algorithms to perform intelligent tasks locally.

Cloud AI: This is when the AI lives on and relies on the cloud or remote servers. When you use a language translation app, the app sends the text you want to be translated to a cloud-based server, where the AI processes it and sends the translation back. The entire operation happens in the cloud, so it requires an internet connection to work.

Web AI: These are tools or apps that run in your browser or are part of websites or online platforms. You might see product suggestions that match your preferences based on what you've looked at or purchased before. However, these tools often rely on AI models hosted in the cloud to analyze data and generate recommendations.

The main difference? It's about where the AI does the work: on your device, nearby, or somewhere far off in the cloud or web.

What Makes On-Device AI Useful

On-device AI is, first and foremost, about privacy: keeping your data secure and under your control. It processes everything directly on your device, avoiding the need to send personal data to external servers (the cloud). So, what exactly makes this technology worth using?

Real-Time Processing

On-device AI processes data instantly because it doesn't need to send anything to the cloud. For example, think of a smart doorbell: it recognizes a visitor's face right away and notifies you. If it had to wait for cloud servers to analyze the image, there'd be a delay, which wouldn't be practical for quick notifications.

Enhanced Privacy And Security

Picture this: you are opening an app using voice commands or calling a friend and receiving a summary of the conversation afterward. Your phone processes the audio data locally, and the AI system handles everything directly on your device without the help of external servers.
This way, your data stays private, secure, and under your control.

Offline Functionality

A big win of ODAI is that it doesn't need the internet to work, which means it can function even in areas with poor or no connectivity. You can take modern GPS navigation systems in a car as an example; they give you turn-by-turn directions with no signal, making sure you still get where you need to go.

Reduced Latency

ODAI skips the round trip of sending data to the cloud and waiting for a response. This means that when you make a change, like adjusting a setting, the device processes the input immediately, making your experience smoother and more responsive.

The Technical Pieces Of The On-Device AI Puzzle

At its core, ODAI uses special hardware and efficient model designs to carry out tasks directly on devices like smartphones, smartwatches, and Internet of Things (IoT) gadgets. Thanks to advances in hardware technology, AI can now work locally, especially for tasks requiring AI-specific computer processing, such as the following:

Neural Processing Units (NPUs): These chips are specifically designed for AI and optimized for neural nets, deep learning, and machine learning applications.
They can handle large-scale AI training efficiently while consuming minimal power.

Graphics Processing Units (GPUs): Known for processing multiple tasks simultaneously, GPUs excel at speeding up AI operations, particularly with massive datasets.

Here's a look at some innovative AI chips in the industry:

Product | Organization | Key Features
Spiking Neural Network Chip | Indian Institute of Technology | Ultra-low power consumption
Hierarchical Learning Processor | Ceromorphic | Alternative transistor structure
Intelligent Processing Units (IPUs) | Graphcore | Multiple products targeting end devices and cloud
Katana Edge AI | Synaptics | Combines vision, motion, and sound detection
ET-SoC-1 Chip | Esperanto Technology | Built on RISC-V for AI and non-AI workloads
NeuRRAM | CEA-Leti | Biologically inspired neuromorphic processor based on resistive RAM (RRAM)

These chips, or AI accelerators, show different ways to make devices more efficient, use less power, and run advanced AI tasks.

Techniques For Optimizing AI Models

Creating AI models that fit resource-constrained devices often requires combining clever hardware utilization with techniques to make models smaller and more efficient. I'd like to cover a few choice examples of how teams are optimizing AI for increased performance using less energy.

Meta's MobileLLM

Meta's approach to ODAI introduced a model built specifically for smartphones. Instead of scaling traditional models, they designed MobileLLM from scratch to balance efficiency and performance. One key innovation was increasing the number of smaller layers rather than having fewer large ones. This design choice improved the model's accuracy and speed while keeping it lightweight. You can try out the model either on Hugging Face or using vLLM, a library for LLM inference and serving.

Quantization

This simplifies a model's internal calculations by using lower-precision numbers, such as 8-bit integers, instead of 32-bit floating-point numbers.
Quantization significantly reduces memory requirements and computation costs, often with minimal impact on model accuracy.

Pruning

Neural networks contain many weights (connections between neurons), but not all are crucial. Pruning identifies and removes less important weights, resulting in a smaller, faster model without significant accuracy loss.

Matrix Decomposition

Large matrices are a core component of AI models. Matrix decomposition splits these into smaller matrices, reducing computational complexity while approximating the original model's behavior.

Knowledge Distillation

This technique involves training a smaller model (the student) to mimic the outputs of a larger, pre-trained model (the teacher). The smaller model learns to replicate the teacher's behavior, achieving similar accuracy while being more efficient. For instance, DistilBERT successfully reduced BERT's size by 40% while retaining 97% of its performance.

Technologies Used For On-Device AI

Well, all the model compression techniques and specialized chips are cool because they're what make ODAI possible. But what's even more interesting for us as developers is actually putting these tools to work. This section covers some of the key technologies and frameworks that make ODAI accessible.

MediaPipe Solutions

MediaPipe Solutions is a developer toolkit for adding AI-powered features to apps and devices. It offers cross-platform, customizable tools that are optimized for running AI locally, from real-time video analysis to natural language processing. At the heart of MediaPipe Solutions is MediaPipe Tasks, a core library that lets developers deploy ML solutions with minimal code.
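As an aside, the 8-bit quantization technique covered earlier can be sketched without any ML framework at all. The following is a hypothetical, simplified illustration (not MediaPipe, LiteRT, or any real toolkit's API): it maps float weights to int8 values using an affine scale and zero point, then maps them back, so you can see that the round-trip error stays within one quantization step.

```python
# Hypothetical sketch of post-training 8-bit affine quantization.
# Real runtimes (e.g., LiteRT) do this per-tensor or per-channel; this
# toy version works on a plain Python list of floats.

def quantize_int8(weights):
    """Map a list of floats to int8 values plus (scale, zero_point)."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0          # size of one int8 step in float units
    zero_point = round(-128 - lo / scale)   # integer offset so `lo` maps near -128
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the int8 representation."""
    return [(v - zero_point) * scale for v in q]

weights = [0.12, -0.5, 0.33, 0.99, -0.07]
q, scale, zp = quantize_int8(weights)
restored = dequantize(q, scale, zp)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
# Rounding the weights and the zero point each cost at most half a step,
# so the reconstruction error is bounded by one quantization step.
assert max_err <= scale
```

The storage win is the point: each weight shrinks from 4 bytes (float32) to 1 byte (int8), a 4x reduction, at the cost of the small `max_err` shown above.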
It's designed for platforms like Android, Python, and Web/JavaScript, so you can easily integrate AI into a wide range of applications. MediaPipe also provides various specialized tasks for different AI needs:

LLM Inference API: This API runs lightweight large language models (LLMs) entirely on-device for tasks like text generation and summarization. It supports several open models like Gemma and external options like Phi-2.

Object Detection: The tool helps you identify and locate objects in images or videos, which is ideal for real-time applications like detecting animals, people, or objects right on the device.

Image Segmentation: MediaPipe can also segment images, such as isolating a person from the background in a video feed, allowing it to separate objects in both single images (like photos) and continuous video streams (like live video or recorded footage).

LiteRT

LiteRT, or Lite Runtime (previously called TensorFlow Lite), is a lightweight, high-performance runtime designed for ODAI. It supports running pre-trained models or converting TensorFlow, PyTorch, and JAX models to a LiteRT-compatible format using AI Edge tools.

Model Explorer

Model Explorer is a visualization tool that helps you analyze machine learning models and graphs. It simplifies the process of preparing these models for on-device AI deployment, letting you understand the structure of your models and fine-tune them for better performance. You can use Model Explorer locally or in Colab for testing and experimenting.

ExecuTorch

If you're familiar with PyTorch, ExecuTorch makes it easy to deploy models to mobile, wearables, and edge devices. It's part of the PyTorch Edge ecosystem, which supports building AI experiences for edge devices like embedded systems and microcontrollers.

Large Language Models For On-Device AI

Gemini is a powerful AI model that doesn't just excel at processing text or images. It can also handle multiple types of data seamlessly. The best part?
It's designed to work right on your devices. For on-device use, there's Gemini Nano, a lightweight version of the model. It's built to perform efficiently while keeping everything private. What can Gemini Nano do?

Call Notes on Pixel devices: This feature creates private summaries and transcripts of conversations. It works entirely on-device, ensuring privacy for everyone involved.

Pixel Recorder app: With the help of Gemini Nano and AICore, the app provides an on-device summarization feature, making it easy to extract key points from recordings.

TalkBack: Enhances the accessibility feature on Android phones by providing clear descriptions of images, thanks to Nano's multimodal capabilities. Note: It's similar to an application we built using LLaVA in a previous article.

Gemini Nano is far from the only language model designed specifically for ODAI. I've collected a few others that are worth mentioning:

Model | Developer | Research Paper
Octopus v2 | NexaAI | On-device language model for super agent
OpenELM | Apple ML Research | A significant large language model integrated within iOS to enhance application functionalities
Ferret-v2 | Apple | Ferret-v2 significantly improves upon its predecessor, introducing enhanced visual processing capabilities and an advanced training regimen
MiniCPM | Tsinghua University | A GPT-4V Level Multimodal LLM on Your Phone
Phi-3 | Microsoft | Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone

The Trade-Offs Of Using On-Device AI

Building AI into devices can be exciting and practical, but it's not without its challenges. While you may get a lightweight, private solution for your app, there are a few compromises along the way. Here's a look at some of them:

Limited Resources

Phones, wearables, and similar devices don't have the same computing power as larger machines. This means AI models must fit within limited storage and memory while running efficiently.
Additionally, running AI can drain the battery, so the models need to be optimized to balance power usage and performance.

Data And Updates

AI in devices like drones, self-driving cars, and other similar devices processes data quickly, using sensors or lidar to make decisions. However, these models, or the system itself, don't usually get real-time updates or additional training unless they are connected to the cloud. Without these updates and regular model training, the system may struggle with new situations.

Biases

Biases in training data are a common challenge in AI, and ODAI models are no exception. These biases can lead to unfair decisions or errors, like misidentifying people. For ODAI, keeping these models fair and reliable means not only addressing these biases during training but also ensuring the solutions work efficiently within the device's constraints. These aren't the only challenges of on-device AI. It's still a new and growing technology, and the small number of professionals in the field makes it harder to implement.

Conclusion

Choosing between on-device and cloud-based AI comes down to what your application needs most. Here's a quick comparison to make things clear:

Aspect | On-Device AI | Cloud-Based AI
Privacy | Data stays on the device, ensuring privacy. | Data is sent to the cloud, raising potential privacy concerns.
Latency | Processes instantly with no delay. | Relies on internet speed, which can introduce delays.
Connectivity | Works offline, making it reliable in any setting. | Requires a stable internet connection.
Processing Power | Limited by device hardware. | Leverages the power of cloud servers for complex tasks.
Cost | No ongoing server expenses. | Can incur continuous cloud infrastructure costs.

For apps that need fast processing and strong privacy, ODAI is the way to go. On the other hand, cloud-based AI is better when you need more computing power and frequent updates. The choice depends on your project's needs and what matters most to you.
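To close the loop on the model-optimization techniques discussed in this article, the pruning idea is also easy to demonstrate in isolation. This is a hypothetical, framework-free sketch of magnitude pruning (not taken from any real toolkit): it zeroes out the smallest-magnitude weights so the tensor becomes sparse, which is what makes the pruned model smaller and cheaper to run.

```python
# Hypothetical sketch of magnitude pruning: zero out the least important
# (smallest-magnitude) weights while leaving the large ones untouched.

def prune_by_magnitude(weights, sparsity):
    """Return a copy of `weights` with the smallest `sparsity` fraction zeroed."""
    k = int(len(weights) * sparsity)                       # how many weights to drop
    ranked = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    drop = set(ranked[:k])                                 # indices of the k smallest
    return [0.0 if i in drop else w for i, w in enumerate(weights)]

weights = [0.8, -0.02, 0.05, -0.9, 0.001, 0.4, -0.03, 0.6]
pruned = prune_by_magnitude(weights, sparsity=0.5)
# Half the weights are now exactly zero; the large ones survive untouched.
assert pruned.count(0.0) == 4
assert pruned[0] == 0.8 and pruned[3] == -0.9
```

Production pruning is done iteratively during or after training (prune, then fine-tune to recover accuracy), and sparse tensors are stored in compressed formats so the zeros cost nothing; this sketch only shows the selection step.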
  • The Role Of Illustration Style In Visual Storytelling
    smashingmagazine.com
Illustration has been used for 10,000 years. One of the first ever recorded drawings, a hand silhouette found in Spain, is more than 66,000 years old. Fast forward to the introduction of the internet, around 1997, and illustration has gradually increased in use. Popular examples of this are Google's daily doodles and the Red Bull energy drink, both of which use funny cartoon illustrations and animations to great effect. Typically, illustration was done using pencils, chalk, pens, etchings, and paints. But now everything is possible: you can do both analog and digital or mixed-media styles. As an example, although photography might be the most popular method to communicate visuals, it is not automatically the best default solution. Illustration offers a wider range of styles that help companies engage and communicate with their audience. Good illustrations create a mood and bring to life ideas and concepts from the text. To put it another way: visualisation. Good illustrations can also help give life to information in a better way than just using text, numbers, or tables. How do we determine what kind of illustration or style would be best? How should illustration complement or echo your corporate identity? What will your main audience prefer? What about the content: what would suit and highlight the content best, and how would it work for the age range it is primarily for? Before we dive into the examples, let's discuss the qualities of good illustration and the importance of understanding your audience. The rubric below will help you make good choices for your audience's benefit.
What Makes A Good Illustration

- Visualises something from the content (something that does not exist or has been described but not visualised).
- Must be aesthetically pleasing, interesting, and stimulating to look at (needs to have qualities and harmonies between colour, elements, proportions, and subject matter).
- Must have a feel, mood, dramatic edge, or attitude (needs to create a feeling and describe or bring to life an environment).
- The illustration should enhance and bring to life what is described in text and word form.
- Explains or unpacks what is written in any surrounding text and makes it come to life in an unusual and useful way (the illustration should complement and illuminate the content so readers better understand it).

Just look at what we are more often than not presented with.

The Importance Of Knowing About Different Audiences

It is really important to know and consider different audiences. Not all of us are the same or have the same physical, cognitive, educational, or financial resources. Our writing, designs, and illustrations need to take into account users' make-up and capabilities. There are some common categories of audiences:

- Child,
- Teenager,
- Middle-aged,
- Ageing,
- Prefers a certain style (goth, retro, modern, old-fashioned, sporty, branded).

Below are interesting examples of illustrations, in no particular order, that show how different styles communicate and echo different qualities and affect mood and tone.

Watercolour

Good for formal, classy, and sophisticated imagery that also lends itself to imaginative expression. It is a great example of texture and light that delivers a really humane and personal feel that you would not automatically get by using software.

Strengths: Feeling, emotion, and a sense of depth and texture.

Drawing With Real-Life Objects

A great option for highly abstract concepts and compositions with a funny, unusual, and unreal aspect.
You can do some really striking and clever stuff with this style to engage readers in your content.

Strengths: Conceptual play.

Surreal Photomontage

Perfect for abstract hybrid illustration and photo illustration with a surreal fantasy aspect. This is a great example of merging different imagery together to create a really dramatic, scary, and visually arresting new image that fits the musician's work as well.

Strengths: Conceptual mixing and merging, leading to new, unseen imagery.

Cartoon

Well-suited for showing fun or humorous aspects, creating concepts with loads of wit and cleverness. New messages and forms of communication can be created with this style.

Strengths: Conceptual.

Cartoon With Block Colour

Works well for showing fun, quirky, or humorous aspects and concepts, often with loads of wit and cleverness. The simplicity of the style can be quite good for people who struggle with more advanced imagery concepts, making it quite accessible.

Strengths: Simplicity and unclutteredness.

Clean Vector

Designed for clean and clear illustrations that are all-encompassing and durable. Due to the nature of this illustration style, it works quite well for a wide range of people, as it is not overly stylistic in one direction or another.

Strengths: Realism, conceptual, and widely pleasing.

Textured Vintage Clean Vector

Best suited for imagining rustic imagery, echoing a vintage feel. This is a great example of how texture and non-cleanliness can create and enhance the feeling of the imagery; it is very Western and old-fashioned, perfect for the core meaning of the illustration.

Strengths: Aged feeling and rough impression.

Pictogram

Highly effective for clean, legible, quickly recognizable imagery and concepts, especially at small sizes as well.
It is no surprise that many pictograms are to be seen in quick-viewing environments such as airports, showing imagery that has to work for a wide range of people.

Strengths: Legibility and speed of comprehension (accessibility).

Abstract Geometric

A great option for visually attractive and abstract imagery and concepts. This style lends itself to much customising and experimentation from the illustrator, giving some really cool and visually striking results.

Strengths: Visual stimulation and curiosity.

Lithography Etching

Ideal for imagery that has an old, historic, and traditional feel. Has a great feel achieved through sketchy markings, etchings, and a greyscale colour palette. You would not automatically get this from software, but given the right context, or maybe an unusual juxtaposed context (like the clash against a modern, clean, fashionable corporate identity), it could work really well.

Strengths: Realism and old tradition.

3D Gradient

A great choice for highly realistic illustration with a friendly, widely accessible character element. This style is not overly stylistic and lends itself to being accepted by a wider range of people.

Strengths: Widely acceptable and appropriate.

Sci-Fi Comic Book And Pop Art

Especially useful for high-impact, bright, animated, and colourful concepts. Some really cool, almost animated graphic communication can be created with this style, which can also be put to much humorous use. The boldness and in-your-face style promote visual engagement.

Strengths: Animation.

Tattoo

Well-suited for bold, block-coloured silhouettes and imagery. It is bold and impactful, and there is still loads of detail there, creating a really cool and sharp illustration. The illustration works well in black and white and would be further enhanced with colour.

Strengths: Directness and clarity.

Pencil

Perfect for humane, detailed imagery with plenty of feeling and character.
The sketchy style highlights unusual details and lends itself to an imaginative feeling and imagery.

Strengths: A humane, detailed, imaginative feeling.

Gradient

Especially useful for highly imaginative and fantasy imagery. By using gradients and a light-to-dark colour palette, the imagery really has depth and says, "Take me away on a journey."

Strengths: Fantasy (through depth of colour) and a clean feeling.

Charcoal

An excellent option for giving illustration a humane and tangible feel, with echoes of old historical illustrations. The murky black-and-white illustration really has an atmosphere to it.

Strengths: A humane and detailed feeling.

Woodcut

Offers great value for block silhouette imagery that has presence, sharpness, and impact. Is colour even needed? The black against the light background goes a long way toward communicating the imagery.

Strengths: Striking and clear.

Fashion

A great option for imagery that has motion and flare to it, with a slight feminine feel. No wonder this style of illustration is used for fashion illustrations; it is great for expressing lines and colours with motion and has a real fashion-runway flare.

Strengths: Motion and expressive flare.

Caricature

Ideal for humorous imagery and illustration with a graphic edge and clarity. The layering of light and dark elements really creates an illustration with depth, perfect for playing with the detail of the character, not something you would automatically get from a clean vector illustration. It has received more thought and attention than clean vector illustration typically does.

Strengths: Detail and humour.

Paint

A great choice for traditional, romantic imagery that has loads of detail, texture, and depth of feeling. The rose flowers are a good example of this illustration style because they have so much detail and so many colour shades.

Strengths: Tradition and emotions.

Chalk

Well-suited for highly sketchy imagery used to sketch out an idea or working concept.
The white lines against the black background have an almost animated effect and give the illustrations real movement and life. This style is a good example of using pure lines in illustration, but to great effect.

Strengths: Hand-realised and animated.

Illustration Sample Card

How To Start Doing Illustration

There are plenty of options, such as using pencils, chalk, pens, etchings, and paints, then possibly scanning the results in. You can also use software like Illustrator, Photoshop, Procreate, Corel Painter, Sketch, Inkscape, or Figma. But no matter what tools you choose, there's one essential ingredient you'll always need, and that is a mind and vision for illustration.

Recommended Resources

- Association of Illustrators
- "20 Best Illustration Agents In The UK, And The Awesome Illustrators They Represent," Tom May
- It's Nice That
- Behance Illustration
  • Solo Development: Learning To Let Go Of Perfection
    smashingmagazine.com
As expected from anyone who has ever tried building anything solo, my goal was not to build an app but the app: the one app that's so good you wonder how you ever survived without it. I had everything in place: wireframes, a to-do list, a project structure, you name it. Then I started building. Just not the product. I started with the landing page for it, which took me four days, and I hadn't even touched the app's core features yet. The idea itself was so good I had to start marketing it right away!

I found myself making every detail perfect: every color, shadow, gradient, font size, margin, and padding had to be spot on. I don't even want to say how long the logo took.

Spoiler: No one cares about your logo.

Why did I get so stuck on something that was never even part of the core app I wanted so badly to build? Why wasn't I nagging myself to move on when I clearly needed to? The reality of solo development is that there is no one to tell you when to stop or simply say, "Yo, this is good enough! Move on." Most users don't care whether a login button is yellow or green. What they want (and need) is a button that works and solves their problem when they click it.

Test Early And Often
Unnecessary tweaks, indecisive UI decisions, and perfectionism are the core reasons I spend more time on things than necessary. Like most solo developers, I also started with the hope of pushing out builds with the efficiency of a large-scale team. But it is easier said than done. When building solo, you start coding, then you maybe notice a design flaw, and you switch to fixing it, then a bug appears, and you try fixing that, and voilà, the day is gone. There comes a time when it hits you: "You know what? It's time to build messy."
That's when good intentions of project and product management go out the window, and that's when I find myself working by the seat of my pants rather than plowing forward with defined goals and actionable tasks that are based on good UI/UX principles, like storyboards, user personas, and basic prioritization. This realization is something you have to experience to grasp fully. The trick I've learned is to focus on getting something out there for people to see and then work on actual feedback. In other words, it's more important to get the idea out there and iterate on it than to reach for perfection right out of the gate.

Because guess what? Even if you have the greatest app idea in the world, you're never going to make it perfect until you start receiving feedback on it. You're no mind reader (as much as we all want to be one), and some insights, often the most relevant ones, can only be received through real user feedback and analytics. Sure, your early assumptions may be correct, but how do you know until you ship them and start evaluating them? Nowadays, I like to tell others (and myself) to work from hypotheses instead of absolutes. Make an assertion, describe how you intend to test it, and then ship it. With that, you can gather relevant insights that you can use to get closer to perfection, whatever that is.

Strength In Recognizing Weakness
Let's be real: Building a full application on your own is not an easy feat. I'd say it's like trying to build a house by yourself; it seems doable, but the reality is that it takes a lot more hands than the ones you have to make it happen. And not only to make it happen but to make it happen well. There's only so much one person can do, and admitting your strengths and weaknesses up front will serve you well by helping you avoid the trap of believing you can do it all alone. I once attempted to build a project management app alone. I knew it might be difficult, but I was confident.
Within a few days, this simple project grew legs and expanded with new features like team collaboration, analytics, time tracking, and custom reports, many of which I was super excited to make. Building a full app takes a lot of time. Think about it; you're doing the work of a team all alone without any help. There's no one to provide you with design assets, content, or back-end development. No stakeholder to swoop in and poop on your ideas (which might be a good thing). Every decision, every line of code, and every design element is 100% on you alone.

It is technically possible to build a full-featured app solo, but when you think about it, there's a reason why the concept of an MVP exists. Take Instagram, for example; it wasn't launched with reels, stories, creator insights, and so on. It started with one simple thing: photo sharing. All I'm trying to say is start small, launch, and let users guide the evolution of the product. And if you can recruit more hands to help, that would be even better. Just remember to leverage your strengths and reinforce your weaknesses by leaning on other people's strengths.

Yes, Think Like An MVP
The concept of a minimum viable product (MVP) has always been fascinating to me. In its simplest form, it means building the basic version of your idea that technically works and getting it in front of users. Yes, this is such a straightforward and widely distributed tip, but it's still one of the hardest principles for solo developers to follow, particularly for me. I mentioned earlier that my genius app idea grew legs. And lots of them. I had more ideas than I knew what to do with, and I hadn't even written a reasonable amount of code! Sure, this app could be enhanced to support Face ID, dark mode, advanced security, real-time results, and a bunch of other features.
But all these could take months of development for an app that you're not even certain users want. I've learned to ask myself: "What would this project look like if it was easy to build?" It's so surreal how the answer almost always aligns with what users want. If you can distill your grand idea into a single indispensable idea that does one or two things extremely well, I think you'll find, as I have, that the final result is laser-focused on solving real user problems. Ship the simplest version first. Dark mode can wait. All you need is a well-defined idea, a hypothesis to test, and a functional prototype to validate that hypothesis; anything else is probably noise.

Handle Imperfection Gracefully
You may have heard about the "Ship It Fast" approach to development and instantly recognize the parallels between it and what I've discussed so far. In a sense, "Ship It Fast" is ultimately another way of describing an MVP: get the idea out fast and iterate on it just as quickly. Some might disagree with the ship-fast approach and consider it reckless and unprofessional, which is understandable because, as developers, we care deeply about the quality of our work. However, the ship-fast mentality is not to ignore quality but to push something out ASAP and learn from real user experiences. Ship it now, perfect it later.

That's why I like to tell other developers that shipping an MVP is the safest, most professional way to approach development. It forces you to stay in scope and on task without succumbing to your whimsies. I even go so far as to make myself swear an "Oath of Focus" at the start of every project.

I, Vayo, hereby solemnly swear (with one hand on this design blueprint) to make no changes, no additions, and no extra features until this app is fully built in all its MVP glory.
I pledge to avoid the temptations of endless tweaking and the thoughts of "just one more feature." Only when a completed prototype is achieved will I consider any new features, enhancements, or tweaks.

Signed,
Vayo, Keeper of the MVP

Remember, there's no one there to hold you accountable when you develop on your own. Taking a brief moment to pause and accepting that my first version won't be flawless helps put me in the right headspace early in the project.

Prioritize What Matters
I have noticed that no matter what I build, there are always going to be bugs. Always. If Google still has bugs in the Google Notes app, then trust me, it's fine for a solo developer to accept that bugs will always be a part of any project. Look at flaky tests, for instance. You could run a test over 1,000 times and get all greens, and then the next day, you run the same test and an error shows. It's just the nature of software development. And as for endlessly adding features, it never ends either. There's always going to be a new feature that you're excited about. The challenge is to curb some of that enthusiasm and shelve it responsibly for a later time when it makes sense to work on it.

I've learned to categorize bugs and features into two types: intrusive and non-intrusive. Intrusive ones are those that prevent the project from functioning properly until fixed, like crashes and serious errors. The non-intrusive items are silent ones. Sure, they should be fixed, but the product will work just fine and won't prevent users from getting value if they aren't addressed right away. You may want to categorize your bugs and features in other ways, and I've seen plenty of other examples, including:

High value, low value;
High effort, low effort;
High cost, low cost;
Need to have, nice to have.

I've even seen developers and teams use these categorizations to create some fancy priority score that considers each category.
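As a loose illustration of what such a score might look like, here is a small sketch. The categories, weights, and numbers below are my own illustrative assumptions, not a formula from any team mentioned here:

```javascript
// Hypothetical priority score: combines estimated value, estimated effort,
// and whether an item is "intrusive" (blocks the product from working)
// into a single number for ranking a backlog. Weights are illustrative.
function priorityScore({ value, effort, intrusive }) {
  // value and effort are rough 1 (low) to 5 (high) estimates.
  const base = value / effort;        // favor high value, low effort
  return intrusive ? base * 2 : base; // intrusive bugs jump the queue
}

const backlog = [
  { name: "crash on login", value: 5, effort: 2, intrusive: true },
  { name: "dark mode", value: 2, effort: 4, intrusive: false },
];

// Sort descending so the highest-priority item comes first.
backlog.sort((a, b) => priorityScore(b) - priorityScore(a));
console.log(backlog[0].name); // "crash on login"
```

The exact weights matter far less than having any consistent rule that stops you from re-litigating priorities every morning.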
Whatever it is that helps you stay focused and on task is going to be the right approach for you, more than which specific category you use.

Live With Your Stack
Here's a classic conundrum in development circles: Should I use React? Or NextJS? Or wait, how about Vue? I heard it's more optimized. But hold on, I read that React Redux is dead and that Zustand is the new hot tool. And just like that, you've spent an entire day thinking about nothing but the tech stack you're using to build the darn thing.

We all know that an average user couldn't care less about the tech stack under the hood. Go ahead and ask your mom what tech stack WhatsApp is built on, and let me know what she says. Most times, it's just us who obsess about tech stacks, and that usually only happens when we're asked to check under the hood. I have come to accept that there will always be new tech stacks released every single day with the promise of 50% better performance and 10% less code. That new tool might scale better, but do I actually have a scaling problem with my current number of zero users? Probably not.

My advice: Pick the tools you work with best and stick to those tools until they start working against you. There's no use fighting something early if something you already know and use gets the job done. Basically, don't prematurely optimize or constantly chase the latest shiny object.

Do Design Before The First Line Of Code
I know lots of solo developers out there suck at design, and I'm probably among the top 50. My design process has traditionally been to open VS Code, create a new project, and start building the idea in whatever way comes to mind. No design assets, comps, or wireframes to work with; just pure, unstructured improvisation. That's not a good idea, and it's a habit I'm actively trying to break. These days, I make sure to have a blueprint of what I'm building before I start writing code.
Once I have that, I make sure to follow through and not change anything, in order to respect my Oath of Focus. I like how many teams call comps and wireframes "project artifacts." They are pieces of evidence that provide a source of truth for how something looks and works. You might be the sort of person who works better with sets of requirements, and that's totally fine. But having some sort of documentation that you can point back to in your work is like having turn-by-turn navigation on a long road trip; it's indispensable for getting where you need to go.

And what if you're like me and don't pride yourself on being the best designer? That's another opportunity to admit your weaknesses up front and recruit help from someone with those strengths. That way, you can articulate the goal and focus on what you're good at.

Give Yourself Timelines
Personally, without deadlines, I'm almost unstoppable at procrastinating. I've started setting time limits when building any project, as it helps with procrastination and makes sure something is pushed out at a specified time. This won't work without accountability, though; I feel the two work hand in hand. I set a 2-3 week deadline to build a project. And no matter what, as soon as that time is up, I must post or share the work in its current state on my socials. Because of this, I'm not in my comfort zone anymore: I won't want to share a half-baked project with the public, so I'm conditioned to work faster and get it all done. It's interesting to see the lengths you can go to if you can trick your brain.

I realize that this is an extreme constraint, and it may not work for you. I'm just the kind of person who needs to know what my boundaries are. Setting deadlines and respecting them makes me a more disciplined developer.
More than that, it makes me work efficiently because I stop overthinking things when I know I have a fixed amount of time, and that leads to faster builds.

Conclusion
The best and worst thing about solo development is the solo part. There's a lot of freedom in working alone, and that freedom can be inspiring. However, all that freedom can be intoxicating, and if left unchecked, it becomes a debilitating hindrance to productivity and progress. That's a good reason why solo development isn't for everyone. Some folks will respond a lot better to a team environment. But if you are a solo developer, then I hope my personal experiences are helpful to you. I've had to look hard at myself in the mirror many days to come to realize that I am not a perfect developer who can build the perfect app alone. It takes planning, discipline, and humility to make anything, especially the right app that does exactly the right thing. Ideas are cheap and easy, but stepping out of our freedom and adding our own constraints based on progress over perfection is the secret sauce that keeps us moving and spending our time on those essential things.

Further Reading On SmashingMag
"What's The Perfect Design Process?," Vitaly Friedman
"Design Under Constraints: Challenges, Opportunities, And Practical Strategies," Paul Boag
"Improving The Double Diamond Design Process," Andy Budd
"Unexpected Learnings From Coding Artwork Every Day For Five Years," Saskia Freeke
  • Tight Mode: Why Browsers Produce Different Performance Results
    smashingmagazine.com
This article is sponsored by DebugBear.

I was chatting with DebugBear's Matt Zeunert and, in the process, he casually mentioned this thing called Tight Mode when describing how browsers fetch and prioritize resources. I wanted to nod along like I knew what he was talking about but ultimately had to ask: What the heck is "Tight Mode"? What I got back were two artifacts, one of them being the following video of Akamai web performance expert Robin Marx speaking at We Love Speed in France a few weeks ago.

Tight Mode discriminates resources, taking anything and everything marked as High and Medium priority. Everything else is constrained and left on the outside, looking in until the body is firmly attached to the document, signaling that blocking scripts have been executed. It's at that point that resources marked with Low priority are allowed in the door during the second phase of loading. There's a big caveat to that, but we'll get there. The important thing to note is that both Chrome and Safari enforce it.

Chrome And Safari Enforce Tight Mode
Yes, both Chrome and Safari have some working form of Tight Mode running in the background. That last image illustrates Chrome's Tight Mode. Let's look at Safari's next and compare the two. Look at that! Safari discriminates High-priority resources in its initial fetch, just like Chrome, but we get wildly different loading behavior between the two browsers. Notice how Safari appears to exclude the first five PNG images marked with Medium priority where Chrome allows them. In other words, Safari makes all Medium- and Low-priority resources wait in line until all High-priority items are done loading, even though we're working with the exact same HTML. You might say that Safari's behavior makes the most sense, as you can see in that last image that Chrome seemingly excludes some High-priority resources from Tight Mode. There's clearly some tomfoolery happening there that we'll get to. Where's Firefox in all this?
It doesn't take any extra tightening measures when evaluating the priority of the resources on a page. We might consider this the classic waterfall approach to fetching and loading resources.

Chrome And Safari Trigger Tight Mode Differently
Robin makes this clear as day in his talk. Chrome and Safari are both Tight Mode proponents, yet they trigger it under differing circumstances that we can outline like this:

Chrome: Tight Mode is triggered while blocking JS in the <head> is busy.
Safari: Tight Mode is triggered while blocking JS or CSS anywhere is busy.

Notice that Chrome only looks at the document <head> when prioritizing resources, and only when it involves JavaScript. Safari, meanwhile, looks at JavaScript but at CSS as well, and anywhere those things might be located in the document, regardless of whether it's in the <head> or <body>. That helps explain why Chrome excludes images marked as High priority in Figure 2 from its Tight Mode implementation; it only cares about JavaScript in this context. So, even if Chrome encounters a script file with fetchpriority="high" in the document body, the file is not considered a High priority, and it will be loaded after the rest of the items. Safari, meanwhile, honors fetchpriority anywhere in the document. This helps explain why Chrome leaves two scripts on the table, so to speak, in Figure 2, while Safari appears to load them during Tight Mode. That's not to say Safari isn't doing anything weird in its process.
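The trigger rules described above can be summed up in a short sketch. To be clear, this is my own encoding of Robin's description, not actual browser code, and the object shape is made up for illustration:

```javascript
// Encodes the Tight Mode trigger rules as described in this article:
// Chrome only cares about blocking JS in the <head>; Safari cares about
// blocking JS or CSS anywhere; Firefox takes no tightening measures.
function triggersTightMode(browser, { type, location }) {
  if (browser === "chrome") {
    return type === "js" && location === "head";
  }
  if (browser === "safari") {
    return type === "js" || type === "css";
  }
  // Firefox (and anything else here): classic waterfall, no Tight Mode.
  return false;
}

console.log(triggersTightMode("chrome", { type: "js", location: "head" }));  // true
console.log(triggersTightMode("chrome", { type: "css", location: "head" })); // false
console.log(triggersTightMode("safari", { type: "css", location: "body" })); // true
```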
Given the following markup:

<head>
  <!-- two high-priority scripts -->
  <script src="script-1.js"></script>
  <script src="script-2.js"></script>
  <!-- two low-priority scripts -->
  <script src="script-3.js" defer></script>
  <script src="script-4.js" defer></script>
</head>
<body>
  <!-- five low-priority images -->
  <img src="image-1.jpg">
  <img src="image-2.jpg">
  <img src="image-3.jpg">
  <img src="image-4.jpg">
  <img src="image-5.jpg">
</body>

you might expect that Safari would delay the two Low-priority scripts in the <head> until the five images in the <body> are downloaded. But that's not the case. Instead, Safari loads those two scripts during its version of Tight Mode.

Chrome And Safari Exceptions
I mentioned earlier that Low-priority resources are loaded during the second phase of loading, after Tight Mode has been completed. But I also mentioned that there's a big caveat to that behavior. Let's touch on that now. According to Patrick's article, we know that Tight Mode is the initial phase and constrains loading lower-priority resources until the body is attached to the document (essentially, after all blocking scripts in the head have been executed). But there's a second part to that definition that I left out:

"In tight mode, low-priority resources are only loaded if there are less than two in-flight requests at the time that they are discovered."

A-ha! So, there is a way for low-priority resources to load in Tight Mode: it's when there are less than two in-flight requests happening when they're detected. Wait, what does "in-flight" even mean? It means that less than two High- or Medium-priority items are being requested at that moment.
Robin demonstrates this by comparing Chrome to Safari under the same conditions, where there are only two High-priority scripts and ten regular images in the mix:

<head>
  <!-- two high-priority scripts -->
  <script src="script-1.js"></script>
  <script src="script-2.js"></script>
</head>
<body>
  <!-- ten low-priority images -->
  <img src="image-1.jpg">
  <img src="image-2.jpg">
  <img src="image-3.jpg">
  <img src="image-4.jpg">
  <img src="image-5.jpg">
  <!-- rest of images -->
  <img src="image-10.jpg">
</body>

Let's look at what Safari does first because it's the most straightforward approach: Nothing tricky about that, right? The two High-priority scripts are downloaded first, and the ten images flow in right after. Now let's look at Chrome: We have the two High-priority scripts loaded first, as expected. But then Chrome decides to let in the first five images with Medium priority, then excludes the last five images with Low priority. What. The. Heck.

The reason is a noble one: Chrome wants to load the first five images because, presumably, the Largest Contentful Paint (LCP) element is often going to be one of those images, and Chrome is hedging its bets that the web will be faster overall if it automatically handles some of that logic. Again, it's a noble line of reasoning, even if it isn't going to be 100% accurate. It does muddy the waters, though, and makes understanding Tight Mode a lot harder when we see Medium- and Low-priority items treated as High-priority citizens. Even muddier is that Chrome appears to only accept up to two Medium-priority resources in this discriminatory process. The rest are marked with Low priority. That's what we mean by "less than two in-flight requests": if Chrome sees that only one or two items are entering Tight Mode, then it automatically prioritizes up to the first five non-critical images as an LCP optimization effort. Truth be told, Safari does something similar, but in a different context.
Instead of accepting Low-priority items when there are less than two in-flight requests, Safari accepts both Medium- and Low-priority items in Tight Mode, and from anywhere in the document, regardless of whether they are located in the <head> or not. The exception is any asynchronous or deferred script because, as we saw earlier, those get loaded right away anyway.

How To Manipulate Tight Mode
This might make for a great follow-up article, but this is where I'll refer you directly to Robin's video because his first-person research is worth consuming directly. But here's the gist: We have these high-level features that can help influence priority, including resource hints (i.e., preload and preconnect), the Fetch Priority API, and lazy-loading techniques. We can indicate fetchpriority="high" and fetchpriority="low" on items:

<img src="lcp-image.jpg" fetchpriority="high">
<link rel="preload" href="defer.js" as="script" fetchpriority="low">

Using fetchpriority="high" is one way we can get items lower in the source included in Tight Mode. Using fetchpriority="low" is one way we can get items higher in the source excluded from Tight Mode. For Chrome, this works on images, asynchronous/deferred scripts, and scripts located at the bottom of the <body>. For Safari, this only works on images. Again, watch Robin's talk for the full story, starting around the 28:32 marker.

That's Tight Mode
It's bonkers to me that there is so little information about Tight Mode floating around the web. I would expect something like this to be well-documented somewhere, certainly over at Chrome Developers or somewhere similar, but all we have is a lightweight Google Doc and a thorough presentation to paint a picture of how two of the three major browsers fetch and prioritize resources. Let me know if you have additional information that you've either published or found; I'd love to include it in the discussion.
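As a closing illustration of the manipulation tips above, here is a tiny sketch that stamps out preload hints with an explicit fetch priority. The helper is my own illustrative code, not part of any browser API or of Robin's talk, and it omits attribute escaping for brevity (inputs are assumed to be trusted literals):

```javascript
// Illustrative helper: builds a <link rel="preload"> hint string with an
// explicit fetchpriority, matching the markup pattern shown in the article.
function preloadHint(href, as, priority) {
  return `<link rel="preload" href="${href}" as="${as}" fetchpriority="${priority}">`;
}

console.log(preloadHint("defer.js", "script", "low"));
// <link rel="preload" href="defer.js" as="script" fetchpriority="low">
```

Emitting hints like this from a template or build step is one way to keep priority decisions in one place instead of scattering fetchpriority attributes by hand.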