• A generative model for inorganic materials design
    www.nature.com
    Nature, Published online: 16 January 2025; doi:10.1038/s41586-025-08628-5.
  • Argyria: The rare disease that turns people blue
    www.livescience.com
    Argyria is caused by a buildup of silver in the body, which discolors the skin.
  • Seamless Procedural Permafrost Loop
    v.redd.it
    submitted by /u/mcnull
  • Remedy shared they discovered one of the bugs in Alan Wake 2 thanks to a speedrun. The team promised not to fix it, as it doesn't affect regular play...
    x.com
    Remedy shared they discovered one of the bugs in Alan Wake 2 thanks to a speedrun. The team promised not to fix it, as it doesn't affect regular playthroughs in any way. Here's what's broken: https://80.lv/articles/remedy-won-t-fix-a-bug-in-alan-wake-2-used-by-speedrunners/ #AlanWake #alanwake2 #games #videogames #gamedev
  • Nintendo Switch 2 Revealed, Full Nintendo Direct Coming In April
    www.gamespot.com
    Nintendo has officially revealed the Switch 2, its successor to the wildly popular Switch console. Like the original Switch, the Switch 2 is a hybrid home and handheld gaming console, but with beefier system specs and several other improvements when compared to its predecessor. In the reveal trailer, Nintendo confirmed that the Switch 2 will be backward compatible with most physical and digital Switch games. "Certain Nintendo Switch games may not be supported on or fully compatible with Nintendo Switch 2," the company added. More details on the Switch 2 console are set to be revealed in a Nintendo Direct scheduled for April 2, 2025. The trailer also revealed the Switch 2 controllers in more detail, confirming the new magnetic Joy-Con design. Compared to the first iteration of the Joy-Cons that slid into position via rails, the Switch 2 controllers snap into place on the console. There was also a brief look at what appears to be a new Mario Kart game, but other details on the launch date, price, and system specs are still being kept under wraps for now.
  • Have You Heard of Delta Force?
    gamerant.com
    Delta Force is a free-to-play tactical FPS recently released on Steam and mobile by developer Team Jade. If you're looking to sink your teeth into a new shooter but aren't sure if Delta Force is the one for you, we'll provide you with everything you'll need to know!
  • To Spline or not to BSpline
    gamedev.net
    Hot diggity, it's true. I was thinking, enforce "take from the larger". Nice one.
  • Switch 2 Nintendo Direct coming in April
    www.polygon.com
    Nintendo officially revealed its next-gen console, Nintendo Switch 2, on Thursday, though the company didn't say much beyond confirming the Switch 2's existence, its surprisingly straightforward name, and a 2025 release window in a short teaser video. Thankfully, Nintendo promised to share more details about Switch 2 in a dedicated Nintendo Direct presentation that's scheduled for Wednesday, April 2. Details about the Switch 2 Nintendo Direct are still forthcoming, but this post will be updated with the official start time, duration, and what to expect from the April showcase. Expect Nintendo to reveal much more about the Switch 2 software lineup during its April Nintendo Direct. The only game officially confirmed for Switch 2 thus far is a new Mario Kart, but since Nintendo is taking its next-gen console on a global hands-on tour starting April 6, it will need to reveal its game lineup by then. Hopefully, the Nintendo Direct will address one big question: what will become of Metroid Prime 4? Switch 2 is slated for release sometime in 2025, possibly as early as June, based on the timing of Nintendo's hands-on events for its successor to the Switch.
  • On-Device AI: Building Smarter, Faster, And Private Applications
    smashingmagazine.com
    It's not too far-fetched to say AI is a pretty handy tool that we all rely on for everyday tasks. It handles tasks like recognizing faces, understanding or cloning speech, analyzing large data, and creating personalized app experiences, such as music playlists based on your listening habits or workout plans matched to your progress. But here's the catch: where the AI actually lives and does its work matters a lot.
    Take self-driving cars, for example. These types of cars need AI to process data from cameras, sensors, and other inputs to make split-second decisions, such as detecting obstacles or adjusting speed for sharp turns. Now, if all that processing depends on the cloud, network latency and connection issues could lead to delayed responses or system failures. That's why the AI should operate directly within the car. This ensures the car responds instantly without needing direct access to the internet.
    This is what we call On-Device AI (ODAI). Simply put, ODAI means AI does its job right where you are (on your phone, your car, your wearable device, and so on) without a real need to connect to the cloud or the internet in some cases. More precisely, this kind of setup is categorized as Embedded AI (EMAI), where the intelligence is embedded into the device itself.
    Okay, I mentioned ODAI and then EMAI as a subset that falls under the umbrella of ODAI. However, EMAI is slightly different from other terms you might come across, such as Edge AI, Web AI, and Cloud AI. So, what's the difference? Here's a quick breakdown:
    Edge AI: Running AI models directly on devices instead of relying on remote servers or the cloud. A simple example of this is a security camera that can analyze footage right where it is. It processes everything locally, close to where the data is collected.
    Embedded AI: In this case, AI algorithms are built into the device or hardware itself, so it functions as if the device has its own mini AI brain. I mentioned self-driving cars earlier; another example is AI-powered drones, which can monitor areas or map terrains. One of the main differences between the two is that EMAI uses dedicated chips integrated with AI models and algorithms to perform intelligent tasks locally.
    Cloud AI: This is when the AI lives on and relies on the cloud or remote servers. When you use a language translation app, the app sends the text you want translated to a cloud-based server, where the AI processes it and sends the translation back. The entire operation happens in the cloud, so it requires an internet connection to work.
    Web AI: These are tools or apps that run in your browser or are part of websites or online platforms. You might see product suggestions that match your preferences based on what you've looked at or purchased before. However, these tools often rely on AI models hosted in the cloud to analyze data and generate recommendations.
    The main difference? It's about where the AI does the work: on your device, nearby, or somewhere far off in the cloud or web.
    What Makes On-Device AI Useful
    On-device AI is, first and foremost, about privacy: keeping your data secure and under your control. It processes everything directly on your device, avoiding the need to send personal data to external servers (the cloud). So, what exactly makes this technology worth using?
    Real-Time Processing
    On-device AI processes data instantly because it doesn't need to send anything to the cloud. For example, think of a smart doorbell: it recognizes a visitor's face right away and notifies you. If it had to wait for cloud servers to analyze the image, there'd be a delay, which wouldn't be practical for quick notifications.
    Enhanced Privacy and Security
    Picture this: you are opening an app using voice commands, or calling a friend and receiving a summary of the conversation afterward. Your phone processes the audio data locally, and the AI system handles everything directly on your device without the help of external servers. This way, your data stays private, secure, and under your control.
    Offline Functionality
    A big win of ODAI is that it doesn't need the internet to work, which means it can function even in areas with poor or no connectivity. Take modern GPS navigation systems in a car as an example; they give you turn-by-turn directions with no signal, making sure you still get where you need to go.
    Reduced Latency
    ODAI skips the round trip of sending data to the cloud and waiting for a response. This means that when you make a change, like adjusting a setting, the device processes the input immediately, making your experience smoother and more responsive.
    The Technical Pieces Of The On-Device AI Puzzle
    At its core, ODAI uses special hardware and efficient model designs to carry out tasks directly on devices like smartphones, smartwatches, and Internet of Things (IoT) gadgets. Thanks to advances in hardware technology, AI can now work locally, especially for tasks requiring AI-specific processing, such as the following:
    Neural Processing Units (NPUs): These chips are specifically designed for AI and optimized for neural nets, deep learning, and machine learning applications. They can handle large-scale AI training efficiently while consuming minimal power.
    Graphics Processing Units (GPUs): Known for processing multiple tasks simultaneously, GPUs excel at speeding up AI operations, particularly with massive datasets.
    Here's a look at some innovative AI chips in the industry:
    Product | Organization | Key Features
    Spiking Neural Network Chip | Indian Institute of Technology | Ultra-low power consumption
    Hierarchical Learning Processor | Ceromorphic | Alternative transistor structure
    Intelligent Processing Units (IPUs) | Graphcore | Multiple products targeting end devices and cloud
    Katana Edge AI | Synaptics | Combines vision, motion, and sound detection
    ET-SoC-1 Chip | Esperanto Technology | Built on RISC-V for AI and non-AI workloads
    NeuRRAM | CEA-Leti | Biologically inspired neuromorphic processor based on resistive RAM (RRAM)
    These chips, or AI accelerators, show different ways to make devices more efficient, use less power, and run advanced AI tasks.
    Techniques For Optimizing AI Models
    Creating AI models that fit resource-constrained devices often requires combining clever hardware utilization with techniques to make models smaller and more efficient. I'd like to cover a few choice examples of how teams are optimizing AI for increased performance using less energy.
    Meta's MobileLLM
    Meta's approach to ODAI introduced a model built specifically for smartphones. Instead of scaling traditional models, they designed MobileLLM from scratch to balance efficiency and performance. One key innovation was increasing the number of smaller layers rather than having fewer large ones. This design choice improved the model's accuracy and speed while keeping it lightweight. You can try out the model either on Hugging Face or using vLLM, a library for LLM inference and serving.
    Quantization
    This simplifies a model's internal calculations by using lower-precision numbers, such as 8-bit integers, instead of 32-bit floating-point numbers. Quantization significantly reduces memory requirements and computation costs, often with minimal impact on model accuracy (a minimal code sketch follows this section).
    Pruning
    Neural networks contain many weights (connections between neurons), but not all are crucial. Pruning identifies and removes less important weights, resulting in a smaller, faster model without significant accuracy loss.
    Matrix Decomposition
    Large matrices are a core component of AI models. Matrix decomposition splits these into smaller matrices, reducing computational complexity while approximating the original model's behavior.
    Knowledge Distillation
    This technique involves training a smaller model (the student) to mimic the outputs of a larger, pre-trained model (the teacher). The smaller model learns to replicate the teacher's behavior, achieving similar accuracy while being more efficient. For instance, DistilBERT successfully reduced BERT's size by 40% while retaining 97% of its performance.
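    To make the quantization technique above concrete, here is a minimal sketch (not from the article) of post-training quantization with TensorFlow's TFLiteConverter; the SavedModel path and output file name are placeholders.
    ```python
    import tensorflow as tf

    # Load a trained SavedModel (placeholder path) and enable default optimizations,
    # which apply post-training quantization (weights stored as 8-bit integers).
    converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_model = converter.convert()

    # The resulting flatbuffer is much smaller and ready for on-device inference.
    with open("model_quant.tflite", "wb") as f:
        f.write(tflite_model)
    ```
    With a representative dataset supplied to the converter, activations can also be quantized to 8-bit, trading a little accuracy for further savings.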
    Technologies Used For On-Device AI
    Well, all the model compression techniques and specialized chips are cool because they're what make ODAI possible. But what's even more interesting for us as developers is actually putting these tools to work. This section covers some of the key technologies and frameworks that make ODAI accessible.
    MediaPipe Solutions
    MediaPipe Solutions is a developer toolkit for adding AI-powered features to apps and devices. It offers cross-platform, customizable tools that are optimized for running AI locally, from real-time video analysis to natural language processing.
    At the heart of MediaPipe Solutions is MediaPipe Tasks, a core library that lets developers deploy ML solutions with minimal code. It's designed for platforms like Android, Python, and Web/JavaScript, so you can easily integrate AI into a wide range of applications. MediaPipe also provides various specialized tasks for different AI needs (a detection sketch follows this list):
    LLM Inference API: This API runs lightweight large language models (LLMs) entirely on-device for tasks like text generation and summarization. It supports several open models like Gemma and external options like Phi-2.
    Object Detection: The tool helps you identify and locate objects in images or videos, which is ideal for real-time applications like detecting animals, people, or objects right on the device.
    Image Segmentation: MediaPipe can also segment images, such as isolating a person from the background in a video feed, allowing it to separate objects in both single images (like photos) and continuous video streams (like live video or recorded footage).
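    As a rough illustration of using MediaPipe Tasks from Python, here is a sketch of the object detection task; the model bundle name and image path below are placeholders, not details from the article.
    ```python
    import mediapipe as mp
    from mediapipe.tasks import python as mp_tasks
    from mediapipe.tasks.python import vision

    # Configure the detector with a locally stored model bundle (placeholder file name).
    options = vision.ObjectDetectorOptions(
        base_options=mp_tasks.BaseOptions(model_asset_path="efficientdet_lite0.tflite"),
        score_threshold=0.5,
    )
    detector = vision.ObjectDetector.create_from_options(options)

    # Run detection on a single image, entirely on-device.
    image = mp.Image.create_from_file("photo.jpg")  # placeholder image
    result = detector.detect(image)
    for detection in result.detections:
        category = detection.categories[0]
        print(category.category_name, round(category.score, 2), detection.bounding_box)
    ```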
    LiteRT
    LiteRT, or Lite Runtime (previously called TensorFlow Lite), is a lightweight and high-performance runtime designed for ODAI. It supports running pre-trained models or converting TensorFlow, PyTorch, and JAX models to a LiteRT-compatible format using AI Edge tools (a short inference sketch follows below).
    Model Explorer
    Model Explorer is a visualization tool that helps you analyze machine learning models and graphs. It simplifies the process of preparing these models for on-device AI deployment, letting you understand the structure of your models and fine-tune them for better performance. You can use Model Explorer locally or in Colab for testing and experimenting.
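    For a sense of what running a converted model looks like, here is a minimal sketch using the interpreter API exposed through TensorFlow; the model file continues the earlier quantization sketch and is a placeholder.
    ```python
    import numpy as np
    import tensorflow as tf

    # Load the converted .tflite model (placeholder file) and allocate its tensors.
    interpreter = tf.lite.Interpreter(model_path="model_quant.tflite")
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Feed a dummy input with the expected shape and dtype, then run inference locally.
    dummy_input = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
    interpreter.set_tensor(input_details[0]["index"], dummy_input)
    interpreter.invoke()
    prediction = interpreter.get_tensor(output_details[0]["index"])
    print("output shape:", prediction.shape)
    ```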
    ExecuTorch
    If you're familiar with PyTorch, ExecuTorch makes it easy to deploy models to mobile, wearables, and edge devices. It's part of the PyTorch Edge ecosystem, which supports building AI experiences for edge devices like embedded systems and microcontrollers.
    Large Language Models For On-Device AI
    Gemini is a powerful AI model that doesn't just excel at processing text or images; it can also handle multiple types of data seamlessly. The best part? It's designed to work right on your devices. For on-device use, there's Gemini Nano, a lightweight version of the model. It's built to perform efficiently while keeping everything private.
    What can Gemini Nano do?
    Call Notes on Pixel devices: This feature creates private summaries and transcripts of conversations. It works entirely on-device, ensuring privacy for everyone involved.
    Pixel Recorder app: With the help of Gemini Nano and AICore, the app provides an on-device summarization feature, making it easy to extract key points from recordings.
    TalkBack: Enhances the accessibility feature on Android phones by providing clear descriptions of images, thanks to Nano's multimodal capabilities. Note: it's similar to an application we built using LLaVA in a previous article.
    Gemini Nano is far from the only language model designed specifically for ODAI. I've collected a few others that are worth mentioning:
    Model | Developer | Research Paper
    Octopus v2 | NexaAI | On-device language model for super agent
    OpenELM | Apple ML Research | A significant large language model integrated within iOS to enhance application functionalities
    Ferret-v2 | Apple | Ferret-v2 significantly improves upon its predecessor, introducing enhanced visual processing capabilities and an advanced training regimen
    MiniCPM | Tsinghua University | A GPT-4V Level Multimodal LLM on Your Phone
    Phi-3 | Microsoft | Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone
    The Trade-Offs Of Using On-Device AI
    Building AI into devices can be exciting and practical, but it's not without its challenges. While you may get a lightweight, private solution for your app, there are a few compromises along the way. Here's a look at some of them:
    Limited Resources: Phones, wearables, and similar devices don't have the same computing power as larger machines. This means AI models must fit within limited storage and memory while running efficiently. Additionally, running AI can drain the battery, so the models need to be optimized to balance power usage and performance.
    Data and Updates: AI in devices like drones, self-driving cars, and other similar devices processes data quickly, using sensors or lidar to make decisions. However, these models, or the system itself, don't usually get real-time updates or additional training unless they are connected to the cloud. Without these updates and regular model training, the system may struggle with new situations.
    Biases: Biases in training data are a common challenge in AI, and ODAI models are no exception. These biases can lead to unfair decisions or errors, like misidentifying people. For ODAI, keeping these models fair and reliable means not only addressing these biases during training but also ensuring the solutions work efficiently within the device's constraints.
    These aren't the only challenges of on-device AI. It's still a new and growing technology, and the small number of professionals in the field makes it harder to implement.
    Conclusion
    Choosing between on-device and cloud-based AI comes down to what your application needs most. Here's a quick comparison to make things clear:
    Aspect | On-Device AI | Cloud-Based AI
    Privacy | Data stays on the device, ensuring privacy. | Data is sent to the cloud, raising potential privacy concerns.
    Latency | Processes instantly with no delay. | Relies on internet speed, which can introduce delays.
    Connectivity | Works offline, making it reliable in any setting. | Requires a stable internet connection.
    Processing Power | Limited by device hardware. | Leverages the power of cloud servers for complex tasks.
    Cost | No ongoing server expenses. | Can incur continuous cloud infrastructure costs.
    For apps that need fast processing and strong privacy, ODAI is the way to go. On the other hand, cloud-based AI is better when you need more computing power and frequent updates. The choice depends on your project's needs and what matters most to you.
  • Traditional Craft Meets Modern at Four Seasons Resort Tamarindo
    design-milk.com
    Standing atop a cliff overlooking Mexico's Pacific coast, a visitor might easily miss the Four Seasons Resort Tamarindo at first glance, which is precisely the point. The resort's remarkable architecture, conceived by an alliance of Mexico's most distinguished design firms, seems to emerge from, and then dissolve back into, the landscape: a contemporary interpretation of the region's architectural heritage that speaks to both preservation and presence.
    At the heart of this 157-room resort lies a dialogue between built environment and natural terrain. The collaborative team of Victor Legorreta, Mauricio Rocha, and Mario Schjetnan studied the land's undulations with archaeological precision, positioning structures along the natural contours of cliffs that hang 300 feet above the ocean. This approach echoes the site-sensitive principles of Luis Barragán's mid-century works, yet pushes further in its ecological commitment to rewild the 3,000-acre natural reserve.
    Rather than merely importing luxury finishes, the designers engaged deeply with Mexico's rich artisanal traditions through partnerships with organizations like Taller Maya and Ensamble Artesano. The results are seen in the henequén fiber laundry hampers from Xcanchakán, Mayan cream stone bathroom accessories, and cotton hammocks handwoven by women artisans from Yaxunah. These elements not only decorate, but sustain traditional craft economies while creating authentic connections to place.
    The wellness complex features a 31,215-square-foot space where Oaxacan red clay walls and volcanic stone create a powerful material presence. The designers anchored the space with an enormous found stone, discovered during construction, that serves as both sculpture and symbol. A water channel leads from here to the Temazcal, tracing what the designers call a "journey of rebirth."
    Among the three distinct dining venues, Coyul, a collaboration between celebrated chef Elena Reygadas and designer Héctor Esrawe, articulates a new vocabulary for contemporary Mexican restaurant design. Esrawe, best known for his work behind EWE Studio and MASA Galería, approached the restaurant as a stage where Reygadas' unique culinary vision, a fusion of Mexican ingredients with French and Italian techniques, could unfold in physical space.
    Photography courtesy of Four Seasons.