• Switch 2 Nintendo Direct coming in April
    www.polygon.com
    Nintendo officially revealed its next-gen console, Nintendo Switch 2, on Thursday, though the company didn't say much beyond confirming the Switch 2's existence, its surprisingly straightforward name, and a 2025 release window in a short teaser video.

    Thankfully, Nintendo promised to share more details about Switch 2 in a dedicated Nintendo Direct presentation that's scheduled for Wednesday, April 2. Details about the Switch 2 Nintendo Direct are still forthcoming, but this post will be updated with the official start time, duration, and what to expect from the April showcase.

    Expect Nintendo to reveal much more about the Switch 2 software lineup during its April Nintendo Direct. The only game officially confirmed for Switch 2 thus far is a new Mario Kart, but since Nintendo is taking its next-gen console on a global hands-on tour starting April 6, it will need to reveal its game lineup by then. Hopefully, the Nintendo Direct will address one big question: What will become of Metroid Prime 4?

    Switch 2 is slated for release sometime in 2025, possibly as early as June, based on the timing of Nintendo's hands-on events for its successor to the Switch.
  • On-Device AI: Building Smarter, Faster, And Private Applications
    smashingmagazine.com
    It's not too far-fetched to say AI is a pretty handy tool that we all rely on for everyday tasks. It handles tasks like recognizing faces, understanding or cloning speech, analyzing large data, and creating personalized app experiences, such as music playlists based on your listening habits or workout plans matched to your progress. But here's the catch: where the AI tool actually lives and does its work matters a lot.

    Take self-driving cars, for example. These cars need AI to process data from cameras, sensors, and other inputs to make split-second decisions, such as detecting obstacles or adjusting speed for sharp turns. Now, if all that processing depends on the cloud, network latency and connection issues could lead to delayed responses or system failures. That's why the AI should operate directly within the car. This ensures the car responds instantly without needing direct access to the internet.

    This is what we call On-Device AI (ODAI). Simply put, ODAI means AI does its job right where you are (on your phone, your car, your wearable device, and so on) without a real need to connect to the cloud or internet in some cases. More precisely, this kind of setup is categorized as Embedded AI (EMAI), where the intelligence is embedded into the device itself.

    Okay, I mentioned ODAI and then EMAI as a subset that falls under the umbrella of ODAI. However, EMAI is slightly different from other terms you might come across, such as Edge AI, Web AI, and Cloud AI. So, what's the difference? Here's a quick breakdown:

    Edge AI: This refers to running AI models directly on devices instead of relying on remote servers or the cloud. A simple example of this is a security camera that can analyze footage right where it is. It processes everything locally, close to where the data is collected.

    Embedded AI: In this case, AI algorithms are built into the device or hardware itself, so it functions as if the device has its own mini AI brain.
    I mentioned self-driving cars earlier; another example is AI-powered drones, which can monitor areas or map terrains. One of the main differences between the two is that EMAI uses dedicated chips integrated with AI models and algorithms to perform intelligent tasks locally.

    Cloud AI: This is when the AI lives on and relies on the cloud or remote servers. When you use a language translation app, the app sends the text you want translated to a cloud-based server, where the AI processes it and sends the translation back. The entire operation happens in the cloud, so it requires an internet connection to work.

    Web AI: These are tools or apps that run in your browser or are part of websites or online platforms. You might see product suggestions that match your preferences based on what you've looked at or purchased before. However, these tools often rely on AI models hosted in the cloud to analyze data and generate recommendations.

    The main difference? It's about where the AI does the work: on your device, nearby, or somewhere far off in the cloud or web.

    What Makes On-Device AI Useful

    On-device AI is, first and foremost, about privacy: keeping your data secure and under your control. It processes everything directly on your device, avoiding the need to send personal data to external (cloud) servers. So, what exactly makes this technology worth using?

    Real-Time Processing: On-device AI processes data instantly because it doesn't need to send anything to the cloud. For example, think of a smart doorbell: it recognizes a visitor's face right away and notifies you. If it had to wait for cloud servers to analyze the image, there'd be a delay, which wouldn't be practical for quick notifications.

    Enhanced Privacy and Security: Picture this: you are opening an app using voice commands, or calling a friend and receiving a summary of the conversation afterward. Your phone processes the audio data locally, and the AI system handles everything directly on your device without the help of external servers.
    This way, your data stays private, secure, and under your control.

    Offline Functionality: A big win of ODAI is that it doesn't need the internet to work, which means it can function even in areas with poor or no connectivity. Take modern GPS navigation systems in a car as an example; they give you turn-by-turn directions with no signal, making sure you still get where you need to go.

    Reduced Latency: ODAI skips the round trip of sending data to the cloud and waiting for a response. This means that when you make a change, like adjusting a setting, the device processes the input immediately, making your experience smoother and more responsive.

    The Technical Pieces Of The On-Device AI Puzzle

    At its core, ODAI uses special hardware and efficient model designs to carry out tasks directly on devices like smartphones, smartwatches, and Internet of Things (IoT) gadgets. Thanks to advances in hardware technology, AI can now work locally, especially for tasks requiring AI-specific processing, such as the following:

    Neural Processing Units (NPUs): These chips are specifically designed for AI and optimized for neural nets, deep learning, and machine learning applications.
    They can handle large-scale AI training efficiently while consuming minimal power.

    Graphics Processing Units (GPUs): Known for processing multiple tasks simultaneously, GPUs excel at speeding up AI operations, particularly with massive datasets.

    Here's a look at some innovative AI chips in the industry:

    Product | Organization | Key Features
    Spiking Neural Network Chip | Indian Institute of Technology | Ultra-low power consumption
    Hierarchical Learning Processor | Ceromorphic | Alternative transistor structure
    Intelligent Processing Units (IPUs) | Graphcore | Multiple products targeting end devices and cloud
    Katana Edge AI | Synaptics | Combines vision, motion, and sound detection
    ET-SoC-1 Chip | Esperanto Technology | Built on RISC-V for AI and non-AI workloads
    NeuRRAM | CEA-Leti | Biologically inspired neuromorphic processor based on resistive RAM (RRAM)

    These chips, or AI accelerators, show different ways to make devices more efficient, use less power, and run advanced AI tasks.

    Techniques For Optimizing AI Models

    Creating AI models that fit resource-constrained devices often requires combining clever hardware utilization with techniques to make models smaller and more efficient. I'd like to cover a few choice examples of how teams are optimizing AI for increased performance using less energy.

    Meta's MobileLLM: Meta's approach to ODAI introduced a model built specifically for smartphones. Instead of scaling traditional models, they designed MobileLLM from scratch to balance efficiency and performance. One key innovation was increasing the number of smaller layers rather than having fewer large ones. This design choice improved the model's accuracy and speed while keeping it lightweight. You can try out the model either on Hugging Face or using vLLM, a library for LLM inference and serving.

    Quantization: This simplifies a model's internal calculations by using lower-precision numbers, such as 8-bit integers, instead of 32-bit floating-point numbers.
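To make the idea concrete, here is a minimal, illustrative sketch of symmetric 8-bit quantization in plain Python. It is not any particular framework's implementation; the function names and the single-scale scheme are simplifying assumptions for demonstration only.

```python
# Illustrative post-training quantization: map float weights to int8
# values in [-127, 127] using one shared scale factor.

def quantize_int8(weights):
    """Quantize a list of float weights to int8 with a single scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid zero scale
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.82, -1.27, 0.003, 0.5]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Storage drops 4x (8 bits vs. 32 bits per weight), at the cost of a
# small rounding error — e.g. 0.003 rounds away to 0 at this scale.
```

Real toolchains (such as LiteRT's converters, discussed later in the article) use per-channel scales and calibration data, but the trade-off is the same: less memory and cheaper arithmetic for a small loss of precision.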
    Quantization significantly reduces memory requirements and computation costs, often with minimal impact on model accuracy.

    Pruning: Neural networks contain many weights (connections between neurons), but not all of them are crucial. Pruning identifies and removes the less important weights, resulting in a smaller, faster model without significant accuracy loss.

    Matrix Decomposition: Large matrices are a core component of AI models. Matrix decomposition splits these into smaller matrices, reducing computational complexity while approximating the original model's behavior.

    Knowledge Distillation: This technique involves training a smaller model (the student) to mimic the outputs of a larger, pre-trained model (the teacher). The smaller model learns to replicate the teacher's behavior, achieving similar accuracy while being more efficient. For instance, DistilBERT successfully reduced BERT's size by 40% while retaining 97% of its performance.

    Technologies Used For On-Device AI

    Well, all the model compression techniques and specialized chips are cool because they're what make ODAI possible. But what's even more interesting for us as developers is actually putting these tools to work. This section covers some of the key technologies and frameworks that make ODAI accessible.

    MediaPipe Solutions: MediaPipe Solutions is a developer toolkit for adding AI-powered features to apps and devices. It offers cross-platform, customizable tools that are optimized for running AI locally, from real-time video analysis to natural language processing. At the heart of MediaPipe Solutions is MediaPipe Tasks, a core library that lets developers deploy ML solutions with minimal code.
    It's designed for platforms like Android, Python, and Web/JavaScript, so you can easily integrate AI into a wide range of applications. MediaPipe also provides various specialized tasks for different AI needs:

    LLM Inference API: This API runs lightweight large language models (LLMs) entirely on-device for tasks like text generation and summarization. It supports several open models like Gemma and external options like Phi-2.

    Object Detection: This tool helps you identify and locate objects in images or videos, which is ideal for real-time applications like detecting animals, people, or objects right on the device.

    Image Segmentation: MediaPipe can also segment images, such as isolating a person from the background in a video feed, allowing it to separate objects in both single images (like photos) and continuous video streams (like live or recorded footage).

    LiteRT: LiteRT, or Lite Runtime (previously called TensorFlow Lite), is a lightweight, high-performance runtime designed for ODAI. It supports running pre-trained models or converting TensorFlow, PyTorch, and JAX models to a LiteRT-compatible format using AI Edge tools.

    Model Explorer: Model Explorer is a visualization tool that helps you analyze machine learning models and graphs. It simplifies the process of preparing these models for on-device AI deployment, letting you understand the structure of your models and fine-tune them for better performance. You can use Model Explorer locally or in Colab for testing and experimenting.

    ExecuTorch: If you're familiar with PyTorch, ExecuTorch makes it easy to deploy models to mobile, wearables, and edge devices. It's part of the PyTorch Edge ecosystem, which supports building AI experiences for edge devices like embedded systems and microcontrollers.

    Large Language Models For On-Device AI

    Gemini is a powerful AI model that doesn't just excel in processing text or images; it can also handle multiple types of data seamlessly. The best part?
    It's designed to work right on your devices. For on-device use, there's Gemini Nano, a lightweight version of the model. It's built to perform efficiently while keeping everything private.

    What can Gemini Nano do?

    Call Notes on Pixel devices: This feature creates private summaries and transcripts of conversations. It works entirely on-device, ensuring privacy for everyone involved.

    Pixel Recorder app: With the help of Gemini Nano and AICore, the app provides an on-device summarization feature, making it easy to extract key points from recordings.

    TalkBack: This enhances the accessibility feature on Android phones by providing clear descriptions of images, thanks to Nano's multimodal capabilities. Note: It's similar to an application we built using LLaVA in a previous article.

    Gemini Nano is far from the only language model designed specifically for ODAI. I've collected a few others that are worth mentioning:

    Model | Developer | Research Paper
    Octopus v2 | NexaAI | On-device language model for super agent
    OpenELM | Apple ML Research | A significant large language model integrated within iOS to enhance application functionalities
    Ferret-v2 | Apple | Ferret-v2 significantly improves upon its predecessor, introducing enhanced visual processing capabilities and an advanced training regimen
    MiniCPM | Tsinghua University | A GPT-4V Level Multimodal LLM on Your Phone
    Phi-3 | Microsoft | Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone

    The Trade-Offs Of Using On-Device AI

    Building AI into devices can be exciting and practical, but it's not without its challenges. While you may get a lightweight, private solution for your app, there are a few compromises along the way. Here's a look at some of them:

    Limited Resources: Phones, wearables, and similar devices don't have the same computing power as larger machines. This means AI models must fit within limited storage and memory while running efficiently.
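The storage side of this constraint is simple arithmetic: a model's footprint is roughly its parameter count times the bytes per parameter. A quick sketch (the 125M parameter count is an illustrative assumption, not any specific model's true size):

```python
# Back-of-the-envelope model footprint: parameters x bytes per parameter.

def model_size_mb(num_params, bytes_per_param):
    """Approximate on-disk/in-memory size of the weights, in MB."""
    return num_params * bytes_per_param / (1024 ** 2)

params = 125_000_000                 # e.g. a small ~125M-parameter LLM
fp32 = model_size_mb(params, 4)      # 32-bit floats
int8 = model_size_mb(params, 1)      # 8-bit after quantization

# fp32 is ~476.8 MB vs. int8 at ~119.2 MB: the same network at a
# quarter of the storage, which is why quantization matters on phones.
```

Numbers like these make it clear why the compression techniques covered earlier are a precondition, not an optimization, for phone-class hardware.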
    Additionally, running AI can drain the battery, so the models need to be optimized to balance power usage and performance.

    Data and Updates: AI in devices like drones, self-driving cars, and similar systems processes data quickly, using sensors or lidar to make decisions. However, these models, or the system itself, don't usually get real-time updates or additional training unless they are connected to the cloud. Without these updates and regular retraining, the system may struggle with new situations.

    Biases: Biases in training data are a common challenge in AI, and ODAI models are no exception. These biases can lead to unfair decisions or errors, like misidentifying people. For ODAI, keeping these models fair and reliable means not only addressing these biases during training but also ensuring the solutions work efficiently within the device's constraints.

    These aren't the only challenges of on-device AI. It's still a new and growing technology, and the small number of professionals in the field makes it harder to implement.

    Conclusion

    Choosing between on-device and cloud-based AI comes down to what your application needs most. Here's a quick comparison to make things clear:

    Aspect | On-Device AI | Cloud-Based AI
    Privacy | Data stays on the device, ensuring privacy. | Data is sent to the cloud, raising potential privacy concerns.
    Latency | Processes instantly with no delay. | Relies on internet speed, which can introduce delays.
    Connectivity | Works offline, making it reliable in any setting. | Requires a stable internet connection.
    Processing Power | Limited by device hardware. | Leverages the power of cloud servers for complex tasks.
    Cost | No ongoing server expenses. | Can incur continuous cloud infrastructure costs.

    For apps that need fast processing and strong privacy, ODAI is the way to go. On the other hand, cloud-based AI is better when you need more computing power and frequent updates. The choice depends on your project's needs and what matters most to you.
  • Traditional Craft Meets Modern at Four Seasons Resort Tamarindo
    design-milk.com
    Standing atop a cliff overlooking Mexico's Pacific coast, a visitor might easily miss the Four Seasons Resort Tamarindo at first glance, which is precisely the point. The resort's remarkable architecture, conceived by an alliance of Mexico's most distinguished design firms, seems to emerge from, and then dissolve back into, the landscape: a contemporary interpretation of the region's architectural heritage that speaks to both preservation and presence.

    At the heart of this 157-room resort lies a dialogue between built environment and natural terrain. The collaborative team of Victor Legorreta, Mauricio Rocha, and Mario Schjetnan studied the land's undulations with archaeological precision, positioning structures along the natural contours of cliffs that hang 300 feet above the ocean. This approach echoes the site-sensitive principles of Luis Barragán's mid-century works, yet pushes further in its ecological commitment to rewild the 3,000-acre natural reserve.

    Rather than merely importing luxury finishes, the designers engaged deeply with Mexico's rich artisanal traditions through partnerships with organizations like Taller Maya and Ensamble Artesano. The results are seen in the henequén fiber laundry hampers from Xcanchakán, Mayan cream stone bathroom accessories, and cotton hammocks handwoven by women artisans from Yaxunah. These elements not only decorate, but sustain traditional craft economies while creating authentic connections to place.

    The wellness complex features a 31,215-square-foot space where Oaxacan red clay walls and volcanic stone create a powerful material presence. The designers anchored the space with an enormous found stone, discovered during construction, that serves as both sculpture and symbol.
    A water channel leads from here to the Temazcal, tracing what the designers call a journey of rebirth.

    Among the three distinct dining venues, Coyul, a collaboration between celebrated chef Elena Reygadas and designer Héctor Esrawe, articulates a new vocabulary for contemporary Mexican restaurant design. Esrawe, best known for his work behind EWE Studio and MASA Galería, approached the restaurant as a stage where Reygadas' unique culinary vision (a fusion of Mexican ingredients with French and Italian techniques) could unfold in physical space.

    Photography courtesy of Four Seasons.
  • These Are the Best Cardio Workouts
    lifehacker.com
    We may earn a commission from links on this page.

    Cardio is incredibly important for all of us. It's the bedrock of the physical activity guidelines for health, and if you already strength train, adding in cardio will make you healthier in general and better at the stuff in the gym that you care about. (And no, it won't kill your gains.)

    So where should you begin if you're starting (or re-starting) a cardio habit? The simple answer is that you can do anything that you enjoy, so if your favorite exercise isn't on my list below but it meets the definition of cardio, you don't need my approval; just go do it. But if you want some more information about your best options, read on.

    What counts as cardio?

    I have another article addressing this question in more detail, but here's the short answer. Cardio exercise is generally understood to be exercise that:

    - Uses most of your body, or at least several large muscle groups (cycling only uses your legs, but it absolutely counts).
    - Is rhythmic and repetitive: think of the footsteps in jogging, or the arm strokes in swimming.
    - Can last for 10 minutes or more. It's fine to do cardio in shorter bursts, but we want to draw a distinction between things like jogging (which people often do for 30 minutes or more) and strength exercises like squats (which might be done for a set of 8 or 12 reps, and then you need to rest before you do more).
    - Is intense enough that you feel like you're working. A leisurely stroll isn't cardio, but a brisk walk could be.

    Cardio machines you might see in a typical gym include the treadmill, elliptical, exercise bike (all kinds), rower, and stair climber. Those all count as cardio. Strength training doesn't count; it's still good for you, but it's a separate thing.

    How much cardio should I do?

    The American Heart Association, the World Health Organization, the CDC, and many other organizations have settled on a guideline that says your baseline should be 150 minutes or more of moderate cardio per week.
    (They often say exercise, but if you read the fine print, they are referring specifically to cardio. Strength training is separate.) Specifically, they say you should do:

    - 150 minutes of moderate cardio per week, or
    - 75 minutes of vigorous cardio per week, or
    - Any combination of the above (adding up to 150, with each minute of vigorous cardio counting double), or
    - If you're already meeting that baseline easily, aim for 300 minutes of moderate/150 minutes of vigorous.

    What does 150 minutes per week look like? Here are some examples:

    - A 30-minute walk every weekday at lunch, or
    - A 50-minute session on a spin bike three times a week, or
    - 22 minutes of brisk walking every morning (even weekends).

    How hard should a cardio workout feel?

    If you're out of breath, feel like you're dying, and can't wait until it's time to stop, you're going harder than you need to. Moderate cardio is roughly the same effort level as zone 2 cardio. It should feel like work, but not torture. You'll be breathing a little heavier than at rest, but you could still easily speak in full sentences. These workouts are easy to recover from (you don't need a rest day afterward), and you'll generally feel better at the end than you did at the beginning.

    Vigorous cardio includes everything harder than that, covering the spectrum from a lively jog to really intense intervals. You may feel exhausted at the end of the workout, and you may not be able to do this kind of workout every day. Vigorous cardio is good for you, but it's often best in small doses. Endurance athletes (like runners) often aim to keep the harder stuff to 20% or less of their weekly workout time.

    While heart rate tracking is popular, I don't recommend using heart rate to tell the difference between your moderate and vigorous workouts, at least if you're a beginner. The heart rate zones built into your watch are inconsistent from device to device, and they use a formula that is often wrong. Judge the difference from your breathing and your perceived effort.
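The guideline arithmetic from earlier in this section (each vigorous minute counting double toward the 150-minute baseline) is simple enough to sketch; the function names here are just for illustration:

```python
# The weekly cardio guideline: vigorous minutes count double toward
# the 150-minute moderate-cardio baseline.

def weekly_cardio_credit(moderate_min, vigorous_min):
    """Total 'moderate-equivalent' minutes for the week."""
    return moderate_min + 2 * vigorous_min

def meets_baseline(moderate_min, vigorous_min, target=150):
    """True if the week's mix adds up to the baseline."""
    return weekly_cardio_credit(moderate_min, vigorous_min) >= target

# Five 30-minute lunchtime walks: 150 credit, baseline met.
# 60 moderate + 45 vigorous minutes: 60 + 90 = 150, also met.
```

Any mix works, which is why the examples above (daily walks, three spin sessions, short brisk walks every morning) all land on the same target.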
    Moderate cardio is about a 3, maybe a 4, on a scale of 1 to 10.

    Does it matter what kind of cardio I do?

    Honestly: not that much. Sometimes people seek out cardio that uses their full body, or that targets specific body parts, but that's not actually very important when it comes to the health and fitness benefits. A rowing machine uses your arms more than a spin bike, but both can provide a great cardio workout. If you want to build muscle in your arms, you're better off doing some strength exercises for your arms rather than worrying about whether your cardio workouts include them.

    The best cardio workout is whatever you'll do, so the most important factors are how available the workout is to you (is there a rower at your gym?) and preference (do you like rowing?). With that huge caveat out of the way, I'll give you my favorite cardio workouts, and some tips for working each into your routine.

    The cheapest cardio workout: running (or run/walk)

    Let's start with what is, for many, the most accessible cardio workout of them all: stepping out of your front door and putting one foot in front of the other. (Nothing is perfect for everybody, of course, so if outdoor workouts don't fit your life, skip to the next section.)

    You'll need a pair of shoes that feel reasonably comfortable when you run (they do not have to be expensive running shoes), and many of us will need a sports bra. Then, just add some athletic clothes, and you have the essentials. You'll need the same basic gear for most other exercise, anyway.

    You do not need a running watch or a heart rate monitor. You don't need to track your mileage or pace at all, although you may find it useful to be vaguely aware of how long your workouts are taking and to track how often you do them.
    That can be a note in your phone ("30 minutes jog Monday") rather than buying into an app or device ecosystem.

    Here's a sample workout, if you don't know where to get started:

    - Walk for the first 5 minutes as a warmup. Start slow, and by the end, try to be at a brisk pace.
    - Speed up a bit; try a jog or a fast walk.
    - If you start to feel tired, slow down just a little bit. Don't return to a slow walk unless you truly have to.
    - Speed up again when you feel ready, and repeat.
    - Walk for the last 5 minutes as a cooldown.

    Over time, work toward keeping up a steady pace. A slow, steady jog is better (for most of your training) than sprint-and-walk intervals. That said, interval training is a fun thing to sprinkle in. If you're worried that running is boring, try these tips to keep it fun.

    Easiest on your body: indoor cycling

    If I had to crown a best all-around cardio workout, it would probably be spinning. There's a smoother transition between speeds, rather than the distinct categories of walking and running, so it's easier to find the right intensity for a given workout. There's not much bouncing or impact, so you may not need a sports bra, and you may find it easier on your knees and shins at the start. And you can do it with a water bottle and a fan within reach, which makes logistics a bit easier; no need to carry everything with you.

    (Outdoor cycling is great, by the way. But that requires a helmet, a bit of mechanical know-how, and street smarts to safely mesh with, or avoid, traffic. I'm sticking with indoor cycling for my recommendation here, but if you love taking your bike to the streets, by all means enjoy!)

    There are also tons of options for indoor cycling workouts. You can aim for a straight steady-state workout, perhaps watching a favorite show while you do it on the gym's TV or even your phone.
    Or you can follow along with a video or audio workout that guides you through intervals while distracting you with music and chatter. Use an app like Peloton or Aaptiv, or find videos on YouTube.

    Best for no equipment at home: put on some music and dance

    I really debated this one. There's a lot to be said for jumping rope (even though technically that is equipment), but the pros and cons are similar to jogging. There's a lot of bouncing and impact, and it can be pretty exhausting at first, until you learn how to pace yourself.

    Then we have the staples of bodyweight HIIT videos, like air squats and jumping jacks. These are fine! But they lend themselves better to intervals, and when we're doing cardio, it's good to have options that let us move continuously. That said, I'm going to put in a quick plug for the most underrated no-equipment cardio move out there: the old-school four-count burpee. (I describe it in more detail here.) No jump and no pushup. You're welcome.

    But ultimately, if you want to get a good cardio workout in your home without having to buy equipment or clear a big space, just put on some music and dance. And don't tell me you can't dance, because you don't need to impress an audience here. Put on something that makes you happy, and shift your weight from one foot to the other. Swing your arms a little. Look! You're dancing! It may not look stylish, but you're getting a workout, and you're probably enjoying it a lot more than burpees or squat jumps.

    Obviously, there are so many directions you can go from here. You can simply bop along to whatever is on the radio or shuffle your Spotify. You can work on building your skill as a dancer, learning new moves and stringing them together (don't these goofballs look like they're having fun dancing the Charleston?). You can look up dance cardio videos where an instructor leads you through a workout. Or you can just pick any style you like and have fun with it.
  • Nintendo Switch 2 is official, with more details coming on April 2, 2025
    www.engadget.com
    The long wait is finally over. In a YouTube video with little fanfare, Nintendo officially introduced the long-awaited Switch 2. The first true next-gen follow-up to the original Switch includes backwards compatibility for owners of existing Switch hardware, and we'll learn more about the console in a Nintendo Direct presentation on April 2, 2025. There's still no firm release date, though.

    Nintendo is also planning to host first-look experience events in cities around the globe starting in April, the first of which take place in New York City and Paris from April 4 to April 6. More cities around North America, Europe, Oceania, and Asia will follow.

    This trailer and accompanying press release are truly light on details. We see how the Switch 2 evolves from the original, with a larger screen and accompanying Joy-Con controllers that do appear to be attachable via magnets and a tiny port on the side of the controller. That's it, though: no price, specs, or any details on what games are coming to the Switch 2.

    That said, we did see a few shots of a Mario Kart game running on the Switch 2, so all the rumors surrounding a Mario Kart 9 launching alongside the Switch 2 got another shot in the arm today.

    The announcement of the Switch 2 has been a long time coming. Today's news caps off months of speculation about when the company would unveil new hardware. The community interest in a Switch 2 was vocal enough that president Shuntaro Furukawa posted on X ahead of the June 2024 Nintendo Direct not to expect any new console news, although he did confirm that the Switch's successor would be introduced by March 2025.

    Nintendo has given players some minor upgrades over the years since the Switch first arrived on the scene in 2017. The Switch Lite offered a more compact handheld ideal for gaming on the go, and the Switch OLED delivered a premium screen.
But even within the limitations of a portable gaming device, the Switch has lagged far behind other consoles when it comes to power and performance. Of course, trying to compete with Sony and Microsoft's consoles on pure power hasn't been Nintendo's concern for decades at this point. Once we get our hands on the Switch 2, we'll know whether it delivers enough oomph to feel worth the wait.This article originally appeared on Engadget at https://www.engadget.com/gaming/nintendo/nintendo-switch-2-arrives-on-april-2-2025-131325195.html?src=rss
  • ExpressVPN upgrades to post-quantum encryption NIST standards
    www.techradar.com
    Yet again, the VPN provider reaffirms its commitment to future-proofing user data against the threats posed by quantum computing. Here's all you need to know.
  • UK Robinhood rival Freetrade snapped up by trading firm at 29% valuation discount
    www.cnbc.com
    U.K. Robinhood rival Freetrade has been acquired by IG Group for £160 million, a 29% discount to its last valuation.
  • A mix of old and new
    beforesandafters.com
    The stop motion animation and VFX tech used on Wallace & Gromit: Vengeance Most Fowl. An excerpt from issue #25 of befores & afters print magazine.

    In A Grand Day Out, released in 1989, director Nick Park introduced us to Wallace and Gromit. The beloved human and dog characters were animated in clay. Decades later, on the Aardman film Wallace & Gromit: Vengeance Most Fowl (which Park directed with Merlin Crossingham), the characters are still brought to life in clay.

    However, several other aspects of the film now relied on some of the latest 3D printing, camera and lighting, and visual effects technologies. Here, Aardman supervising animator and stop motion lead Will Becher shares with befores & afters the range of old and new tech used to make the film.

    b&a: The way you made this film seems to incorporate so many old-school pieces of animation technology, and the latest tech, too.

    Will Becher: Yeah, I mean, it is a theme in the film as well, the old-school and new-school tech. What we've found on every project is there's always another version. So we're always getting the newest cameras, and we're always updating the software that we use. But in terms of model making and the art department, 3D printing's become a really big part of it because it's fantastic for sort of micro-engineering and testing things.

    In the film, we have the Norbot gnome character. We still start with a clay sculpt, and then we can scan that sculpt in and build the internal mechanism design to the millimeter on computer using a 3D model. Engineering elements that sit together in a very small space takes a long time and lots of filing and fiddling. With Norbot, we had a 3D printed head, and we 3D printed the mechanics inside the head. So the mouth and the way it moves, it's all 3D printed; it slots together.

    In terms of animation, the process is very similar to how it was when Nick Park started. We're still using the process of physically animating and moving characters frame by frame.
The advancements really come with the world around them. So, making the film feel bigger using set extensions or digital matte paintings for the skies.

b&a: Norbot was 3D printed, in part, but did you consider animating him with replacement animation?

Will Becher: No, the reason we used 3D printing was to make sure we could make him as something solid. We could have used it for replacement animation. In fact, we thought about it: when he walks, when he marches, would it be better to print? But actually, funnily enough, because everything is so organic in the world, the floor of the sets, it's not perfectly flat. So as soon as you have anything like that, you actually need articulation. So Norbot is printed in his head, and although some of his internal mechanisms are printed, the rest of him has a skeleton inside with silicone dressing on top. And all the animators then manipulate him by hand; they move him around.

He's just a good example of a very small version of a very well articulated puppet. So he could do a lot more than you see in the film. He's got the most complicated sort of skeleton, really, because he has to be quite versatile. And the one puppet we make has to work for every shot in the film.

b&a: There's an army of Norbots that appear. How did you approach doing so many?

Will Becher: We have PPMs for every sequence, and we spent a bit of time talking about, "Okay, how are we going to do it? We're going to shoot separate plates for each one, but we've got to get them to look the same." And as soon as we said we want them to be exactly the same, the way they move, because they're an army… That's the other thing: stop motion is organic.
You can't repeat it, because you are physically moving things in space, and the lens and the lighting, everything is organic.

So it was our visual effects supervisor, Howard Jones, who said, "Okay, if you wanted to repeat, then maybe what we could do is actually shoot the Norbots, just one row, and then we can move the camera back into different positions to effectively give us the perspective, so that we could then paste that behind."

So we tried this out. It was like, "Can we do that? Can we do an individual frame and then shoot several plates with the camera in different places?" We couldn't, because the characters just looked wrong, since the lighting doesn't change. So then we had to design this rig that would move them, slide them back, take a frame, slide them back (really complex), and then stitch them together in post.

But what I love is that we could have tried to build CG models, but actually within our scope, within the budget, we didn't have any CG characters. We couldn't, and it would've been very expensive to make a CG Norbot that would hold up on screen that close. So everything we shot with the Norbots, we shot for real with the actual puppets.

b&a: Feathers McGraw, the penguin, returns in this film. He seems like a very simple puppet build, but is that the case?

Will Becher: Well, the actual shape of the face looks very simple. Feathers is literally like a bowling pin on legs. That's how we described him in the early days. And the original puppet, actually, it was the same size, same height; he looked the same. He probably just didn't look quite as advanced inside. That's the bit now we would 3D print. For the surface and the wings, it's all just clay. And we still use wire, because it's really hard to get miniature articulated joints inside.

What's also new is the use of silicone. We used to use a lot more foam latex, but foam over time just dries out and cracks.
For Wallace and for Gromit, their bodies are actually silicone. They're full of fingerprints, but it's a very flexible type of silicone, and it saves us time, so we can focus on things like the performance rather than on cleaning up a joint. And that's the benefit of the newer technology.

b&a: When you're building sets, what kinds of decisions do you make about how much can be built and what can be DMP? There's a canal sequence in the film, for example, which seems like a massive build.

Will Becher: That's a really key example of the fusion of tech, because the art director, Matt Perry, he's excellent. He's really resourceful at building stuff on set, in person, for real. But he also really wants it to feel big and advanced. So, what we'd do is figure out how to build it in sections. He'll build a section and say, "Okay, Nick and Merlin, I think we need to build this much of it, and the rest of it we'll scan and we'll create as DMP."

That means there's a section of the canal for real, plus the actual boats, which are also real physical things they can fit in. And then it's extended out. To do the whole thing in-camera, we would have needed a massive space and a huge amount of time as well to paint all those bricks. I think we ended up with two sections, two actual archways, and from that, we can shoot loads of plates and they can scan it.

b&a: Have other technologies for shooting changed or come into play much in recent years, say with motion control or camera rigs?

Will Becher: Well, there's a shot in the film where the camera goes up the staircase. It's funny, the things you don't necessarily anticipate are going to be a pain. None of our cameras, our digi stills cameras, could possibly get close enough, because they're too big. They're all high-end digital stills cameras.
And so we had to test and mount lots of different smaller digital cameras on the end of a crane and try to get as close to the set as possible, because we really wanted to create that camera move for real, traveling up the staircase.

So I'd definitely say the cameras have changed, but the lights have also become smaller and smaller. With the lighting, we're quite often putting tiny practical lights in there for a candle flame or something. They're really advanced, so we can program them to flicker, and they're so small we can hide them behind props. They're lower temperature as well. So for the animators, it used to be quite hot work. If you were in a unit with a couple of massive 5K or 10K lights, compared to today with LEDs, well, it's a huge change.

b&a: I guess some of the other new technology involves using visual effects and CG for things like water. But still, I imagine, to keep that Aardman and animated look, how do you ensure in your role that that's maintained?

Will Becher: There's a scene at the beginning where Gromit gets milk poured all over him. We tried all sorts of things, but when you get into particle effects, it gets really difficult. And so with things like mist, fog, smoke, fire, milk, it turns out we can never stop it looking the scale it is. So we tried milk and it just looks too thick. We tried lots of different materials. So in the end, the milk is a bit of a hybrid. We have it pouring out of the jug; we might use actual modeling clay. But then as soon as it hits Gromit, in this case, it turns into a CG effect.

For water, we did a couple of things. Firstly, we found this amazing stuff, this clear resin that you can sculpt. You sculpt this resin and you basically cure it with UV light, and it goes hard. So it's totally see-through. This is a new thing; we've only been using it for a few years. It's fantastic. But you can't ever create a lake or an ocean that interacts with the characters.
So there's a whole scene in this film where Wallace is in the water. And for that, we actually applied what looked like water to the puppet, so that he was wet above water, but all the actual surface of the water is CG.

Our directors, Nick and Merlin, neither of them are scared of using CG. They use it all the time, but they'll use it where it makes the film better, not for the sake of it. Also, we won't do stuff in camera for the sake of it. If it doesn't look good, we'll go to the best tools for the job.

Read the full issue of the magazine.

The post A mix of old and new appeared first on befores & afters.