  • GAMERANT.COM
    The Best Weapons in Lost Castle 2
    Despite being a charming 2D indie game, Lost Castle 2 can be a brutal roguelike. To survive its harsh environments, you need to equip the best weapons. The problem is that the game offers over 200 weapon options, and you can't choose which specific weapon you'll receive during each adventure; drops are RNG-based. This leads many players to wonder which weapons in Lost Castle 2 are the best, and which weapon categories are worth the time and effort.
  • GAMERANT.COM
    Forgettable Villains In Naruto
    As one of the best anime of the 2000s, Naruto continues to be praised by anime and pop culture fans around the world, having spawned a plethora of movies and series. This is mostly due to the franchise's ability to create incredible characters that audiences not only relate to but also deeply care about. In particular, the Naruto anime has featured some incredible villains who not only aid in character development but also undergo their own evolution.
  • WWW.DEZEEN.COM
    Isern Serra completes "deeply human" interior for his own Barcelona studio
    Spanish designer Isern Serra has refurbished his Barcelona studio, adding curved walls, arched openings and shelves filled with inspirational objects that give the space a homely feel. The design for the studio aims to make employees feel "truly at ease, comfortable and free to create" by generating an atmosphere that promotes inspiration and reflection, Serra explained. "The concept of our Estudio-Casa reflects our vision for a work environment that is deeply human, warm and inviting," Serra said. "Our intention was to create a place that not only fulfils the functional needs of a studio but also speaks to who we are and what makes us feel at home." The studio, which is situated on the fourth floor of a 1960s industrial building in the Poblenou district, eschews the traditional rigid office layout in favour of spaces informed by domestic living areas. The interior is separated into two main zones by a change in floor level. Private areas and amenities are situated on a raised platform close to the entrance, with the workspace positioned close to full-height windows at the far end of the space. Arched openings on either side of the entrance provide access to a meeting room on one side and a tunnel-like corridor opposite that contains the bathroom and storage area. The meeting room features a cream-coloured lacquered table with a single cylindrical support made by carpenter Fusteria Vidal. A circular opening that looks out onto the workspace creates a feature above the adjacent sofa area. The corridor across from the meeting room features an arched opening cut into the tunnel-like wall that provides access to the bathroom. An alabaster lamp by design studio Siete Formas illuminates the space, which culminates in a storage room hidden behind a curtain. Beyond the meeting room, a curved wall merges with a platform that forms the base for an L-shaped sofa. The raised concrete surface also accommodates a Kentia palm that adds greenery to the lounge area. "The living space has a very calm and monochromatic base, making it a comfortable place and refuge in a workplace, where the objects that adorn it give it an artistic and contemporary twist," said Serra. The platform also accommodates the kitchen and dining area, organised around a rounded table with cylindrical legs made by Fusteria Vidal. The kitchen units are fabricated from dark walnut wood and topped with a Caliza stone worktop with a roughly hewn edge. A Moon lamp from Italian brand Davide Groppi provides a focal point above the dining table. A corner of the wall at the far end of the kitchen incorporates a small curved opening that glows when the light is turned on in the material library behind this wall. A single step down separates the living area from the main workspace, which houses a pair of sculptural concrete tables that are each five metres long.
Their tops are cantilevered from sturdy cylindrical legs, giving the impression of floating surfaces. "The design of the table was very important to the project as a whole, as they are the main pieces that hold the space together," Serra added. "We wanted to give the sense of lightness and oneiric design through a material that is usually very heavy. As they are the same microcement as the rest of the space, they perfectly float at the end of the space." Floating shelves positioned on the walls behind each of the tables are used to display books, materials, lamps and other objects. In addition to functioning as a workplace, Estudio-Casa will host a range of public events including talks and exhibitions. The interior therefore features a revolving array of artworks and design objects that add personality to the spaces. Isern Serra founded his eponymous studio in 2008. His approach combines pared-back contemporary forms with a playful Mediterranean attitude, resulting in spaces that often display a dreamlike quality. The studio's previous work includes a concept store with a monochromatic pink interior based on a computer rendering, and a minimalist office for a digital artist that won small workplace interior of the year at Dezeen Awards 2023. The photography is by Salva López.
  • WWW.MARKTECHPOST.COM
    HAC++: Revolutionizing 3D Gaussian Splatting Through Advanced Compression Techniques
    Novel view synthesis has witnessed significant advancements recently, with Neural Radiance Fields (NeRF) pioneering 3D representation techniques through neural rendering. While NeRF introduced innovative methods for reconstructing scenes by accumulating RGB values along sampling rays using multilayer perceptrons (MLPs), it encountered substantial computational challenges. The extensive ray-point sampling and large neural network volumes created critical bottlenecks that impacted training and rendering performance. Moreover, the computational complexity of generating photorealistic views from limited input images continued to pose significant technical obstacles, demanding more efficient and computationally lightweight approaches to 3D scene reconstruction and rendering. Existing research on novel view synthesis has focused on two main approaches to neural rendering compression. First, NeRF compression techniques have evolved through explicit grid-based representations and parameter-reduction strategies. These methods include Instant-NGP, TensoRF, K-planes, and DVGO, which attempted to improve rendering efficiency by adopting explicit representations. Compression techniques, broadly categorized into value-based and structural-relation-based approaches, emerged to tackle computational limitations. Value-based methods such as pruning, codebooks, quantization, and entropy constraints aim to reduce parameter count and streamline model architecture. Researchers from Monash University and Shanghai Jiao Tong University have proposed HAC++, an innovative compression framework for 3D Gaussian Splatting (3DGS). The proposed method exploits the relationships between unorganized anchors and a structured hash grid, using mutual information for context modeling. By capturing intra-anchor contextual relationships and introducing an adaptive quantization module, HAC++ aims to significantly reduce the storage requirements of 3D Gaussian representations while maintaining high-fidelity rendering capabilities. It also represents a significant advancement in addressing the computational and storage challenges inherent in current novel view synthesis techniques. The HAC++ architecture is built upon the Scaffold-GS framework and comprises three key components: Hash-grid Assisted Context (HAC), Intra-Anchor Context, and Adaptive Offset Masking. The Hash-grid Assisted Context module introduces a structured, compact hash grid that can be queried at any anchor location to obtain an interpolated hash feature. The intra-anchor context model addresses internal anchor redundancies, providing auxiliary information to enhance prediction accuracy. The Adaptive Offset Masking module prunes redundant Gaussians and anchors by integrating the masking process directly into rate calculations. The architecture combines these components to achieve comprehensive and efficient compression of 3D Gaussian Splatting representations.
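To make the hash-grid-assisted context idea more concrete, below is a minimal, hypothetical PyTorch sketch of the general mechanism: an anchor's position queries a feature grid, and the resulting feature parameterizes an entropy model for that anchor's attributes, whose estimated bit cost can be added to the training loss. This is an illustration rather than the authors' implementation; the class names, dimensions, and the single-resolution dense grid (standing in for a multi-resolution hash grid) are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HashGrid(nn.Module):
    """Toy dense feature grid, queried by trilinear interpolation at anchor positions
    (a stand-in for the paper's multi-resolution hash grid)."""

    def __init__(self, resolution: int = 32, feat_dim: int = 16):
        super().__init__()
        self.grid = nn.Parameter(torch.zeros(1, feat_dim, resolution, resolution, resolution))

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) anchor positions normalized to [-1, 1]^3
        coords = xyz.view(1, -1, 1, 1, 3)
        feats = F.grid_sample(self.grid, coords, align_corners=True)  # (1, C, N, 1, 1)
        return feats.view(self.grid.shape[1], -1).t()                 # (N, C)


class ContextHead(nn.Module):
    """Predicts a Gaussian prior (mean, scale) plus a quantization step per attribute channel."""

    def __init__(self, feat_dim: int = 16, attr_dim: int = 50):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, attr_dim * 3))

    def forward(self, hash_feat: torch.Tensor):
        mu, log_scale, log_q = self.mlp(hash_feat).chunk(3, dim=-1)
        return mu, log_scale.exp(), log_q.exp()


def estimated_bits(attr, mu, scale, q):
    """Rate estimate: probability mass of the (noisily) quantized value under the predicted prior."""
    noisy = attr + (torch.rand_like(attr) - 0.5) * q  # additive-noise proxy for quantization
    prior = torch.distributions.Normal(mu, scale.clamp_min(1e-4))
    p = prior.cdf(noisy + q / 2) - prior.cdf(noisy - q / 2)
    return -torch.log2(p.clamp_min(1e-9)).sum()


# Usage: the rate term would be added to the rendering loss during training.
grid, head = HashGrid(), ContextHead()
anchor_xyz = torch.rand(1024, 3) * 2 - 1   # anchor positions
anchor_attr = torch.randn(1024, 50)        # anchor features / offsets / scales
mu, scale, q = head(grid(anchor_xyz))
rate_loss = estimated_bits(anchor_attr, mu, scale, q) / anchor_xyz.shape[0]
```

In a full pipeline, the quantized attributes would then be entropy-coded (for example with an arithmetic coder) using the same predicted priors, which is what ties the training-time rate estimate to the final file size.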
The experimental results demonstrate HAC++'s remarkable performance in 3D Gaussian Splatting compression. It achieves unprecedented size reductions, exceeding a 100-fold reduction compared to vanilla 3DGS across multiple datasets while maintaining, and in some cases improving, image fidelity. Compared to the base Scaffold-GS model, HAC++ delivers over a 20-fold size reduction with enhanced performance metrics. While alternative approaches like SOG and ContextGS introduced context models, HAC++ outperforms them through more comprehensive context modeling and adaptive masking strategies. Moreover, its bitstream consists of carefully encoded components, with anchor attributes, entropy-coded using arithmetic encoding, representing the primary storage component. In this paper, the researchers introduced HAC++, a novel approach to address the critical challenge of storage requirements in 3D Gaussian Splatting representations. By exploring the relationship between unorganized, sparse Gaussians and structured hash grids, HAC++ introduces an innovative compression methodology that uses mutual information to achieve state-of-the-art compression performance. Extensive experimental validation highlights the effectiveness of this method, enabling the deployment of 3D Gaussian Splatting in large-scale scene representations. While acknowledging limitations such as increased training time and indirect anchor relationship modeling, the research opens promising avenues for future investigation into computational efficiency and compression techniques for neural rendering. Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project. Sajjad Ansari is a final-year undergraduate at IIT Kharagpur. As a tech enthusiast, he delves into the practical applications of AI with a focus on understanding the impact of AI technologies and their real-world implications. He aims to articulate complex AI concepts in a clear and accessible manner.
  • WWW.CNET.COM
    Best Cheap Vacuums for 2025
    Our picks: the Tineco A11 is the best overall cheap cordless vacuum ($250 at Amazon); the Eufy RoboVac 25C is the best overall cheap robot vacuum ($149 at Walmart); the Dirt Devil Grab & Go Plus is the best cheap handheld vacuum ($38 at Amazon); the Ionvac SmartClean 2000 is the best robot vacuum under $100 ($90 at Walmart); and the Lubluelu 009 is the best cordless vacuum under $100 (currently out of stock at Walmart).
    What's the best cheap vacuum? Getting a good vacuum at a great value is possible in 2025 with so many options on the market. There are even cordless vacuums that clean very efficiently and have a decent price tag. We've rounded up a variety of affordable vacuum types so you can find the right one for your home. A good vacuum makes keeping a clean home easier. A vacuum can be a large purchase, but with all of the options available, it doesn't have to be an expensive one. You can get a great cordless vacuum at an affordable price that can handle dog hair and steps. We've found many types of vacuums to fit your home's needs. CNET's top pick among the best cheap vacuums is the Tineco A11 stick vac. This high-performance cordless vacuum is the cheapest model on our list of best vacuums, so it's only natural for it to appear atop our list of budget-friendly vacuum cleaners. You don't have to limit yourself to stick vacs. If you prefer something a bit more hands-off, then you'll want one of the best cheap robot vacs. In that category, our top pick is the Eufy RoboVac 25C, which drops to as low as $150 when on sale. This helpful robot is another one that makes its way onto our overall best vacuum list and impressed us with consistent and thorough cleaning across many floor types. But we have even more picks for cheap vacs for every type of budget, and we'll continue to update this list as new contenders pass through our testing facility.
    Best cheap vacuum cleaners for 2025
    Tineco A11 ($250 at Amazon). Pros: great price for a reliable vacuum; works well on carpet and hardwood; handles pet hair. Cons: short battery life; release valve can be stubborn. Specs: 35-minute battery life/runtime; 5.62 pounds; 0.6-liter bin capacity; HEPA anti-allergy filter; cordless stick type; $289 list price. CNET score: 8.3/10 (Performance 9.1, Features 7). As a top performer in our current cordless vacuum test group, the Tineco A11 stick vacuum represents an outstanding deal. In fact, we think so highly of this cordless vacuum that it also appears on our full list of best vacuum cleaners. When it comes to cleaning up pet hair, barely a trace of the material remained after the machine vacuumed mid-pile carpeting and hardwood floors. Some strands were left visible when traveling across our low-pile test carpet. Mid-pile carpeting usually causes more problems for vacuums, but the Tineco A11's solid battery life delivers up to 35 minutes of uninterrupted run time. The A11 series design looks nice and is functional, with a dustbin that's easy to completely empty. There's also a handy trigger-lock lever to keep the vacuum running without constant finger pressure. The Tineco A11 ships with some helpful accessories, including a power brush, a mini power brush, a two-in-one dusting brush, and a crevice tool for versatile cleaning.
    Eufy RoboVac 25C ($149 at Walmart). While "you get what you pay for" is frequently true, with the Anker Eufy RoboVac 25C the low price doesn't tell the entire story.
In our testing of this device, we found it performs very well, putting up scores that aren't too far off from more expensive models. For instance, its ability to scour sand from hardwood floors (78.9%) wasn't too far below that of the $460 Roborock S8. On low- and mid-pile carpets, the RoboVac 25C managed to suck up an average of 54% and 52% of the sand, respectively. What is the current asking price? Just $149 at Walmart.
  • WWW.CNET.COM
    Today's NYT Mini Crossword Answers for Monday, Jan. 27
    Looking for the most recent Mini Crossword answer? Click here for today's Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles. Sometimes I'm too practical for the NYT Mini Crossword. 6-Across asked for a "cellphone pop-up," and I kept thinking of those PopSockets that people stick on the back of their phones, whether to give themselves a better grip or a way to prop the phone up for shooting videos. The answer was ... not that kind of a pop-up. Need some more help with today's Mini Crossword? Read on. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips. The Mini Crossword is just one of many games in the Times' games collection. If you're looking for today's Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET's NYT puzzle hints page. Read more: Tips and Tricks for Solving The New York Times Mini Crossword. Let's get at those Mini Crossword clues and answers.
    The completed NYT Mini Crossword puzzle for Jan. 27, 2025.
    Mini across clues and answers
    1A clue: Gold medalist Ledecky. Answer: KATIE
    6A clue: Cellphone pop-up. Answer: ALERT
    7A clue: What gamblers and actors study. Answer: LINES
    8A clue: Wood traditionally used for black piano keys. Answer: EBONY
    9A clue: Anger. Answer: IRE
    Mini down clues and answers
    1D clue: Salad green derived from wild mustard. Answer: KALE
    2D clue: Cover story told in court. Answer: ALIBI
    3D clue: Choir member. Answer: TENOR
    4D clue: __ Adler, character who outsmarted Sherlock Holmes. Answer: IRENE
    5D clue: Site that dubs itself "the global marketplace for unique and creative goods." Answer: ETSY
    How to play more Mini Crosswords
    The New York Times Games section offers a large number of online games, but only some of them are free for all to play. You can play the current day's Mini Crossword for free, but you'll need a subscription to the Times Games section to play older puzzles from the archives.
  • WWW.FORBES.COM
    Here's How Big LLMs Teach Smaller AI Models By Leveraging Knowledge Distillation
    In today's column, I examine the rising tendency of employing big-sized generative AI and large language models (LLMs) to sharpen smaller-sized AI, or SLMs (small language models). It is happening with increasing regularity; a new trend is underway. This makes indubitable sense. Here's why. Larger AI models readily contain a broad level of knowledge-related aspects due to having the digital memory space available to handle it. Smaller AI models are usually tight on space and not as widely data-trained. If there are elements that we want a smaller AI model to have, and the larger models contain them, a kind of transference can be undertaken, formally known as knowledge distillation, since you distill or pour from the bigger into the smaller AI. Let's talk about it. This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
    Big Versus Small In AI Models
    The AI world currently has two distinct pathways regarding generative AI: there are large language models (LLMs) and there are small language models (SLMs). If you use any of the major generative AI apps such as ChatGPT, GPT-4o, o1, o3, Claude, Llama, Gemini, etc., those are considered large language models due to being massively scaled. Tons and tons of online content was scanned to make those LLMs. The pattern-matching was immense and has been stored in a large-scale internal data structure within the AI. One issue with LLMs is that they chew up a lot of digital memory space and consume a slew of computing cycles to perform their processing. That's why they are conventionally accessed in the cloud: they rely on expensive high-end computer servers and immense disk drives accordingly. The unfortunate downside is that you need an online connection to access them, and the usage can be costly if you are paying a fee for your various prompts and responses. A means of solving this situation entails making use of small language models (see my discussion at the link here). They are often small enough that you can run them directly on your smartphone or laptop. No need to depend on an Internet connection. The cost of processing drops to near zero (assuming that the vendor doesn't opt to apply some other hidden billing or charges). The rub is this: SLMs are not usually nearly as capable as LLMs. You can't expect something that holds only five pounds of gold nuggets to be as extensive as one that holds a hundred pounds of gold nuggets. An AI maker that establishes an SLM will typically aim to focus the SLM on some specific topic or domain. The SLM is considered narrow concerning the range of said-to-be knowledge that it holds. For more about how SLMs are able to squeeze a lot into a smaller framework, see my coverage at the link here.
    Helping SLMs Via LLMs
    Suppose that there is something in an LLM that we would greatly wish to have in an SLM. For example, assume for the sake of discussion that we have in hand an SLM that lacks any semblance of info about what the stock market is all about. That wasn't a topic covered during the initial data training of the SLM. In contrast, seemingly all LLMs would have encountered data about the nature of the stock market when their initial data training was scanning widely across the whole Internet. We would expect that an LLM would have plenty of data about stock market purposes and uses. How can we leverage the LLM to aid the SLM in getting up to speed about the nature of the stock market? A person who is a techie would likely right away think about doing some kind of internal data transfer from the guts of the LLM to the innards of the SLM. Maybe use an online tool to go ahead and copy or extract the stock market portions of the LLM. Then, use a similar tool to insert or copy into the SLM that particular extraction. Voila, problem solved. Well, yes, that's a potential means of doing a transfer from one model to another model. There are difficulties afoot. The odds are that the internal structures of the LLM are different from those of the SLM. In that sense, the conversion of the extraction into something amenable to an insertion into the SLM can be problematic. I'm not saying it isn't solvable, only that it can be a bear to deal with (costly, troublesome, and so on). Maybe there's another way to handle the transference. Yes, indeed, we can do something much easier, namely lean into the fact that generative AI is based on the entering of prompts and the generating of responses. That is how they are natively designed to work. We could perhaps prompt our way to get the data from an LLM and prompt our way to provide that data to the SLM. Prompts are wonderful, as you will soon see.
    Example Of LLM To SLM Transference
    Let's log into two AI systems: one is an LLM, and the other is an SLM. We will proceed to use the LLM to essentially train or teach the SLM about the stock market. The LLM and SLM are to be directly connected to each other. They carry on a conversation. If you've ever used generative AI, you've undoubtedly engaged in a conversation whereby you enter a prompt, the AI responds, you enter another prompt, the AI responds, and so on. The LLM and SLM will do the same, though just with each other, and there isn't a human in the loop. Consider the LLM to be a teacher in this scenario, and the SLM to be a student. Here we go.
    LLM (Teacher AI): This is an overview introduction to the stock market. The stock market is a complex system where shares of publicly traded companies are bought and sold. Do you know what a stock is?
    SLM (Student AI): No. What is a stock?
    LLM (Teacher AI): A stock represents ownership in a company. Owning a share means owning a piece of that company's equity.
    SLM (Student AI): Got it. Do prices of stock shares fluctuate or do they remain fixed in value?
    LLM (Teacher AI): Prices of shares fluctuate based on supply and demand, influenced by factors like company performance, market trends, and investor sentiment.
    Take a close look at the snippet of the AI-to-AI conversation that I've provided above. One crucial aspect is that this data transfer is unlike a conventional form of data transfer. In a techie-oriented data transfer, you would identify a set of data to extract and figure out a place in the targeted SLM to plop that data. None of that is happening in this alternative approach. Instead, the extraction is varied, taking place by conversation, and the insertion is varied, occurring based on the dialogue underway rather than a straight-ahead data transfer. Pretty nifty.
    AI Teacher And The AI Student
    The beauty is that the data training is working at a higher level of consideration and not at the bits-and-bytes level. The LLM and SLM could carry on with this conversation and extensively have the LLM tell as much as it can about the stock market to the SLM.
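As a rough illustration of the prompt-based approach described above, here is a minimal sketch of such a teacher-student loop. It is not tied to any particular vendor: query_teacher and query_student are hypothetical placeholders for whatever API calls the LLM and SLM actually expose, and the collected transcript would typically be used afterward to fine-tune the student.

```python
def query_teacher(prompt: str) -> str:
    """Placeholder: send `prompt` to the large model and return its reply."""
    raise NotImplementedError("wire this to your LLM endpoint")


def query_student(prompt: str) -> str:
    """Placeholder: send `prompt` to the small model and return its reply."""
    raise NotImplementedError("wire this to your SLM endpoint")


def distill_topic(topic: str, turns: int = 10) -> list[dict]:
    """Let the teacher explain `topic` while the student asks follow-up questions,
    collecting the exchange as (instruction, response) pairs for later fine-tuning."""
    transcript = []
    lesson = query_teacher(
        f"You are teaching a smaller model about {topic}. "
        "Explain one key concept, then wait for its question."
    )
    for _ in range(turns):
        question = query_student(
            f"Your teacher said:\n{lesson}\nAsk one short follow-up question."
        )
        answer = query_teacher(
            f"The student asked: {question}\nAnswer clearly and concisely."
        )
        transcript.append({"instruction": question, "response": answer})
        lesson = answer
    return transcript


# Usage: pairs = distill_topic("the stock market"); then fine-tune the SLM on `pairs`.
```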
If that seems like a slow way to do things, since conversations are a laborious task, remember that this dialogue is happening between two AI systems. The speed of this dialogue could be far faster than any dialogue of a human-AI caliber. The pace would be limited only by how fast the two AIs can process prompts and responses. In the AI field, this process of having two AI models undertake a data transference is referred to as knowledge distillation. We are distilling data, or some would say knowledge, from one AI model to another. Does the direction always have to be from an LLM to an SLM? Nope.
SLM As Teacher And LLM As Student
Sometimes we might want the SLM to be the teacher and the LLM to be the student. The circumstance usually goes like this: an SLM has been data-trained on some niche or been given extensive data training on a narrow topic that isn't fully covered in an LLM. Suppose we want the LLM to also have the same niche or depth as found in the SLM. Easy-peasy, just do the same as mentioned previously but put the SLM in the driver's seat. Here's an example.
SLM (Teacher AI): Do you know what synthetic aperture radar (SAR) is?
LLM (Student AI): I only know of the acronym and not much more. Tell me about SAR.
SLM (Teacher AI): It's a type of radar used to create detailed images, often from satellites or aircraft, by using the movement of the radar platform to simulate a larger antenna. SAR processes signals to achieve high-resolution imaging, regardless of weather or lighting conditions.
LLM (Student AI): Got it. How does SAR achieve detailed resolution?
SLM (Teacher AI): By combining multiple radar signals over time as the platform moves, it synthesizes a larger virtual antenna for higher resolution.
In this instance, the LLM knew of the topic but didn't have any further details. The SLM was able to share with the LLM aspects that will now also be contained in the LLM.
Permutations And Combinations
I trust that you already envision these four possibilities:
(1) LLM-to-SLM. We do a distillation from a large language model to a small language model.
(2) SLM-to-LLM. We do a distillation from a small language model to a large language model.
(3) LLM-to-LLM. We do a distillation from a large language model to another large language model.
(4) SLM-to-SLM. We do a distillation from a small language model to another small language model.
You saw the first two possibilities, LLM-to-SLM and SLM-to-LLM, in the examples noted above. There are times when you might want an LLM-to-LLM distillation. Why might that be? Please be aware that the LLMs of different AI makers are data-trained on different portions of the Internet. Sure, there is a huge amount of overlap in the data they each scanned, but there are still some differences. It could be that one LLM covered aspects in its scan that another LLM would find handy to have. The same can be said about SLM-to-SLM distillation. There might be occasions wherein one SLM has something that we want shared with a different SLM.
Things Can Go Wrong
Many aspects can go wrong with this prompt-based approach to distillation. Suppose that the model doing the teaching does a poor job of deciding what to share with the other model. In my example about the stock market, the LLM doing the teaching might fail to cover all the things that the SLM ought to be trained in. The dialogue could meander. It might omit important points. If you compare this form of data transfer to a strict data-to-data internal transfer of a relatively bound and precise nature, a lot of open-ended issues can readily arise. The student model could also mess up. It might fail to ask suitable questions. It might falter when ingesting the info being provided by the teacher model. The interpretation of what the teacher model indicates could be miscast when it is then stored in the student model. Not wanting to seem like a Gloomy Gus, both the teacher model and the student model could each falter, doing so at various junctures of the distillation process. One moment the teacher model goofs; the next moment the student model goofs. Yikes, it could be somewhat akin to those old Abbott and Costello skits, such as the classic "Who's On First?" Astute AI developers who use prompt-oriented distillation are aware of those challenges and can take various precautions to cope with them.
Research On AI Knowledge Distillation
If you are intrigued by the emergence of AI knowledge distillation, there is a sizable amount of research on the evolving topic. A handy place to start would be to read the recent survey and framework paper entitled "A Survey on Knowledge Distillation of Large Language Models" by Xiaohan Xu, Ming Li, Chongyang Tao, Tao Shen, Reynold Cheng, Jinyang Li, Can Xu, Dacheng Tao, and Tianyi Zhou, arXiv, October 21, 2024, which made these salient points (excerpts):
"The concept of knowledge distillation in the field of AI and deep learning (DL) refers to the process of transferring knowledge from a large, complex model (teacher) to a smaller, more efficient model (student)."
"The escalating need for a comprehensive survey on the knowledge distillation of LLMs stems from the rapidly evolving landscape of AI and the increasing complexity of these models."
"The key to this modern approach lies in heuristic and carefully designed prompts, which are used to elicit specific knowledge or capabilities from the LLMs. These prompts are crafted to tap into the LLMs' understanding and capabilities in various domains, ranging from natural language understanding to more complex cognitive tasks like reasoning and problem-solving."
"The use of prompts as a means of knowledge elicitation offers a more flexible and dynamic approach to distillation. It allows for a more targeted extraction of knowledge, focusing on specific skills or domains of interest. This method is particularly effective in harnessing the emergent abilities of LLMs, where the models exhibit capabilities beyond their explicit training objectives."
The study does a helpful job of identifying the range of AI research in this discipline and provides plentiful references and citations for you to dig further into the weighty matter.
Where Things Are Headed
As the number of generative AI models proliferates, the odds are that we will want to do more and more distillation among the models. Each model will inevitably be missing something that another model contains, and we will see great value in sharing across AI models. This raises some hefty AI ethics and AI legal questions (see my discussion at the link here), and doing distillation across variously owned models could be a legal quagmire. Another twist is that there are opportunities to advance in this realm via aspects such as multi-teacher distillation and/or multi-student distillation. The upshot is this: we might have several LLMs that we want to all at once distill simultaneously into, say, an SLM. Likewise, we might have a bunch of SLMs that we want to all at once be data-trained at the same moment in time. Exciting prospects.
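Building on the hypothetical loop sketched earlier, a multi-teacher round could look something like the following. Again, this is only an illustrative sketch: the teacher callables are placeholders for whatever model clients you actually use, and each teacher's answer simply becomes another training pair for the student.

```python
from typing import Callable


def multi_teacher_round(question: str, teachers: list[Callable[[str], str]]) -> list[dict]:
    """Collect one answer per teacher for a single student question; every answer
    becomes another training pair for the student model."""
    pairs = []
    for ask in teachers:
        answer = ask(f"A smaller model asked: {question}\nAnswer clearly and concisely.")
        pairs.append({"instruction": question, "response": answer})
    return pairs


# Usage: pairs = multi_teacher_round("What moves stock prices?", [ask_model_a, ask_model_b]),
# where ask_model_a / ask_model_b are placeholders for your own LLM clients.
```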
I will give you a final thought for now that you are welcome to mindfully ponder. Albert Einstein famously said: "It is the supreme art of the teacher to awaken joy in creative expression and knowledge." When a human teaches another human, the usual aim is more than just a knowledge transfer. The hope is that the teacher will inspire the student. The student will see the grand value in whatever is being taught. Should we expect the same when doing an AI-to-AI teacher-student endeavor? Hogwash, some might exhort. AI is not sentient. It doesn't need nor care about being inspired. This is entirely and solely about moving data from one pile to another. Period, end of story. That might be the case at this time, but if we attain artificial general intelligence (AGI), or possibly even artificial superintelligence (ASI), will that still be the same? Compelling arguments suggest that these softer human-oriented facets might become more pronounced. Well, there you go; I've sought to distill this topic sufficiently for you. May your learning in life be both highly informative and breathtakingly inspiring.
  • WWW.FORBES.COM
    Year Of The Snake: Can Your Business Shed Its Skin Fast Enough For AI?
    As 2025 unfolds, we find ourselves in the Year of the Snake, a symbol of adaptability and resilience. These qualities, often symbolized by the snake's ability to shed its skin and start anew, will be key to navigating the challenges and seizing the opportunities that lie ahead. Born under this sign, I've always been intrigued by the dual nature of the snake: both feared and admired. Embracing this spirit of transformation will be crucial as we navigate the complexities of the future. 2024 was a significant year for artificial intelligence (AI). Groundbreaking generative models exploded onto the scene, sparking global conversations about their transformative potential. These developments reshaped the tech world, creating exciting new opportunities for businesses and individuals alike to thrive in this dynamic new environment. According to Statista, the AI market grew explosively, hitting over $184 billion in 2024, a jump of nearly $50 billion from the previous year. And there's more growth ahead, with projections suggesting the market could surpass $826 billion by 2030. So, throughout 2025, it's time to move from merely exploring AI to making it work for us. The excitement of last year laid the groundwork, and now it's about turning AI's potential into real results. Just as a snake sheds its skin to start anew, for our businesses to flourish we have to leave old habits behind and embrace AI-driven solutions.
    The Snake's Wisdom: A Masterclass in Adaptation
    Thriving across diverse environments, from arid deserts to lush rainforests, the snake embodies adaptability. This parallel resonates deeply in today's dynamic market. Businesses must embrace new technologies and strategies, like AI-driven solutions, to meet evolving customer needs and outperform competitors. It's not just about survival; it's about thriving in diverse scenarios. Take BMW as an example, which initially relied on traditional fixed maintenance schedules. This approach often led to unexpected downtime and wasted resources. The introduction of AI changed the game by enabling predictive maintenance through sensors and data analysis. This shift has kept BMW's production lines running more smoothly, reducing unexpected stoppages and even supporting sustainability goals. Combined, these efforts mean their AI system helps prevent around 500 minutes of disruption yearly in vehicle assembly. Adapting doesn't stop there; it extends to how companies manage change. My colleague Jorrit explains how AI is now crucial for understanding employees' strengths and creating tailored learning experiences, moving away from the usual one-size-fits-all method. After all, people are the true drivers of change. Implementing AI in change-management strategies helps empower employees along the transformation journey.
    Focused Growth: Directing Progress with AI
    A snake's focused movement is a reminder of the importance of clear objectives and directed progress. In the context of AI, this means reassessing your goals and using the technology to accelerate your journey. This understanding is being embraced by forward-thinking companies like Delta Airlines. Recognizing the need to enhance customer satisfaction, Delta sought to integrate customer-facing perspectives into its management structure. By using AI, it successfully filled 25% of corporate and management positions with individuals from customer-facing roles.
This approach not only improves the company's employee experience but also delights customers and strengthens the organization's competitive edge. Through this strategy, Delta Airlines ensures that its management team is equipped with firsthand customer insights, ultimately fostering a more customer-focused corporate environment.
Year of the Snake: Winding Towards Success
The Year of the Snake offers a compelling metaphor for businesses: they must cultivate both the snake's agility in adapting to rapid change and its laser-like focus in executing precise strategies to thrive in the AI-powered era. From revolutionizing maintenance practices at BMW to reshaping talent acquisition at Delta Airlines, AI is not just a technological advancement; it is a catalyst for achieving strategic objectives and gaining a competitive edge. The era of AI exploration is giving way to execution. Those who, like the snake, seamlessly navigate change, adapt with agility, and strike with precision will not only survive but thrive in this new technological frontier, leading innovation and achieving extraordinary outcomes in an increasingly AI-driven world.
  • ARSTECHNICA.COM
    Jeep's first battery EV is not what we expected: the 2024 Wagoneer S
    Drag optimization means it's very quiet inside, but it's also quite expensive. By Michael Teo Van Runkle, Jan 27, 2025. The Wagoneer S is more like an electric Cherokee than a Wrangler EV. Jeep provided a night in a hotel so Ars could attend the Wagoneer S first drive. Ars does not accept paid editorial content. This year marks the return of the Jeep Wagoneer, which formerly served as a more luxurious version of the Cherokee but now hits the market as Jeep's first full EV. The challenge? How to merge the modern electric lifestyle with the outdoorsy, rugged ethos that defines Jeep as a brand, alongside the more recent addition of the internal-combustion Grand Wagoneer SUV's enormous luxury. First of all, the new Wagoneer S wound up much smaller in person than I expected. The overall profile falls more in line with the shape of mid-size electric crossovers including the Kia EV6, Hyundai Ioniq 5, Chevrolet Equinox and, of course, Tesla's Model Y. But the interior volume belies that relatively compact exterior, with plenty of space for me at 6'1" (185 cm) to sit comfortably in both the front and rear seats. Total cargo volumes of 30.6 cubic feet (866 L) with the second row up and 61 cubic feet (1,727 L) with the second row folded flat end up mattering less than the large floor footprint, because the height used to calculate those measurements drops with the low sloping roofline and rear window. Much of the interior space can be attributed to the packaging of the Wagoneer EV's battery. Rather than going for all-out kilowatt-hours in a dedicated skateboard layout, Jeep instead used the Stellantis group's STLA Large platform, in this case stuffed with a 100.5 kWh lithium-ion pack built on a 400 V architecture. That's enough for an EPA-estimated 303 miles of range (487 km), a solid figure but not a seriously impressive efficiency stat. In comparison, the world-beating Lucid Air Pure RWD manages about 40 percent more range per kilowatt-hour, and a Polestar 3 AWD does about 18 percent worse. Claimed DC fast-charge times of 23 minutes for a 20-80 percent top-up, or 100 miles (160 km) in 10 minutes, similarly get the job done without standing out from the pack. That modular STLA Large chassis can house either a full internal-combustion engine, a hybrid powertrain, or fully electric components. The Wagoneer S uses two matching 335 hp (250 kW) motors, front and rear, for a combined 600 hp (447 kW) and 618 lb-ft of torque (838 Nm). In typical EV fashion, the latter comes on quick and makes this undoubtedly the fastest-accelerating Jeep ever, as I learned while battling horrendous headwinds in fire-ravaged Southern California (which served as something of a nonstop reminder of the importance of taking baby steps, a la Jeep's first EV, toward a more sustainable transportation future). Pushing deep into the "throttle" pedal, the Wagoneer S will happily chirp all four tires in Sport mode.
And the jerk thrusting my torso and skull back into the plush seat suggests that Jeep's claimed 0-60 mph time of 3.4 seconds might just be accurate, potentially thanks to being able to do a true launch by stepping on the brake and gas pedals simultaneously (possible because Jeep chose to retain more standard mechanical brakes rather than a brake-by-wire system as on the EV6/Ioniq siblings and Model Y). The suspension tuning definitely trends toward the typical tautness of today's crossover segment, where aspirational sporty dynamics can sometimes create harsh and uncomfortable ride quality. But I still might have ventured to call the Wagoneer S somewhat softer than most of the competition, until the roughest of roads revealed the 5,667 lb (2,570 kg) curb weight. For an EV, that figure falls roughly in the middle of the pack, but this crossover weighs about as much as a full-size internal-combustion three-row SUV. Still, even at highway speeds (in gale-force winds) or on those roughest of roads, the Wagoneer S remains shockingly quiet. And not just to enhance the experience of the Wagoneer S Launch Edition's 1,200 W McIntosh sound system. Instead, Jeep exterior designer Vince Galante walked me through the design process, which kicked off with a targeted 0.30 coefficient of drag despite the need to stick with a squared-off, upright SUV posture typical of Jeeps throughout history. "On the exterior design portion, the aerodynamic drag is our biggest contributor," Galante told me. "It kind of comes up off the hood, up the A pillar, and tapers down towards the back, and finishes in a square, yet tapered pillar reminiscent of the original Wagoneer. But through the middle of the car, it's basically ideal for what the wind wants to do." From the front or side perspective, this Wagoneer looks almost as boxy as a 1980s Jeep. But a rear viewing angle reveals the massive rear wing creating that illusion, which sits well off the sloping line of the rear roof and glass. "Anytime we do a floating element, we think 'Yeah, there's no way engineering's gonna let us get away with this,'" Galante laughed. "We work really collaboratively with the engineers, and they were like, 'Let's test it. Let's see what it does.' And they came back and said, 'You know, yeah, this has potential. But you guys gotta make it sit off the surface three times more dramatically.'" Galante estimates the original wing design rose up two inches, while the final production version is more like nine inches off the rear window. He also pointed out a host of other less obvious details, from body panels that step in by fractions of millimeters to differently rounded radii of wheel arch edges, and especially the confluence where the A pillar connects to the body. "The windshield, the A pillar, the side glass, the mirror, the post that holds the mirror, the fender, everything comes together there," he said. "I think every vehicle I've ever worked on, that was the last thing to finalize in the wind tunnel. I mean, we're talking tenths of millimeters for some of the iterations that we're doing in those areas. Especially the front edge of the A pillar, I can recall trying twenty, thirty, forty different radii on there to get that just right." All the aero considerations attempt to coax air to stick to surfaces, then break off suddenly and evenly.
The rear wing therefore pushes air down toward the rear window, while creating as little turbulence as possible. The final range figure, critically (and barely) cracking 300 miles, justified so much refinement in Jeep's new rolling-road wind tunnel, thanks to a final Cd of 0.294. Maybe juggling the production cost savings of the STLA Large platform dictated such extensive aerodynamic efforts more than a dedicated skateboard battery layout might have, but the resulting quietude that combating those inefficiencies produced does truly border on a luxury experience, even if we're not quite at Audi (nor Lucid) levels of silence. On the interior, Jeep also tried to lean into the Wagoneer S's sustainability, using quality materials with textural designs and as little piano-black plastic as possible. The fabrics, plastics, and aluminum trim come almost entirely from recycled sources (62 percent for suede and 100 percent for fabric and carpeting, in fact), and you'll see zero chrome anywhere on the car, since chroming is apparently one of the most environmentally deleterious processes in all of automaking. But the Wagoneer S similarly leans into a tech-heavy user experience, with almost 55 inches of screen visible from the front seats: the gauge cluster, center infotainment, climate controls, passenger dash screen, and digital rearview mirror all contribute to that total. Climate control especially seems critical, and it is an often overlooked element for many EV manufacturers. Rather than a full panoramic glass roof, as on the Lucids and Polestars of the world, this Jeep gets a long sunroof with a retracting insulated cover to keep out heat. The excellent ventilated front and rear seats (and massaging, for the fronts!) also more efficiently cool down passengers. For my taste, the digitalization of driving went a little too far. I never enjoy a rotating shift knob, but this one clicks into gear with a positive heft. I also noticed some pixelation and latency in the gauge cluster's navigation maps, as if the refresh rate was too slow for the speed I was driving. Not that I started ripping up the road too much in this luxury crossover, or at least not more often than scientific experimentation demanded (a similar problem also affected the Dodge Charger EV we drove recently). Sport mode brought out some of my inner grinning child, but I actually preferred the Wagoneer S in Eco mode. So much power split between the front and rear wheels can create some torque steer, and throttle response that borders on touchy. The electrically assisted steering also prioritizes a heavy on-center zone, then snaps to light inputs with the slightest turn of the wheel, which made holding a steady line slightly distracting. Instead, Eco dulls down the throttle response and the steering becomes a bit less reactive. The Wagoneer S will then also more regularly disconnect the front wheels for improved efficiency, though at the hubs rather than the axles, so some reciprocating mass still saps precious electrons. It would be more efficient to disconnect the rears, but this decision also centers around maintaining some semblance of Jeep-ness, even if the Wagoneer S aligns most nearly with recent Cherokee and Grand Cherokee models rather than the off-roady Wrangler and Gladiator or the super-luxe Grand Wagoneer.
The forthcoming Trailhawk version promises to double down on 4x4 capability, with a locking rear differential, better tires, and hopefully better suspension than I experienced on a quick sojourn off the asphalt onto a slightly rutted gravel road east of San Diego. More importantly, cheaper trims will arrive later in 2025, since the Launch Edition's tall ask of $71,995 almost doubles the starting sticker of an Equinox EV, seriously eclipses either a Model Y, EV6, or Ioniq 5, and also somehow costs more than a Polestar 3 or even a Lucid Air. Jeep so far wants to keep pricing for those lower-spec Wagoneer EVs under wraps, though, even if the heart of the run will undoubtedly help the first electric Jeep more effectively escape from unfortunate comparisons to such stiff competition.