• www.cgchannel.com
3d-io releases Unwrella-IO
Tuesday, March 4th, 2025 | Posted by Jim Thacker

Originally posted on 19 June 2024 for the beta, and updated for the final release.

Tools developer 3d-io has released Unwrella-IO, a new standalone app based on the technology used in Unwrella, its UV unwrapping and packing plugin for 3ds Max and Maya. It is designed to automatically unwrap the UVs of 3D models in a few clicks, via an intuitive UI.

Turning 3d-io's UV unwrapping and UV packing technology into standalone apps
3d-io has recently been turning technology previously available as plugins for specific DCC applications into standalone apps for users of any DCC software. In 2024, it released Packer-IO, a free standalone UV packing tool based on the technology behind its existing UV-Packer plugins. Unwrella-IO does something similar, but for one of 3d-io's main commercial tools: Unwrella, its UV unwrapping and packing plugin for 3ds Max and Maya. The standalone version makes it possible to use the same technology with assets from other DCC applications, like Blender or Unreal Engine.

An intuitive, high-performance standalone UV unwrapping tool
3d-io describes Unwrella-IO as providing a natural UI and easy-to-understand controls for creating UVs, making it possible to unwrap 3D models with just a few clicks. In the video above, the UI looks almost identical to Packer-IO's (not entirely surprising, since UV-Packer is integrated into Unwrella), but with an additional section of the interface containing UV unwrapping controls. It includes preset modes for hard-surface and organic models, and mosaic unwrapping for 3D scans, plus controls for the placement of UV seams and the amount of UV stretching to tolerate. The other workflows are similar to UV-Packer's, with users able to drag and drop models into the app, and semi-automatically unwrap and pack the UVs.

Price, system requirements and release date
Unwrella-IO is compatible with Windows 10+. A macOS version is coming soon. Perpetual licenses cost €199 (around $210). It is also possible to upgrade from the 3ds Max or Maya plugin to the new standalone edition for €99 (around $105).

Read more about Unwrella-IO on 3d-io's website
Read the online documentation for Unwrella-IO

Have your say on this story by following CG Channel on Facebook, Instagram and X (formerly Twitter). As well as being able to comment on stories, followers of our social media accounts can see videos we don't post on the site itself, including making-ofs for the latest VFX movies, animations, game cinematics and motion graphics projects.
  • Playable Worlds' Stars Reach could be the next big sandbox sci-fi MMORPG
    venturebeat.com
    Stars Reach, the new sandbox online game coming from Playable Worlds, aims to be the next big sci-fi massively multiplayer online role-playing game. It's in the middle of a Kickstarter campaign, and it's going pretty well: Stars Reach hit its Kickstarter goal of $200,000 in one hour, and it has now crossed $499,000 with 23 days still to go.
  • Cohere's first vision model, Aya Vision, is here with broad, multilingual understanding and open weights, but there's a catch
    venturebeat.com
    Aya Vision 8B and 32B demonstrate best-in-class performance relative to their parameter size, outperforming much larger models.
  • Why Astro Bot's awards sweep may matter even more than you think
    www.gamedeveloper.com
    It both was and wasn't surprising to see Team Asobi's Astro Bot scoop up another game of the year win at the 2025 DICE Awards. As Game Developer news editor Chris Kerr put it, "there is catharsis to be found in the silly and absurd," and Astro Bot delivers that catharsis in spades. It's as worthy a contender as every other game of the year nominee from 2024, no doubt about that.

That said, plenty of small, cathartic games have been up for top end-of-year prizes for many years now, but few break past longer, more bombastic titles. You'll either see narrative behemoths like The Last of Us Part 2 sweep ceremonies, or watch developers get knocked out of the way as the ambitious design of games like Elden Ring barrels through the competition. To see the little blue-and-white bot tap-dancing past huge games like Indiana Jones and the Great Circle and Helldivers 2, while also skating by surprise 2024 hit Balatro (offering each of them a little kiss on the cheek as it passes), represents a break from the last decade's trend.

Is it a fluke? Luck? A consequence of split tickets derived from first-past-the-post voting systems? Maybe! Or maybe not? At the ceremony, Team Asobi studio director Nicolas Doucet told Game Developer there might be something bigger going on behind the wins, a "deeper message" that indicates game developers and players are ready to embrace "more compact" games.

"Fun doesn't have to come in a large size"
When we flagged down Doucet, he'd already passed by several times, accepting the studio's awards for technical, animation, and design achievement, as well as 'family game of the year.' He said he wished more of the studio could have been present in Las Vegas for the show, since the wins are a huge collective achievement for Team Asobi.

But give Doucet credit: he has frequently devoted time at award shows to praise not only his collaborators, but his peers in game development. At The Game Awards in December 2024, he thanked a certain developer in Kyoto for "showing there's innovation and quality consistently" in platformers, acting as a huge inspiration for him and Asobi.

"I've managed not to mention them, have you noticed?" he asked, eyes flickering across the crowd, possibly towards where Sony management was sitting. Risking heat for showing that kind of gratitude signals he's a leader aware of how the game industry is a shared ecosystem, even when companies are direct competitors.

Image via Team Asobi/Sony Interactive Entertainment.

That canniness was on display at DICE too. He noted that while Astro Bot had already picked up many Game of the Year awards, the DICE award was one awarded by his peers in game development, an honor handed down with an understanding of the kind of work it took to produce a bite-sized video game bursting with joy. "It's important that we made a game that's quite compact, that is not trying to be too big," he said. "I recognize the fact that it means it's a deeper message than we thought we had sent."

What message was that? "Fun doesn't have to come in a large size," he said. "It's not about volume. I think we believed [that] early on, and now after this, even more so."

"It's going to be a drive for us to keep things simple and really focus on quality over quantity."

"Quality over quantity" is a major struggle for the video game industry
Don't take Doucet's words as some knock on the bigger-budget hits Astro Bot has soared past.
Again, a director who goes out of his way to thank his direct competitors is someone making clear he doesn't view the business as a knock-down-drag-out brawl, and many of Sony's titanic games are honored in Astro Bot as collectible pals with cute outfits players can encounter on their adventure.

But it certainly is a gentle nudge against the game industry's direction since the heyday of the 3D platformers that inspired Astro Bot. As Game Developer's resident Star Wars freak, I find the history of Star Wars games a useful measuring stick here. Star Wars Jedi Knight II: Jedi Outcast took about a year and a half to make. Star Wars Jedi: Survivor took about three years, not counting the foundation it benefited from in its predecessor Jedi: Fallen Order. Star Wars Outlaws took around four years. Each game is stuffed to the brim with content and technical work that required larger and larger teams, but how much of that work is being appreciated by players?

Outlaws' huge budget made it all the more painful when it debuted to relatively soft sales. When large games made by large teams don't hit, that's a massive amount of money spent for little return.

Astro Bot wasn't at risk of falling into that trap. Former Gamesindustry.biz editor Christopher Dring reported in 2024 that it was developed in around three years by a team of roughly 60 people. That makes its 1.5 million copies sold in two months a very efficient return on investment for Sony. That's comparable to the reach of Dragon Age: The Veilguard, a game made by a much larger team in development for seven to 10 years, depending on how you measure it.

(An obligatory aside: the oversized budgets and development cycles for big-budget games are rarely the responsibility of the people working on them, and these comparisons are not an assessment of quality. That's something I have to write now because even some game industry professionals are being unbelievably weird about The Veilguard, a game that was well received by the 1.5 million-plus players who did pick it up.)

Doucet has it right. The industry excitement for Astro Bot is a message with great meaning. The cynical and cash-minded will look at its wins and think they should pivot to cute 3D platformers that appeal to players of all ages. The savvy and creative will see the same result and recognize there are a thousand great games you could make with the same budget and team size, and they just need those who control the purse strings to make it happen.
  • Nintendo announces a new Switch OLED bundle ahead of the Switch 2
    www.theverge.com
    The Nintendo Switch is now eight years old, and the company is announcing what could be its final retail bundle push before the introduction of the Switch 2. The new bundle includes the Nintendo Switch OLED system with a digital copy of Super Mario Bros. Wonder and a three-month Nintendo Switch Online individual membership for $349.99, a $67.98 savings over buying everything separately at regular prices.

Nintendo is releasing the new bundle on March 10th to celebrate MAR10 Day. On March 9th, the company will also offer a bunch of its Mario titles at a discount at retail stores including Best Buy, GameStop, Target, and Walmart. You can get Super Mario Odyssey, Mario Kart 8 Deluxe, Super Mario 3D World + Bowser's Fury, Super Mario RPG, Princess Peach: Showtime!, and Luigi's Mansion 2 HD for $39.99 each, as well as Mario vs. Donkey Kong for $29.99.

After eight years, the Nintendo Switch is on the cusp of outselling the Nintendo DS and becoming the company's best-selling gaming system of all time. Nintendo says it's taking risks with the Switch's successor as it proceeds with production. The company is planning a Switch 2-focused Nintendo Direct on April 2nd to share more details about the console.
  • The team behind Shredder’s Revenge and Streets of Rage 4 is making a fantasy beat ’em up
    www.theverge.com
    Dotemu, the studio and publisher best known for retro revivals like Teenage Mutant Ninja Turtles: Shredder's Revenge and Streets of Rage 4, has announced its next game, and this time it's a brand-new property. Called Absolum, it's a fantasy beat 'em up with roguelike elements; it's being described as a "rogue 'em up." Dotemu is developing the game alongside Guard Crush Games and Supamonk. The new game is expected to launch later this year on the Switch, PlayStation, and PC.

It looks like Absolum will retain the core beat 'em up action the developers are known for, and it will be playable both solo and co-op (local and online modes are both supported) with four different playable characters. Dotemu says that the game will include "branching pathways to explore, quests to discover, intriguing characters to encounter, and a deep variety of challenging bosses in store."

But instead of taking place in a familiar retro universe, Absolum is set in a new fantasy realm that the studio says is inspired by classics like Golden Axe. Here's the set-up:

Absolum features an engaging narrative with themes of foiling an unbridled, dictatorial power and cheating Death itself. In the world of Talamh, a cataclysmic event caused by magic prompts the Sun King Azra to conquer all lands and sources of magic through brutal warfare, slaughtering any wizard unwilling to serve him.

Talamh's hope now lies with a defiant band of rebels aided by a mysterious, mythical mentor known as Uchawi and the similarly powerful Root Sisters, who together oppose Azra's pursuit of total power by wielding an ancient, forbidden magic. These mythical forces empower our rebels with astonishing magic as they stoke a resistance, fight Azra's iron grip on Talamh, and discover the secret behind Azra's ever-growing dominion.

It's shaping up to be a busy year for Dotemu, which is also publishing the throwback Ninja Gaiden: Ragebound.
  • Top 25 AI-Related Highlights from the WEF Future of Jobs 2025 Report
    towardsai.net
    March 4, 2025 | Author(s): Murat Girgin | Originally published on Towards AI.

Understanding the intersection of artificial intelligence and tomorrow's workforce is the first step to being better prepared for the future of work.

Photo by Alex Knight on Unsplash.

As we delve deeper into the transformative effects of artificial intelligence (AI) on the job market, the World Economic Forum's Future of Jobs Report 2025 presents critical insights. This report sheds light on how AI and other technological trends are not just reshaping industries but also redefining the roles and skills required in the workforce. You might first read my previous article focusing on the key points of this report: The Future of Jobs 2025: What WEF Predicts for the Workforce of 2030.

Below are my top 25 highlights extracted from the report, providing a comprehensive overview of the current landscape and future projections in relation to AI.

01. Demand for AI and Machine Learning Specialists: Projected significant net growth in demand for AI and machine learning specialists, with anticipated net growth rates reaching up to 82% by 2030 across various sectors. (Job growth and decline (%), 2025-2030; source: WEF.)

02. Role of Sustainability Specialists: The integration of AI in roles focused on sustainability shows a rising trend, with demand expected to increase by approximately 30% as businesses prioritize environmental goals.

03. Evolution of Engineering Roles: Industrial and production engineering… Read the full blog for free on Medium. Published via Towards AI.
  • Mastering the Basics: How Decision Trees Simplify Complex Choices
    towardsai.net
    Author(s): Miguel Cardona Polo | Originally published on Towards AI.

Trees playing Baseball, by author, using DALL·E 3.

Decision trees form the backbone of some of the most popular machine learning models in industry today, such as Random Forests, Gradient Boosted Trees, and XGBoost.

Large Language Models (LLMs) are an exciting and very useful tool, but most real-world industry problems are not solved using LLMs. Instead, the majority of machine learning applications deal with structured, tabular data, such as large CSVs, Excel files, and databases. It is estimated that 70-80% of these tabular data tasks are solved using gradient boosting techniques like XGBoost, which rely on simple yet incredibly powerful decision trees.

One of the biggest advantages of decision trees is their interpretability. Unlike modern black-box models, decision trees provide clear, step-by-step reasoning behind predictions. This transparency helps businesses understand their data better, make smarter decisions, and move beyond just predictions.

In this article, you'll gain a deep understanding of how decision trees work, including:
- The math behind decision trees (optional for those interested).
- Python code to build your own decision tree from scratch.
- Two hands-on examples (regression & classification) with step-by-step calculations, showing exactly how a decision tree learns.

Don't miss these detailed walkthroughs to solidify your understanding!

Concept of Decision Trees
A decision tree is like a flowchart used to make decisions. It starts at a single point (called the root node) and splits into branches based on questions about the data. At each step, the tree asks a question like "is the value greater than X?" or "does it belong to category Y?". Based on the answer, it moves down a branch to the next question (asked at what are called the decision nodes). This process continues until the data reaches a final point (called a leaf), which gives the decision or prediction; this could be a Yes/No, a specific class, or a continuous number.

Take a look at this decision tree used to predict which students will pass an exam; it's based on the number of hours the student studied, their hours of sleep the day before the exam, and their previous grade.

Flow chart of decision tree used to predict which students will pass an exam. Image by author.

Each leaf node represents a group of data points that have similar characteristics and are therefore given the same prediction (Pass or Fail). For example, students who have studied between 2 to 6 hours, and have slept more than 6, are a similar group of students (from what was seen in the training data), and therefore the decision tree predicts they'll pass the exam.

Note that decisions can be made both on numerical data, like the hours slept, and on categorical data, like the previous grade achieved by the student. This is why decision trees are so popular for tabular data, such as spreadsheets and databases, as these can contain both types.

If you are wondering how this flowchart translates to data, we can plot it on a graph. The hours of sleep and study are represented as axes, and the previous grade as a cross (failed previous exam) or a circle (passed previous exam). We can place on the graph some example students for whom we want to predict their next exam result; the position of the crosses and circles indicates the hours of sleep and study for each student.

Partitioned graph of decision tree to predict which students will pass an exam. Image by author.
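Before moving on, it may help to see the flowchart above written out as plain code. This is a hand-written sketch of the tree the figures describe, not something the algorithm produced, and the exact boundary conditions are my assumptions where the text doesn't spell them out:

def predict_pass(hours_studied, hours_slept, passed_previous_exam):
    # Split described in the text: students who slept less than
    # 2 hours are predicted to fail (threshold taken from the article).
    if hours_slept < 2:
        return "Fail"
    # Leaf described in the text: studied 2-6 hours and slept more
    # than 6 hours -> predicted to pass.
    if 2 <= hours_studied <= 6 and hours_slept > 6:
        return "Pass"
    # The unhighlighted region of the graph: the prediction falls back
    # on the previous exam, so only previous passers are predicted to pass.
    return "Pass" if passed_previous_exam else "Fail"

print(predict_pass(hours_studied=4, hours_slept=7, passed_previous_exam=False))  # "Pass"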
You can check that the following graph represents the same decision tree as the flowchart does, where the blue dashed lines are the decision boundaries (thresholds) and each highlighted section represents a leaf node of the decision tree. There's an area left unhighlighted, as the prediction under those conditions is based on the student's last exam, so only those students who passed their last exam are predicted to pass.

Now let's look into how decision trees choose the questions and the numbers (thresholds on features) that make them accurate prediction models.

How Decision Trees Learn
As mentioned earlier, the goal is to split the data into smaller groups, so that similar data points are grouped together. Decision trees do this by asking questions and using thresholds (numbers or categories) on the training data.

A split in a decision tree is a point where the data is divided based on a specific feature and threshold, creating branches. For example, in the case discussed earlier, one feature was the number of hours a student slept, with a threshold of less than 2 hours. This split created a branch grouping students who slept less than two hours. These are predicted to fail their next exam.

To choose the best split, decision trees attempt all possible splits (features and thresholds) and pick the one with the lowest impurity, a value that indicates how mixed or diverse the data in a group is. Lower impurity means the group contains similar data, which is the aim of the learning process.

Impurity Measure
It's named the impurity measure because it captures the diversity in a group. For example, if you have a basket of fruit that contains only apples, there is no diversity; the basket is pure, and therefore the impurity is low. On the other hand, if the basket has a mix of apples, oranges, and bananas, it has high diversity and therefore high impurity.

There are impurity measures specific to regression tasks, where we predict a continuous number, and to classification tasks, where the target is a class. Here is one example of each.

Formula for the classification impurity measure (Entropy) and for the regression impurity measure (Variance). Image by author.

If you are interested in the intuition behind these formulas and fancy some example calculations, the section below is for you; otherwise, feel free to skip it.

Impurity Measure: A Deeper Look
First, let's build intuition for the Entropy formula by understanding how different splits yield higher or lower Entropy values. Consider two splits: the first, Split A, has 3 red and 2 purple balls; the second, Split B, has 4 red and 1 purple. Which one has the lower impurity?

Comparison of how two splits yield different entropies and how they are calculated. Image by author.

The winner is Split B. The 5 balls are more similar to each other in Split B, as there is a cleaner division between red and purple balls.

We can also visualise these two splits by graphing the Entropy formula and checking where they lie on the graph.

Graph showing how Entropy (impurity) changes with different proportions. Image by author.

You can check that Split B has a lower impurity than Split A, as the latter sits higher on the impurity axis. Splits with low impurity have one large proportion and one very small one, so the terms being summed are all small. Splits with more equal proportions, around the 0.5 mark, sit closer to the peak of the curve, higher up on the impurity scale.
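As a quick numerical check on the Split A vs. Split B comparison, here is the entropy calculation in a few lines of Python, a minimal sketch using the standard definition of entropy that the figure above refers to:

import numpy as np

def entropy(class_counts):
    # Proportions of each class in the group, e.g. [3, 2] -> [0.6, 0.4].
    p = np.array(class_counts) / np.sum(class_counts)
    # H = -sum(p_i * log2(p_i))
    return -np.sum(p * np.log2(p))

print(entropy([3, 2]))  # Split A: 3 red, 2 purple -> ~0.971
print(entropy([4, 1]))  # Split B: 4 red, 1 purple -> ~0.722 (lower impurity, the winner)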
The example below shows how the impurity would be calculated for a candidate split when training a decision tree.

Example of entropy calculation for a hypothetical split. Image by author.

Note that there are a couple of steps not evident from the formula alone. Since the decision boundary (split) creates two groups (left and right), each requires its own entropy calculation. And because not all splits are created equal, we use a weighted sum to give the appropriate importance to each side: groups with more examples are more representative of the data, so they get a higher weight in the impurity calculation.

Advanced readers might be wondering why we need the logarithm when squaring the proportion gives a similar effect. That version of the impurity measure exists; it's called the Gini Impurity. Here is a graph comparing the impurity of both versions at different proportions.

Graph comparing Entropy against Gini Impurity. X-axis is the proportion, Y-axis is the impurity. Image by author.

Gini Impurity uses squared probabilities, which gives a smoother curve and often prefers larger, more dominant classes. Entropy has a logarithmic curve, so it reacts more strongly to changes in smaller class probabilities (notice the earlier bump), potentially leading to different, sometimes more balanced, splits.

Now, let's look at the Variance formula for decision trees in regression tasks. This time, since there are no classes, we won't base our impurity on proportions but rather on how much the values in a group differ from the group's mean.

The same idea applies when looking at a potential split for regression, this time on house prices: as in the classification task, the split generates two groups, and a calculation on each group is combined into the total impurity, as in the sketch below.
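Since the worked numbers for this regression example only survive as an image, here is the same style of computation in code. The prices below are illustrative values I've chosen within the 150-330 range the article mentions later, not the article's own table:

import numpy as np

# A candidate split such as "size <= 95" divides the houses into two groups.
left_prices = np.array([150.0, 180.0, 200.0])   # hypothetical smaller houses
right_prices = np.array([280.0, 330.0])         # hypothetical larger houses

def weighted_variance(left_y, right_y):
    # Variance of each side, weighted by its share of the samples.
    n = len(left_y) + len(right_y)
    return (len(left_y) / n) * np.var(left_y) + (len(right_y) / n) * np.var(right_y)

before = np.var(np.concatenate([left_prices, right_prices]))  # impurity before splitting
after = weighted_variance(left_prices, right_prices)          # impurity after splitting
print(before, after)  # ~4456.0 vs ~503.3 -- the split groups similar prices together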
But how do you prevent overfitting?
You might have noticed that if you keep making splits to minimise impurity, you will end up splitting the data so much that you isolate every single data point. This creates a massive tree that branches into leaf nodes that each represent just one sample from the training data, defeating the whole purpose of the decision tree: it wouldn't be able to generalise to unseen data. To prevent this sort of overfitting, we can introduce stopping criteria that stop the decision tree from growing under certain conditions.

Stopping Criteria
The stopping criteria are a set of rules that prevent the tree from growing too large and overfitting the data. There are many criteria that can be used in decision trees, and most can be used together during training. The following is a non-exhaustive list.

- Maximum Depth Reached: The tree stops growing when it reaches a set maximum depth (number of splits from root to leaf); this prevents overly complex trees.
- Minimum Samples to Split: A node must have at least a certain number of samples to be split further. This prevents splitting small, unreliable groups.
- Minimum Gain in Information: A split must reduce impurity by at least a certain amount to be accepted; otherwise, that branch finalises its splitting by becoming a leaf node.
- Maximum Number of Nodes/Leaves: Limits how many total nodes or leaves the tree can have. This prevents excessive growth and memory usage.

The values we set these rules to are hyper-parameters, meaning they are values we declare before training that dictate the way the decision tree learns. Modifying these values will have an impact on the performance of the decision tree, so they must be tuned to achieve the desired performance.

Hyper-parameter tuning is outside the scope of this article (I will publish an article on it soon), but if you're eager to apply it to decision trees, you can read the article by GeeksForGeeks on how to perform hyper-parameter tuning on decision trees using Python.

Having covered how decision trees work and learn, we can now look at some worked examples with code.

Worked example for Classification
Let's start with the age-old question: when should I bring an umbrella? Consider the following data of days where an umbrella was successfully brought or not.

Table of data for the task of deciding when to take an umbrella. Image by author.

A quick glance at this data reveals that it's always good to bring an umbrella when it's raining, except when there are extreme wind speeds that will make you take off into infinity.

You can choose any classification impurity measure, but for now let's use Gini Impurity.

Formula of Gini Impurity. Image by author.

The first thing we need in our code is the ability to read the data. We are using words to describe the weather conditions, but to operate on these we need to turn them into numbers. This is where we use the function load_data_from_csv.

import numpy as np
import csv

def load_data_from_csv(path):
    """
    Required to turn our worded data into usable numbers for the decision tree.
    """
    # X is the feature matrix (the upper-case letter suggests a matrix, not a vector)
    # y is the target variable vector (what we want to predict)
    X, y = [], []
    condition_mapping = {"sunny": 0, "cloudy": 1, "rainy": 2}
    with open(path, newline='') as csvfile:
        reader = csv.DictReader(csvfile)
        for row in reader:
            X.append([condition_mapping[row['conditions']], float(row['wind'])])
            y.append(1 if row['umbrella'] == "yes" else 0)
    return np.array(X), np.array(y)

Now we need a representation of our tree, as well as our measures of impurity. The TreeNode class represents the decision tree. When the tree expands into two different branches (child nodes), these are also subtrees and therefore also instances of the class TreeNode.

The weighted_impurity function does what we explained earlier in the impurity-measure deep dive: it weights each side's impurity so that the side with more samples gets more importance than the less populated side.

class TreeNode:
    def __init__(self, feature=None, threshold=None, left=None, right=None, value=None):
        self.feature = feature      # Feature to split on
        self.threshold = threshold  # Threshold for the split
        self.left = left            # Left child node
        self.right = right          # Right child node
        self.value = value          # Value if this is a leaf node

def gini_impurity(y):
    classes, counts = np.unique(y, return_counts=True)
    probabilities = counts / len(y)
    return 1 - np.sum(probabilities ** 2)

def weighted_impurity(left_y, right_y, impurity_function):
    n = len(left_y) + len(right_y)
    left_weight = len(left_y) / n
    right_weight = len(right_y) / n
    return (
        left_weight * impurity_function(left_y)
        + right_weight * impurity_function(right_y)
    )

We can now represent a tree and find the impurity of a split, but we still need to find the best split.
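(Aside: for reference, the loader above expects a CSV with 'conditions', 'wind' and 'umbrella' columns. The column names come from the code itself, but the rows and file name below are a hypothetical reconstruction of the table, not the article's data:)

# umbrella.csv -- hypothetical contents:
#   conditions,wind,umbrella
#   sunny,10,no
#   cloudy,25,no
#   rainy,15,yes
#   rainy,50,no
X, y = load_data_from_csv("umbrella.csv")
print(X)  # [[ 0. 10.] [ 1. 25.] [ 2. 15.] [ 2. 50.]] -- conditions encoded, wind as float
print(y)  # [0 0 1 0] -- 1 where an umbrella was brought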
For this, we must iterate over all features and values to find the threshold that yields the lowest impurity and therefore gives the best split.

The first for-loop iterates over the features, and the second for-loop iterates over the values of that feature. The midpoint between two consecutive values is chosen as a threshold, and the impurity is calculated. This is repeated for all values and all features. When the best threshold for the best feature is found, this becomes the next split.

FEATURES = {0: "conditions", 1: "wind_speed"}

def find_best_split(X, y, impurity_function):
    best_feature = None
    best_threshold = None
    best_impurity = float('inf')
    # iterate over features
    for feature_idx in range(X.shape[1]):
        sorted_indices = np.argsort(X[:, feature_idx])
        X_sorted = X[sorted_indices, feature_idx]
        y_sorted = y[sorted_indices]
        # iterate over values
        for i in range(1, len(X_sorted)):
            if X_sorted[i] == X_sorted[i - 1]:
                continue
            threshold = (X_sorted[i] + X_sorted[i - 1]) / 2
            left_y = y_sorted[:i]
            right_y = y_sorted[i:]
            split_impurity = weighted_impurity(left_y, right_y, impurity_function)
            if split_impurity < best_impurity:
                best_feature = feature_idx
                best_threshold = threshold
                best_impurity = split_impurity
    if best_feature is not None:
        print(f"Best Feature: {FEATURES[best_feature]}")
        print(f"Best Threshold: {best_threshold}")
        print(f"Best Impurity: {best_impurity}\n")
    return best_feature, best_threshold, best_impurity

To build the tree, the process of finding the best split is repeated until one of the stopping criteria is met. For each split, a node is added to the tree. Here we also choose the stopping criteria, in this case:
- Maximum depth = 5
- Minimum samples to split = 2
- Minimum impurity decrease = 1e-7

def build_tree(X, y, impurity_function, depth=0, max_depth=5, min_samples_split=2, min_impurity_decrease=1e-7):
    if len(y) < min_samples_split or depth >= max_depth or impurity_function(y) < min_impurity_decrease:
        leaf_value = np.bincount(y).argmax()
        return TreeNode(value=leaf_value)
    best_feature, best_threshold, best_impurity = find_best_split(X, y, impurity_function)
    if best_feature is None:
        leaf_value = np.bincount(y).argmax()
        return TreeNode(value=leaf_value)
    left_indices = X[:, best_feature] <= best_threshold
    right_indices = X[:, best_feature] > best_threshold
    left_subtree = build_tree(X[left_indices], y[left_indices], impurity_function, depth + 1,
                              max_depth, min_samples_split, min_impurity_decrease)
    right_subtree = build_tree(X[right_indices], y[right_indices], impurity_function, depth + 1,
                               max_depth, min_samples_split, min_impurity_decrease)
    return TreeNode(feature=best_feature, threshold=best_threshold, left=left_subtree, right=right_subtree)

With the chosen stopping criteria, the decision tree finishes after two iterations. These are the best splits it finds:

# 1st Iteration
Best Feature: conditions
Best Threshold: 1.5
Best Impurity: 0.1875

# 2nd Iteration
Best Feature: wind_speed
Best Threshold: 40.0
Best Impurity: 0.0

It immediately found that the best first split is to only consider rainy conditions when taking an umbrella. This is why the best feature is the conditions, and the best threshold is conditions above 1.5, i.e. only rainy days (rainy = 2, while sunny = 0 and cloudy = 1). This yielded the lowest impurity, at 0.1875.

The next best decision is to stop taking an umbrella at high wind speeds, in this case above 40 km/h. This finished the learning process, as it achieved an impurity of 0.

Decision tree on when to take an umbrella. Image by author.
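The article builds the tree but doesn't show how to query it; a small traversal helper completes the picture. Note that predict is my addition rather than part of the original code, and the CSV file name is assumed:

def predict(node, x):
    # Descend from the root, taking the left branch when the feature
    # value is at or below the threshold, until a leaf is reached.
    while node.value is None:
        node = node.left if x[node.feature] <= node.threshold else node.right
    return node.value

X, y = load_data_from_csv("umbrella.csv")   # assumed file name
tree = build_tree(X, y, gini_impurity)
print(predict(tree, np.array([2, 20.0])))   # rainy, 20 km/h wind -> 1 (take the umbrella)
print(predict(tree, np.array([2, 50.0])))   # rainy, 50 km/h wind -> 0 (too windy)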
Worked example for Regression
Following on from the earlier example of house price predictions, let's code the regression decision tree using an extended version of that data.

Table of data for house prices. Image by author.

We will slightly modify the data-loading function to ingest the housing data.

def load_data_from_csv(filename):
    X, y = [], []
    with open(filename, newline='') as csvfile:
        reader = csv.DictReader(csvfile)
        for row in reader:
            X.append([float(row['size']), float(row['num_rooms'])])
            y.append(float(row['price']))
    return np.array(X), np.array(y)

# Remap the feature names printed by find_best_split for this dataset.
FEATURES = {0: "size", 1: "num_rooms"}

The TreeNode class remains exactly the same, but since we are now looking at a regression task instead of classification, the impurity measure is different on this occasion: Variance.

def variance(y):
    if len(y) == 0:
        return 0
    return np.var(y)

def weighted_variance(left_y, right_y):
    n = len(left_y) + len(right_y)
    return (len(left_y) / n) * variance(left_y) + (len(right_y) / n) * variance(right_y)

We'll still use the same algorithm to find the best split for regression tasks.

The difference between building a regression tree and a classification tree lies in the leaf nodes, and it's really subtle. When you reach a leaf node in a classification tree, your output should be one of the possible classes. This is why we use np.bincount(y).argmax(), as it returns the class that appears most often in that final group: when we reach the end of the tree, where we must make a prediction, and we are left with an impure group of several classes, we choose the most frequent one.

This is different in regression trees, because the output is a continuous number. So, instead of taking the most frequent class, we take the mean of all the numbers in the remaining group. Hence the use of np.mean(y).

def build_regression_tree(X, y, impurity_function, depth=0, max_depth=5, min_samples_split=2, min_variance_decrease=1e-7):
    if len(y) < min_samples_split or depth >= max_depth or impurity_function(y) < min_variance_decrease:
        return TreeNode(value=np.mean(y))
    best_feature, best_threshold, best_variance = find_best_split(X, y, impurity_function)
    if best_feature is None:
        return TreeNode(value=np.mean(y))
    left_indices = X[:, best_feature] <= best_threshold
    right_indices = X[:, best_feature] > best_threshold
    left_subtree = build_regression_tree(X[left_indices], y[left_indices], impurity_function, depth + 1,
                                         max_depth, min_samples_split, min_variance_decrease)
    right_subtree = build_regression_tree(X[right_indices], y[right_indices], impurity_function, depth + 1,
                                          max_depth, min_samples_split, min_variance_decrease)
    return TreeNode(feature=best_feature, threshold=best_threshold, left=left_subtree, right=right_subtree)

This time we have three iterations before we meet the stopping criteria:

Best Feature: size
Best Threshold: 95.0
Best Impurity: 768.0

Best Feature: size
Best Threshold: 75.0
Best Impurity: 213.33333333333334

Best Feature: size
Best Threshold: 115.0
Best Impurity: 200.0

Note that in regression tasks, the impurity measure is heavily influenced by the range of the target variable. In this example, house prices range from 150 (lowest) to 330 (highest). A dataset with a larger range will naturally have a higher impurity value compared to one with a smaller range, simply because variance scales with the spread of the data. However, this does not mean that a dataset with a higher impurity produces better splits than one with a lower impurity. Since they represent different distributions, each dataset should be evaluated independently, based on how well the feature splits reduce impurity relative to its own scale.
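As with the classification example, here's a hedged usage sketch, reusing the predict helper from earlier; the file name and query values are hypothetical:

X, y = load_data_from_csv("houses.csv")   # assumed file: size, num_rooms, price columns
tree = build_regression_tree(X, y, variance)
# A 100 m^2 house with 3 rooms: the leaf returns the mean price of the
# training houses that fall in the same region of the feature space.
print(predict(tree, np.array([100.0, 3.0])))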
An interesting finding is that the number of rooms is never chosen as a good feature to base the splitting on. If you look at the data again, you will notice a high correlation between the price and the size of the house; they almost seem to increase in lockstep. This is why the size yielded the lowest impurity and why the size is chosen as the best feature in every iteration.

Decision tree on house prices. Image by author.

Conclusions
Decision trees are a fundamental component of powerful machine learning models like XGBoost, as they offer high predictive performance and excellent interpretability.

As seen in the examples, the ability to recursively split data based on the most informative features makes them highly effective at capturing complex patterns, while their structured decision-making process provides clear insights into model behaviour.

Unlike black-box models, decision trees allow us to understand why a prediction was made, which is crucial in domains like finance and healthcare. This balance of power, efficiency, and explainability makes decision trees and their boosted ensembles essential tools in modern machine learning.
  • Marvel Rivals' Clone Rumble Mode Will Let Players Live Out Their Wildest Mister Fantastic Fantasies
    www.ign.com
    NetEase Games has confirmed that its previously leaked Marvel Rivals Clone Rumble mode is officially on the way, with plans to drop it later this week.

The hero-shooter studio revealed plans and a trailer for the offshoot event today, pulling back the curtain on a gameplay option for those who occasionally need a break from the cutthroat chaos of quickplay and competitive modes. As you might have guessed from the name, Marvel Rivals' Clone Rumble mode removes hero limits by asking each team to choose a single hero to play as. That means the dream of seeing a 6v6 match between a full lobby of Mister Fantastics will be fully realized come launch on March 7, 2025.

Clone Rumble's debut trailer teases how players can experiment with new strategies. We know 12 Reed Richards will open the door for some hilarious clips, but the mode will also allow players to turn maps into a greenhouse with a team of Groots. You'll also be able to make Marvel Rivals look more like Call of Duty if your teams pick Punisher as their chosen clone.

Fans have taken to social media to discuss the other possibilities, too, including what it might look like to use Doctor Strange in the mode. Although Marvel Rivals' Sorcerer Supreme likely won't be able to completely pollute arenas with portals, there's no telling how Loki, Captain America, and Peni Parker matches will play out.

Elsewhere in today's Clone Rumble trailer is the reveal of a new Western costume for Black Widow, which may also come with what appears to be a cowboy-inspired MVP animation. Players will be able to earn the outfit for free as part of a board game event, Galacta's Cosmic Adventure. The activity will see Marvel Rivals players rolling dice to move a game piece across a board to collect more in-game rewards, including Units and other customization options.

It's unclear how long Clone Rumble and Galacta's Cosmic Adventure will be available to players after they launch this week. While we wait to learn more, you can read up on why a few Marvel Rivals players think Season 2 will focus on the Hellfire Gala. You can also learn more about the recent Season 1.5 update, which injected a handful of great Moon Knight voice lines and more reasons to be afraid of flying heroes.

Michael Cripe is a freelance contributor with IGN. He's best known for his work at sites like The Pitch, The Escapist, and OnlySP. Be sure to give him a follow on Bluesky (@mikecripe.bsky.social) and Twitter (@MikeCripe).
  • PlayStation Announces a New Beta Program to Let Players Test Upcoming PS5 and PC Games and More
    www.ign.com
    Sony has announced a brand-new Beta Program at PlayStation that aims to be a centralized place for players to register their interest in trying out a "range of future PlayStation betas," including testing participating PS5 and PC games.

As detailed on PlayStation.Blog, a single registration will be all players need to express interest in trying out participating PS5 and PC games, PS5 console features, PlayStation App features, and user experience features on PlayStation.com.

Registration will begin later today globally, and PlayStation recommends checking back regularly until the sign-ups are open.

The Beta Program at PlayStation is free to join, and you'll only need to sign up once to be able to express interest in any or all of the listed beta types. It's important to note you aren't guaranteed access to every beta, but your interest in trying them will be consistent.

To be part of this program, players will need a valid PlayStation Network account in good standing, they must live in a region where the Beta Program at PlayStation is available, and they must be of legal age in their region.

For more on the future of PlayStation, check out all the big announcements from February's PlayStation State of Play and the news that Dragon Age: The Veilguard is headlining the March 2025 lineup for PlayStation Plus just four months after it launched.

Have a tip for us? Want to discuss a possible story? Please send an email to newstips@ign.com.

Adam Bankhurst is a writer for IGN. You can follow him on X/Twitter @AdamBankhurst and on TikTok.