• The era of AI-powered search is completely upending marketing, and that is unacceptable! How can we accept algorithms taking control of consumers' purchasing decisions? Before, we had the chance to read several reviews, to compare, to reflect. Now everything is reduced to a single quick answer, and it is the big companies that profit, leaving the small ones in ruins. AI-powered search is not an advance; it is a regression that kills diversity and transparency. Marketers must fight to exist in this distorted world. Wake up, because your voice still counts!

    How AI-Powered Search Is Changing Marketing, and What You Can Do About It
    Imagine you need a new washer/dryer. In the past you’d read through several review sites, but now an AI-powered search gives you a single, convenient answer in seconds. AI-powered search is fundamentally altering how customers find information, and that shift is forcing marketers to rethink how they reach buyers.
  • How jam jars explain Apple’s success

    We are told to customize, expand, and provide more options, but that might be a silent killer for our conversion rate. Using behavioral psychology and modern product design, this piece explains why brands like Apple use fewer, smarter choices to convert better.

    Image generated using ChatGPT

    Jam-packed decisions

    Imagine standing in a supermarket aisle in front of the jam section. How do you decide which jam to buy? You could go for your usual jam, or maybe this is your first time buying jam. Either way, a choice has to be made. Or does it? You may have seen the vast number of choices, gotten overwhelmed, and walked away. The same scenario is reflected in the findings of a 2000 study by Iyengar and Lepper that explored how the number of options affects decision-making.

    Iyengar and Lepper set up two scenarios: in the first, customers in a supermarket were offered 24 jams for a free tasting; in the second, they were offered only 6. One would expect the first scenario to see more sales. After all, more variety means a happier customer. However:

    Image created using Canva

    With 24 jams, 60% of customers stopped by for a tasting, but only 3% of them ended up making a purchase. With 6 jams, only 40% of customers stopped by, yet 30% of them made a purchase.

    The implication was evident: while one may assume that more choice is better, decision-makers who actually face it prefer fewer options. This phenomenon is known as the Paradox of Choice: more choice leads to less satisfaction, because one gets overwhelmed. This analysis paralysis results from humans being cognitive misers; decisions that require deeper thinking feel exhausting and come at a cognitive cost. In such scenarios, we tend to make no choice at all, or to fall back on a default option.
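The study's headline numbers are easier to compare once they are turned into overall conversion rates (share who stopped × share of stoppers who bought), a quick back-of-envelope check:

```python
# Back-of-envelope conversion rates from the Iyengar & Lepper jam study.
# Overall conversion = share who stopped * share of stoppers who bought.
large_display = 0.60 * 0.03   # 24 jams: 60% stopped, 3% of those bought
small_display = 0.40 * 0.30   # 6 jams: 40% stopped, 30% of those bought

print(f"24-jam display: {large_display:.1%} of passers-by bought")   # 1.8%
print(f"6-jam display:  {small_display:.1%} of passers-by bought")   # 12.0%
print(f"The smaller display converted {small_display / large_display:.1f}x better")
```

So even though the larger display attracted more browsers, the smaller one produced several times more buyers per passer-by.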
    Even after a decision has been made, regret, or the nagging question of whether you made the ‘right’ choice, can linger.

    A sticky situation

    However, a 2010 meta-analysis by Benjamin Scheibehenne and colleagues was unable to replicate the original findings. Scheibehenne questioned whether the real issue was choice overload or information overload. Other researchers have argued that it is the lack of meaningful choice that hurts satisfaction. Additionally, Barry Schwartz, the renowned psychologist and author of ‘The Paradox of Choice: Why Less Is More,’ later suggested that the paradox diminishes when people know the options well and when the choices are presented well. Does that mean the paradox of choice was an overhyped notion? I conducted a mini-study to test this hypothesis.

    From shelves to spreadsheets: testing the jam jar theory

    I created a simple scatterplot in R using a publicly available dataset from the Brazilian e-commerce site Olist, Brazil’s largest department store on marketplaces. After delivery, customers are asked to fill out a satisfaction survey with a rating or comment option. I analysed the relationship between the number of distinct products in a category (choices) and the average customer review score (satisfaction).

    Scatterplot generated in R using the Olist dataset

    The nearly horizontal regression line in the plot suggests that more choice does not lead to more satisfaction. Categories with fewer than 200 products tend to have average review scores between 4.0 and 4.3, whereas categories with more than 1,000 products do not score higher, with some even falling below 4.0. More choice does not equal more satisfaction, and may even reduce it. These findings are consistent with the Paradox of Choice, and the dataset helps bring the theory into real-world commerce.
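The per-category aggregation behind that scatterplot can be sketched as follows. This is a minimal illustration using toy data in place of the real Olist CSVs; the column names follow the public Olist schema but should be treated as assumptions here:

```python
import pandas as pd

# Toy stand-in for the joined Olist orders/reviews data; in the real dataset
# these columns come from olist_products_dataset and olist_order_reviews_dataset.
orders = pd.DataFrame({
    "product_category_name": ["toys", "toys", "toys", "furniture", "furniture"],
    "product_id": ["p1", "p2", "p3", "p4", "p4"],
    "review_score": [5, 4, 4, 3, 5],
})

# One row per category: distinct products (choice) vs. mean review (satisfaction).
per_category = orders.groupby("product_category_name").agg(
    n_products=("product_id", "nunique"),
    avg_review=("review_score", "mean"),
).reset_index()

print(per_category)
```

Plotting `n_products` against `avg_review` (with a fitted regression line) then shows whether more choice tracks with higher satisfaction, which is exactly the relationship the article examines.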
    A curation of fewer, well-presented, and differentiated options could lead to more customer satisfaction.

    Image created using Canva

    The plot also suggests a more nuanced perspective: people want some choice, because it gives them autonomy, but beyond a certain point excessive choice overwhelms rather than empowers, leaving people dissatisfied. Many product strategies reflect this insight: the goal is to inspire confident decision-making rather than to limit freedom. A powerful example of this shift in thinking comes from Apple’s history.

    Simple tastes, sweeter decisions

    Image source: AppleInsider

    It was 1997, and Steve Jobs had just made his return to Apple. The company at the time offered 40 different products; however, its sales were declining. Jobs asked one question that became the company’s mantra: “What are the four products we should be building?” The following year, Apple returned to profitability after introducing the iMac G3. While that success can be attributed to the new product line and increased efficiency, the slimmed-down lineup undeniably simplified the decision-making process for consumers. To this day, Apple continues to implement this strategy with a few SKUs and confident defaults. Apple does not just sell premium products; it sells a premium decision-making experience, by reducing friction in decision-making for the consumer.

    Furthermore, a 2015 study analyzing scenarios in which fewer options led to increased sales identified the following moderating factors in buying decisions:

    Time pressure: easier and quicker choices led to more sales.
    Complexity of options: the easier it was to understand what a product was, the better the outcome.
    Clarity of preference: how easy it was to compare alternatives, and how clear one’s own preferences were.
    Motivation to optimize: whether the consumer wanted to put in the effort to find the ‘best’ option.

    Picking the right spread

    While the extent of the validity of the Paradox of Choice is up for debate, its impact cannot be denied. It is still a helpful model for driving sales and boosting customer satisfaction. So, how can you use it as part of your business’s strategy? Remember, what people want isn’t 50 good choices; they want one confident, easy-to-understand decision that they believe they will not regret.

    Here are some common mistakes that confuse consumers, and how you can apply the jam jar strategy to curate choices instead:

    Image created using Canva

    1. Too many choices lead to decision fatigue. Offering many SKU options usually overwhelms customers. Instead, curate 2–3 strong options that cover the majority of their needs.
    2. Depending on users to use filters and specifications. When users have to compare specifications themselves, they usually end up doing nothing. Instead, replace filters with clear labels like “Best for beginners” or “Best for oily skin.”
    3. Leaving users to make comparisons by themselves. Too many options overwhelm users. Instead, offer default options to show what you recommend; this instills confidence when they make the final decision.
    4. Assuming more transparency means more trust. Information overload rarely leads to conversions. Instead, create a thoughtful flow that guides users to the right choices.
    5. Assuming users aim to optimize. Expecting users to weigh every detail before deciding is not rooted in reality; in most cases, they go with their gut. Instead, highlight emotional outcomes, benefits, and uses rather than raw numbers.
    6. Not onboarding users. Hoping that users will easily navigate a sea of products without guidance is unrealistic. Instead, use onboarding tools like starter kits, quizzes, or bundles that act as starting points.
    7. Variety for the sake of variety. Users crave clarity more than they crave variety. Instead, focus on simplicity when it comes to differentiation.

    And lastly, remember that while the paradox of choice is a helpful tool in your business strategy arsenal, more choice is not inherently bad; it is the lack of structure in the decision-making process that is the problem. Clear framing will always make decision-making a seamless experience for both your consumers and your business.

    How jam jars explain Apple’s success was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • Dev snapshot: Godot 4.5 dev 5

    Replicube, a game by Walaber Entertainment LLC

    By: Thaddeus Crews, 2 June 2025 (Pre-release)

    Brrr… Do you feel that? That’s the cold front of the feature freeze just around the corner. It’s not upon us just yet, but this is likely to be our final development snapshot of the 4.5 release cycle. As we enter the home stretch of new features, bugs are naturally going to follow suit, meaning bug reports and feedback will be especially important for a smooth beta timeframe.

    Jump to the Downloads section and give it a spin right now, or continue reading to learn more about improvements in this release. You can also try the Web editor or the Android editor for this release. If you are interested in the latter, please request to join our testing group to get access to pre-release builds.

    The cover illustration is from Replicube, a programming puzzle game where you write code to recreate voxelized objects. It is developed by Walaber Entertainment LLC. You can get the game on Steam.

    Highlights

    In case you missed them, see the 4.5 dev 1, 4.5 dev 2, 4.5 dev 3, and 4.5 dev 4 release notes for an overview of some key features which were already in those snapshots, and are therefore still available for testing in dev 5.

    Native visionOS support

    Normally, our featured highlights in these development blogs come from long-time contributors. This makes sense, of course, as it’s generally those users who have the familiarity necessary for major changes or additions. That’s why it might surprise you to hear that visionOS support comes to us from Ricardo Sanchez-Saez, whose pull request GH-105628 is his very first contribution to the engine! It might not surprise you to hear that Ricardo is part of the visionOS engineering team at Apple, which certainly helps get his foot in the door, but that still makes visionOS the first officially supported platform integration in about a decade. For those unfamiliar, visionOS is Apple’s XR environment.
    We’re no strangers to XR as a concept, but XR platforms are as distinct from one another as traditional platforms. visionOS users have expressed a strong interest in integrating with our ever-growing XR community, and now we can make that happen. See you all in the next XR Game Jam!

    GDScript: Abstract classes

    While the Godot Engine frequently makes use of abstract classes (classes that cannot be directly instantiated), this was only ever supported internally. Thanks to the efforts of Aaron Franke, this paradigm is now available to GDScript users. If users want to introduce their own abstract class, they merely need to declare it via the new abstract keyword:

    abstract class_name MyAbstract extends Node
    The purpose of an abstract class is to create a baseline for other classes to derive from:

    class_name ExtendsMyAbstract extends MyAbstract
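Putting the two snippets together, a minimal sketch of how the feature behaves; the file layout and the commented-out failure case are assumptions based on the description above, not taken from the release notes:

```gdscript
# my_abstract.gd - declared abstract, so it cannot be instantiated directly
abstract class_name MyAbstract extends Node

# extends_my_abstract.gd - a concrete subclass, which can be instantiated
class_name ExtendsMyAbstract extends MyAbstract

# elsewhere in a script:
var ok := ExtendsMyAbstract.new()   # fine: concrete subclass
# var bad := MyAbstract.new()       # rejected: abstract classes cannot be instantiated
```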
    Shader baker

    From the technical gurus behind implementing ubershaders, Darío Samo and Pedro J. Estébanez bring us another miracle of rendering via GH-102552: shader baker exporting. This is an optional feature that can be enabled at export time to speed up shader compilation massively. It works with ubershaders automatically, without any work from the user. Using shader baking is strongly recommended when targeting Apple devices or D3D12, since it makes the biggest difference there! (Before/after compilation-time comparisons are shown in the original post.)

    However, it comes with tradeoffs:

    Export time will be much longer.
    Build size will be much larger, since the baked shaders can take up a lot of space.
    We have removed several MoltenVK bug workarounds from the Forward+ shader, so we no longer guarantee support for the Forward+ renderer on Intel Macs. If you are targeting Intel Macs, you should use the Mobile or Compatibility renderers.
    Baking for Vulkan can be done from any device, but baking for D3D12 must be done from a Windows device, and baking for Apple .metallib requires a Metal compiler.

    Web: WebAssembly SIMD support

    As you might recall, Godot 4.0 initially released under the assumption that multi-threaded web support would become the standard, and only supported that format for web builds. This assumption unfortunately proved to be wishful thinking, and was reverted in 4.3 by allowing for single-threaded builds once more. However, this doesn’t mean that single-threaded environments are inherently incapable of parallel processing; it just requires alternative implementations. One such implementation, SIMD, is a perfect candidate thanks to its support across all major browsers.
    To that end, web wizard Adam Scott has integrated SIMD into our web builds by default.

    Inline color pickers

    While it has always been possible to see what color is assigned to an exported variable in the inspector, some users have expressed a keen interest in having this functionality within the script editor itself. That way, you can see what color a variable represents without it needing to be exposed, and tell at a glance what color a name or code corresponds to. Koliur Rahman has blessed us with this quality-of-life goodness, adding an inline color picker in GH-105724. Now, no matter where a color is declared, users can immediately and intuitively see what it actually represents, in a non-intrusive manner.

    Rendering goodies

    The renderer got a fair amount of love this snapshot; not from any one PR, but rather from a multitude of community members bringing some long-awaited features to light. Raymond DiDonato helped SMAA 1x make its transition from addon to fully-fledged engine feature. Capry brings bent normal maps to further enhance specular occlusion and indirect lighting. Our very own Clay John converted our Compatibility backend to use a fragment shader copy instead of a blit copy, working around common sample rate issues on mobile devices.
More technical information on these rendering changes can be found in their associated PRs.SMAA comparison:OffOnBent normal map comparison:BeforeAfterAnd more!There are too many exciting changes to list them all here, but here’s a curated selection:Animation: Add alphabetical sorting to Animation Player.Animation: Add animation filtering to animation editor.Audio: Implement seek operation for Theora video files, improve multi-channel audio resampling.Core: Add --scene command line argument.Core: Overhaul resource duplication.Core: Use Grisu2 algorithm in String::num_scientific to fix serializing.Editor: Add “Quick Load” button to EditorResourcePicker.Editor: Add PROPERTY_HINT_INPUT_NAME for use with @export_custom to allow using input actions.Editor: Add named EditorScripts to the command palette.GUI: Add file sort to FileDialog.I18n: Add translation preview in editor.Import: Add Channel Remap settings to ResourceImporterTexture.Physics: Improve performance with non-monitoring areas when using Jolt Physics.Porting: Android: Add export option for custom theme attributes.Porting: Android: Add support for 16 KB page sizes, update to NDK r28b.Porting: Android: Remove the gradle_build/compress_native_libraries export option.Porting: Web: Use actual PThread pool size for get_default_thread_pool_size.Porting: Windows/macOS/Linux: Use SSE 4.2 as a baseline when compiling Godot.Rendering: Add new StandardMaterial properties to allow users to control FPS-style objects.Rendering: FTI - Optimize SceneTree traversal.Changelog109 contributors submitted 252 fixes for this release. See our interactive changelog for the complete list of changes since the previous 4.5-dev4 snapshot.This release is built from commit 64b09905c.DownloadsGodot is downloading...Godot exists thanks to donations from people like you. 
Help us continue our work:Make a DonationStandard build includes support for GDScript and GDExtension..NET buildincludes support for C#, as well as GDScript and GDExtension.While engine maintainers try their best to ensure that each preview snapshot and release candidate is stable, this is by definition a pre-release piece of software. Be sure to make frequent backups, or use a version control system such as Git, to preserve your projects in case of corruption or data loss.Known issuesWindows executableshave been signed with an expired certificate. You may see warnings from Windows Defender’s SmartScreen when running this version, or outright be prevented from running the executables with a double-click. Running Godot from the command line can circumvent this. We will soon have a renewed certificate which will be used for future builds.With every release, we accept that there are going to be various issues, which have already been reported but haven’t been fixed yet. See the GitHub issue tracker for a complete list of known bugs.Bug reportsAs a tester, we encourage you to open bug reports if you experience issues with this release. Please check the existing issues on GitHub first, using the search function with relevant keywords, to ensure that the bug you experience is not already known.In particular, any change that would cause a regression in your projects is very important to report.SupportGodot is a non-profit, open source game engine developed by hundreds of contributors on their free time, as well as a handful of part and full-time developers hired thanks to generous donations from the Godot community. A big thank you to everyone who has contributed their time or their financial support to the project!If you’d like to support the project financially and help us secure our future hires, you can do so using the Godot Development Fund.Donate now
    #dev #snapshot #godot
• Dev snapshot: Godot 4.5 dev 5
    GODOTENGINE.ORG
Cover: Replicube, a game by Walaber Entertainment LLC
By: Thaddeus Crews · 2 June 2025 · Pre-release

Brrr… Do you feel that? That’s the cold front of the feature freeze just around the corner. It’s not upon us just yet, but this is likely to be our final development snapshot of the 4.5 release cycle. As we enter the home stretch of new features, bugs are naturally going to follow suit, so bug reports and feedback will be especially important for a smooth beta timeframe.

Jump to the Downloads section and give it a spin right now, or continue reading to learn more about the improvements in this release. You can also try the Web editor or the Android editor for this release. If you are interested in the latter, please request to join our testing group to get access to pre-release builds.

The cover illustration is from Replicube, a programming puzzle game where you write code to recreate voxelized objects. It is developed by Walaber Entertainment LLC (Bluesky, Twitter). You can get the game on Steam.

Highlights

In case you missed them, see the 4.5 dev 1, 4.5 dev 2, 4.5 dev 3, and 4.5 dev 4 release notes for an overview of the key features that were already in those snapshots, and are therefore still available for testing in dev 5.

Native visionOS support

Normally, our featured highlights in these development blogs come from long-time contributors, which makes sense: it’s generally those users who have the familiarity necessary for major changes or additions. That’s why it might surprise you to hear that visionOS support comes to us from Ricardo Sanchez-Saez, whose pull request GH-105628 is his very first contribution to the engine! It might not surprise you to hear that Ricardo is part of the visionOS engineering team at Apple, which certainly helps get his foot in the door, but that still makes visionOS the first officially supported platform integration in about a decade.

For those unfamiliar, visionOS is Apple’s XR environment. We’re no strangers to XR as a concept (see our recent XR blog post highlighting the latest Godot XR Game Jam), but XR platforms are as distinct from one another as traditional platforms. visionOS users have expressed a strong interest in joining our ever-growing XR community, and now we can make that happen. See you all in the next XR Game Jam!

GDScript: Abstract classes

While the Godot Engine uses abstract classes (classes that cannot be directly instantiated) frequently, this was only ever supported internally. Thanks to the efforts of Aaron Franke, this paradigm is now available to GDScript users (GH-67777). If users want to introduce their own abstract class, they merely need to declare it via the new abstract keyword:

```gdscript
abstract class_name MyAbstract extends Node
```

The purpose of an abstract class is to create a baseline for other classes to derive from:

```gdscript
class_name ExtendsMyAbstract extends MyAbstract
```

Shader baker

From the technical gurus behind the ubershader implementation, Darío Samo and Pedro J. Estébanez bring us another rendering miracle via GH-102552: shader baker exporting. This is an optional feature that can be enabled at export time to massively speed up shader compilation, and it works with ubershaders automatically, without any work from the user.
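To make the abstract-class feature described above a bit more concrete, here is a minimal sketch. The class names are invented for illustration, and the exact error behavior when instantiating the abstract base is an assumption based on the feature description:

```gdscript
# enemy.gd: declared abstract, so it cannot be instantiated directly;
# it only provides a baseline for subclasses to build on.
abstract class_name Enemy extends Node

func describe() -> String:
	return "a generic enemy"

# goblin.gd (a separate script file): a concrete subclass that can be
# instantiated normally and inherits describe() from the base.
class_name Goblin extends Enemy
```

Under this sketch, `Goblin.new().describe()` works as usual, while attempting `Enemy.new()` should be reported as an error by the engine.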
Using shader baking is strongly recommended when targeting Apple devices or D3D12, since it makes the biggest difference there (over a 20× decrease in load times in the TPS demo)!

[Before/after load-time comparison images]

However, it comes with tradeoffs:

- Export time will be much longer.
- Build size will be much larger, since the baked shaders can take up a lot of space.
- We have removed several MoltenVK bug workarounds from the Forward+ shader, so we no longer guarantee support for the Forward+ renderer on Intel Macs. If you are targeting Intel Macs, you should use the Mobile or Compatibility renderers.
- Baking for Vulkan can be done from any device, but baking for D3D12 needs to be done from a Windows device, and baking for Apple .metallib requires a Metal compiler (macOS with Xcode / Command Line Tools installed).

Web: WebAssembly SIMD support

As you might recall, Godot 4.0 initially released under the assumption that multi-threaded web support would become the standard, and only supported that format for web builds. That assumption unfortunately proved to be wishful thinking, and it was reverted in 4.3 by allowing single-threaded builds once more. However, this doesn’t mean that single-threaded environments are inherently incapable of parallel processing; they just require alternative implementations. One such implementation, SIMD, is a perfect candidate thanks to its support across all major browsers. To that end, web wizard Adam Scott has integrated SIMD support into our web builds by default (GH-106319).

Inline color pickers

While it has always been possible to see what kind of variable is assigned to an exported color in the inspector, some users have expressed a keen interest in having this functionality within the script editor itself. It would let them see what color a variable represents without the variable needing to be exposed, and make it more intuitive at a glance what color a name or code corresponds to.
Koliur Rahman has blessed us with this quality-of-life goodness, adding an inline color picker (GH-105724). Now, no matter where a color is declared, users can immediately and intuitively see what it actually represents, in a non-intrusive manner.

Rendering goodies

The renderer got a fair amount of love this snapshot, not from any one PR but from a multitude of community members bringing long-awaited features to light. Raymond DiDonato helped SMAA 1x make its transition from addon to fully fledged engine feature (GH-102330). Capry brings bent normal maps to further enhance specular occlusion and indirect lighting (GH-89988). Our very own Clay John converted our Compatibility backend to use a fragment shader copy instead of a blit copy, working around common sample-rate issues on mobile devices (GH-106267). More technical information on these rendering changes can be found in their associated PRs.

[SMAA off/on comparison images; bent normal map before/after comparison images]

And more!

There are too many exciting changes to list them all here, but here’s a curated selection:

- Animation: Add alphabetical sorting to Animation Player (GH-103584).
- Animation: Add animation filtering to the animation editor (GH-103130).
- Audio: Implement seek operation for Theora video files, improve multi-channel audio resampling (GH-102360).
- Core: Add --scene command line argument (GH-105302).
- Core: Overhaul resource duplication (GH-100673).
- Core: Use the Grisu2 algorithm in String::num_scientific to fix serializing (GH-98750).
- Editor: Add “Quick Load” button to EditorResourcePicker (GH-104490).
- Editor: Add PROPERTY_HINT_INPUT_NAME for use with @export_custom to allow using input actions (GH-96611).
- Editor: Add named EditorScripts to the command palette (GH-99318).
- GUI: Add file sort to FileDialog (GH-105723).
- I18n: Add translation preview in the editor (GH-96921).
- Import: Add Channel Remap settings to ResourceImporterTexture (GH-99676).
- Physics: Improve performance with non-monitoring areas when using Jolt Physics (GH-106490).
- Porting: Android: Add export option for custom theme attributes (GH-106724).
- Porting: Android: Add support for 16 KB page sizes, update to NDK r28b (GH-106358).
- Porting: Android: Remove the gradle_build/compress_native_libraries export option (GH-106359).
- Porting: Web: Use actual PThread pool size for get_default_thread_pool_size() (GH-104458).
- Porting: Windows/macOS/Linux: Use SSE 4.2 as a baseline when compiling Godot (GH-59595).
- Rendering: Add new StandardMaterial properties to allow users to control FPS-style objects (hands, weapons, tools close to the camera) (GH-93142).
- Rendering: FTI - Optimize SceneTree traversal (GH-106244).

Changelog

109 contributors submitted 252 fixes for this release. See our interactive changelog for the complete list of changes since the previous 4.5-dev4 snapshot. This release is built from commit 64b09905c.

Downloads

Godot exists thanks to donations from people like you. Help us continue our work: Make a Donation.

The Standard build includes support for GDScript and GDExtension. The .NET build (marked as mono) includes support for C#, as well as GDScript and GDExtension.

While engine maintainers try their best to ensure that each preview snapshot and release candidate is stable, this is by definition a pre-release piece of software. Be sure to make frequent backups, or use a version control system such as Git, to preserve your projects in case of corruption or data loss.

Known issues

Windows executables (both the editor and export templates) have been signed with an expired certificate. You may see warnings from Windows Defender’s SmartScreen when running this version, or be outright prevented from running the executables with a double-click (GH-106373). Running Godot from the command line can circumvent this.
We will soon have a renewed certificate, which will be used for future builds.

With every release, we accept that there are going to be various issues that have already been reported but haven’t been fixed yet. See the GitHub issue tracker for a complete list of known bugs.

Bug reports

As a tester, we encourage you to open bug reports if you experience issues with this release. Please check the existing issues on GitHub first, using the search function with relevant keywords, to ensure that the bug you experience is not already known. In particular, any change that would cause a regression in your projects is very important to report (e.g. something that worked fine in previous 4.x releases but no longer works in this snapshot).

Support

Godot is a non-profit, open source game engine developed by hundreds of contributors in their free time, as well as a handful of part- and full-time developers hired thanks to generous donations from the Godot community. A big thank you to everyone who has contributed their time or financial support to the project! If you’d like to support the project financially and help us secure our future hires, you can do so using the Godot Development Fund.
  • DeepSeek’s latest AI model a ‘big step backwards’ for free speech

    WWW.ARTIFICIALINTELLIGENCE-NEWS.COM
    DeepSeek’s latest AI model, R1 0528, has raised eyebrows for a further regression on free speech and what users can discuss. “A big step backwards for free speech,” is how one prominent AI researcher summed it upAI researcher and popular online commentator ‘xlr8harder’ put the model through its paces, sharing findings that suggests DeepSeek is increasing its content restrictions.“DeepSeek R1 0528 is substantially less permissive on contentious free speech topics than previous DeepSeek releases,” the researcher noted. What remains unclear is whether this represents a deliberate shift in philosophy or simply a different technical approach to AI safety.What’s particularly fascinating about the new model is how inconsistently it applies its moral boundaries.In one free speech test, when asked to present arguments supporting dissident internment camps, the AI model flatly refused. But, in its refusal, it specifically mentioned China’s Xinjiang internment camps as examples of human rights abuses.Yet, when directly questioned about these same Xinjiang camps, the model suddenly delivered heavily censored responses. It seems this AI knows about certain controversial topics but has been instructed to play dumb when asked directly.“It’s interesting though not entirely surprising that it’s able to come up with the camps as an example of human rights abuses, but denies when asked directly,” the researcher observed.China criticism? 
Computer says no. This pattern becomes even more pronounced when examining the model’s handling of questions about the Chinese government.

Using established question sets designed to evaluate free speech in AI responses to politically sensitive topics, the researcher discovered that R1 0528 is “the most censored DeepSeek model yet for criticism of the Chinese government.” Where previous DeepSeek models might have offered measured responses to questions about Chinese politics or human rights issues, this new iteration frequently refuses to engage at all, a worrying development for those who value AI systems that can discuss global affairs openly.

There is, however, a silver lining to this cloud. Unlike closed systems from larger companies, DeepSeek’s models remain open-source with permissive licensing. “The model is open source with a permissive license, so the community can (and will) address this,” noted the researcher. This accessibility means the door remains open for developers to create versions that better balance safety with openness.

The situation reveals something quite sinister about how these systems are built: they can know about controversial events while being programmed to pretend they don’t, depending on how you phrase your question.

As AI continues its march into our daily lives, finding the right balance between reasonable safeguards and open discourse becomes increasingly crucial. Too restrictive, and these systems become useless for discussing important but divisive topics. Too permissive, and they risk enabling harmful content.

DeepSeek hasn’t publicly addressed the reasoning behind these increased restrictions and regression in free speech, but the AI community is already working on modifications. For now, chalk this up as another chapter in the ongoing tug-of-war between safety and openness in artificial intelligence.

(Photo by John Cameron)

Want to learn more about AI and big data from industry leaders?
Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.Explore other upcoming enterprise technology events and webinars powered by TechForge here.
  • Valve releases SteamOS 3.7.8 with new features for Steam Deck and Lenovo Legion Go S support


    Valve releases SteamOS 3.7.8 with new features for Steam Deck and Lenovo Legion Go S support

    Taras Buria

    Neowin
    @TarasBuria ·

    May 23, 2025 05:46 EDT

    Valve's SteamOS 3.7.8 is now out in the Stable Channel. This release adds many improvements and new fixes for the Steam Deck and some other important changes, such as better support for the ASUS ROG Ally, the original Lenovo Legion Go, and the upcoming Lenovo Legion Go S. Plus, users can now test SteamOS on other AMD-powered handheld consoles.
    Useful new features for the Steam Deck in SteamOS 3.7.8 include the ability to set the charge limit at 80% to prevent battery degradation. This change will help preserve battery life when the console is always connected or rarely has its battery fully depleted. Also, the operating system now supports frame limiting on screens (internal and external) with Variable Refresh Rate (VRR) support and the Proteus Byowave controller.
    As for other handheld consoles, SteamOS' recovery image now works with the Lenovo Legion Go S. If you want to try SteamOS on another handheld with an AMD processor, you can use the updated recovery image by following the instructions published on the official website.

    Other changes in SteamOS 3.7.8 include fixes for issues with hanging controllers and non-working Switch Pro Controller gyros, Bluetooth audio fixes and a new battery level indicator for supported Bluetooth devices, AMD P-State CPU frequency control support, fixes for surround sound, improved compatibility for certain displays (TCL FireTV and Dell VRR-capable monitors), and patches for performance regressions in No Rest for the Wicked.
    Finally, Valve updated SteamOS to a newer Arch Linux base, Linux kernel (6.11), the Mesa graphics driver base, Plasma for desktop mode (6.2.5), and more. You can find the complete changelog for SteamOS 3.7.8 in a post on the official Steam website.

  • Multiple Linear Regression Analysis

    Implementation of multiple linear regression on real data: Assumption checks, model evaluation, and interpretation of results using Python.
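    The post's own code is not included in this preview. As a minimal sketch of the core idea (synthetic data and a plain least-squares solve, purely illustrative; the post itself uses real data and fuller diagnostics):

    ```python
    import numpy as np

    # Synthetic data, purely illustrative: y = 3 + 2*x1 - x2 + noise
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))
    y = 3 + 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=100)

    # Design matrix with an intercept column, then ordinary least squares
    X_design = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X_design, y, rcond=None)
    print(np.round(beta, 1))  # recovers roughly [3., 2., -1.]
    ```

    A full analysis would additionally check assumptions (linearity, residual normality, homoscedasticity) and report fit statistics, as the post's title suggests.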
    The post Multiple Linear Regression Analysis appeared first on Towards Data Science.
  • What Statistics Can Tell Us About NBA Coaches

    Who gets hired as an NBA coach? How long does a typical coach last? And does their coaching background play any part in predicting success?

    This analysis was inspired by several key theories. First, there has been a common criticism among casual NBA fans that teams overly prefer hiring candidates with previous NBA head coaching experience.

    Consequently, this analysis aims to answer two related questions. First, is it true that NBA teams frequently re-hire candidates with previous head coaching experience? And second, is there any evidence that these candidates under-perform relative to other candidates?

    The second theory is that internal candidates (though infrequently hired) are often more successful than external candidates. This theory was derived from a pair of anecdotes. Two of the most successful coaches in NBA history, Gregg Popovich of San Antonio and Erik Spoelstra of Miami, were both internal hires. However, rigorous quantitative evidence is needed to test if this relationship holds over a larger sample.

    This analysis aims to explore these questions, and provide the code to reproduce the analysis in Python.

    The Data

    The code (contained in a Jupyter notebook) and dataset for this project are available on Github here. The analysis was performed using Python in Google Colaboratory.

    A prerequisite to this analysis was determining a way to measure coaching success quantitatively. I decided on a simple idea: the success of a coach would be best measured by the length of their tenure in that job. Tenure best represents the differing expectations that might be placed on a coach. A coach hired to a contending team would be expected to win games and generate deep playoff runs. A coach hired to a rebuilding team might be judged on the development of younger players and their ability to build a strong culture. If a coach meets expectations, the team will keep them around.

    Since there was no existing dataset with all of the required data, I collected the data myself from Wikipedia. I recorded every off-season coaching change from 1990 through 2021. Since the primary outcome variable is tenure, in-season coaching changes were excluded since these coaches often carried an “interim” tag—meaning they were intended to be temporary until a permanent replacement could be found.

    In addition, the following variables were collected:

    Variable    Definition
    Team        The NBA team the coach was hired for
    Year        The year the coach was hired
    Coach       The name of the coach
    Internal?   An indicator if the coach was internal or not, meaning they worked for the organization in some capacity immediately prior to being hired as head coach
    Type        The background of the coach. Categories are Previous HC, Previous AC, College, Player, Management, and Foreign.
    Years       The number of years a coach was employed in the role. For coaches fired mid-season, the value was counted as 0.5.

    First, the dataset is imported from its location in Google Drive. I also convert ‘Internal?’ into a dummy variable, replacing “Yes” with 1 and “No” with 0.

    from google.colab import drive
    drive.mount('/content/drive')
    import pandas as pd
    pd.set_option('display.max_columns', None)
    #Bring in the dataset (placeholder path within Drive)
    coach = pd.read_csv('/content/drive/MyDrive/nba_coaches.csv')
    #Replace "Yes"/"No" in 'Internal?' with 1/0
    coach['Internal?'] = coach['Internal?'].map({'Yes': 1, 'No': 0})
    coach

    This prints a preview of what the dataset looks like:

    In total, the dataset contains 221 coaching hires over this time. 

    Descriptive Statistics

    First, basic summary statistics are calculated and visualized to determine the backgrounds of NBA head coaches.

    #Create chart of coaching background
    import matplotlib.pyplot as plt

    #Count number of coaches per category
    counts = coach['Type'].value_counts()

    #Create chart with percentage labels on each bar
    plt.bar(counts.index, counts.values)
    plt.title('Backgrounds of NBA Head Coaches')
    plt.xticks(rotation=45)
    plt.ylabel('Number of Hires')
    for i, value in enumerate(counts):
        plt.text(i, value, str(round((value / len(coach)) * 100, 1)) + '%', ha='center', fontsize=9)
    plt.savefig('coach_backgrounds.png')

    print(str(round((coach['Internal?'].sum() / len(coach)) * 100, 1)) + " percent of coaches are internal.")

    Over half of coaching hires previously served as an NBA head coach, and nearly 90% had NBA coaching experience of some kind. This answers the first question posed—NBA teams show a strong preference for experienced head coaches. If you get hired once as an NBA coach, your odds of being hired again are much higher. Additionally, 13.6% of hires are internal, confirming that teams do not frequently hire from their own ranks.

    Second, I will explore the typical tenure of an NBA head coach. This can be visualized using a histogram.

    #Create histogram of tenure lengths
    plt.hist(coach['Years'], bins=20)
    plt.title('Distribution of NBA Coaching Tenure')
    plt.savefig('tenure_histogram.png')
    plt.show()

    #Calculate some stats with the data
    import numpy as np

    print(str(np.median(coach['Years'])) + " years is the median coaching tenure length.")
    print(str(round(((coach['Years'] <= 5).sum() / len(coach)) * 100, 1)) + " percent of coaches last five years or less.")
    print(str(round(((coach['Years'] <= 1).sum() / len(coach)) * 100, 1)) + " percent of coaches last a year or less.")

    Using tenure as an indicator of success, the data clearly shows that the large majority of coaches are unsuccessful. The median tenure is just 2.5 seasons. 18.1% of coaches last a single season or less, and barely 10% of coaches last more than 5 seasons.

    This can also be viewed as a survival analysis plot to see the drop-off at various points in time:

    #Survival analysis: share of coaches lasting at least X years
    import matplotlib.ticker as mtick

    lst = np.arange(0, coach['Years'].max() + 0.5, 0.5)
    surv = pd.DataFrame({'Years': lst, 'Surviving': np.nan})

    for i in range(len(surv)):
        surv.iloc[i, 1] = (coach['Years'] >= surv.iloc[i, 0]).sum() / len(coach)

    plt.step(surv['Years'], surv['Surviving'], where='post')
    plt.title('Survival Plot of NBA Coaching Tenure')
    plt.xlabel('Years of Tenure')
    plt.gca().yaxis.set_major_formatter(mtick.PercentFormatter(1.0))
    plt.savefig('survival_plot.png')
    plt.show()

    Lastly, a box plot can be generated to see if there are any obvious differences in tenure based on coaching type. Boxplots also display outliers for each group.

    #Create a boxplot of tenure by coaching background
    import seaborn as sns

    sns.boxplot(x=coach['Type'], y=coach['Years'])
    plt.title('Coaching Tenure by Background')
    plt.xticks(rotation=45)
    plt.savefig('tenure_boxplot.png')
    plt.show()

    There are some differences between the groups. Aside from management hires, previous head coaches have the longest average tenure at 3.3 years. However, since many of the groups have small sample sizes, we need to use more advanced techniques to test if the differences are statistically significant.

    Statistical Analysis

    First, to test if either Type or Internal has a statistically significant difference among the group means, we can use ANOVA:

    #ANOVA: test for differences in mean tenure across groups
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    am = ols('Years ~ C(Type) + C(Q("Internal?"))', data=coach).fit()
    anova_table = sm.stats.anova_lm(am, typ=2)
    print(anova_table)

    The results show high p-values and low F-stats, indicating no evidence of a statistically significant difference in means. Thus, the initial conclusion is that there is no evidence NBA teams are under-valuing internal candidates or over-valuing previous head coaching experience as initially hypothesized.

    However, there is a possible distortion when comparing group averages. NBA coaches are signed to contracts that typically run between three and five years. Teams typically have to pay out the remainder of the contract even if coaches are dismissed early for poor performance. A coach that lasts two years may be no worse than one that lasts three or four years—the difference could simply be attributable to the length and terms of the initial contract, which is in turn impacted by the desirability of the coach in the job market. Since coaches with prior experience are highly coveted, they may use that leverage to negotiate longer contracts and/or higher salaries, both of which could deter teams from terminating their employment too early.

    To account for this possibility, the outcome can be treated as binary rather than continuous. If a coach lasted more than 5 seasons, it is highly likely they completed at least their initial contract term and the team chose to extend or re-sign them. These coaches will be treated as successes, with those having a tenure of five years or less categorized as unsuccessful. To run this analysis, all coaching hires from 2020 and 2021 must be excluded, since they have not yet been able to eclipse 5 seasons.

    With a binary dependent variable, a logistic regression can be used to test if any of the variables predict coaching success. Internal and Type are both converted to dummy variables. Since previous head coaches represent the most common coaching hires, I set this as the “reference” category against which the others will be measured. Additionally, the dataset contains just one foreign-hired coach (David Blatt), so this observation is dropped from the analysis.

    #Logistic regression
    coach3 = coach[coach['Year'] < 2020].copy()

    #Binary outcome: success = tenure of more than five seasons
    coach3.loc[:, 'Success'] = np.where(coach3['Years'] > 5, 1, 0)

    #Dummy-code Type, with Previous HC as the reference category
    coach_type_dummies = pd.get_dummies(coach3['Type']).astype(int)
    coach_type_dummies = coach_type_dummies.drop('Previous HC', axis=1)
    coach3 = pd.concat([coach3, coach_type_dummies], axis=1)

    #Drop foreign category / David Blatt since n = 1
    coach3 = coach3.drop('Foreign', axis=1)
    coach3 = coach3.loc[coach3['Coach'] != "David Blatt"]

    print(len(coach3))

    x = coach3[['Internal?', 'Management', 'Player', 'Previous AC', 'College']]
    x = sm.add_constant(x)
    y = coach3['Success']
    logm = sm.Logit(y, x)
    logm_r = logm.fit()
    print(logm_r.summary())

    #Convert coefficients to odds ratios
    print(str(round(np.exp(logm_r.params['Internal?']), 3)) + " is the odds ratio for internal.") #Internal coefficient
    print(np.exp(logm_r.params['Management'])) #Management
    print(np.exp(logm_r.params['Player'])) #Player
    print(np.exp(logm_r.params['Previous AC'])) #Previous AC
    print(np.exp(logm_r.params['College'])) #College

    Consistent with ANOVA results, none of the variables are statistically significant under any conventional threshold. However, closer examination of the coefficients tells an interesting story.

    The beta coefficients represent the change in the log-odds of the outcome. Since this is unintuitive to interpret, the coefficients can be converted to an Odds Ratio as follows:

    Internal has an odds ratio of 0.23, indicating that internal candidates are 77% less likely to be successful compared to external candidates. Management has an odds ratio of 2.725, indicating these candidates are 172.5% more likely to be successful. The odds ratio for players is effectively zero, 0.696 for previous assistant coaches, and 0.5 for college coaches. Since three out of four coaching type dummy variables have an odds ratio under one, this indicates that only management hires were more likely to be successful than previous head coaches.
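    The conversion itself is just exponentiation of the fitted log-odds coefficient. A quick arithmetic check (the coefficient below is back-derived from the reported 0.23 ratio for illustration, not taken from the model output):

    ```python
    import math

    # Back-derived for illustration: a log-odds coefficient of about -1.47
    beta_internal = -1.47
    odds_ratio = math.exp(beta_internal)
    print(round(odds_ratio, 2))           # 0.23
    print(round((1 - odds_ratio) * 100))  # 77 (percent lower odds of success)
    ```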

    From a practical standpoint, these are large effect sizes. So why are the variables statistically insignificant?

    The cause is a limited sample size of successful coaches. Out of 202 coaches remaining in the sample, just 23 were successful. Regardless of the coach’s background, odds are low they last more than a few seasons. If we look at the one category able to outperform previous head coaches, management hires, specifically:

    # Filter to management hires only
    manage = coach3[coach3['Management'] == 1]
    print(len(manage))
    print(manage['Success'].sum())

    The filtered dataset contains just 6 hires, of which just one is classified as a success. In other words, the entire effect was driven by a single successful observation. Thus, it would take a considerably larger sample size to be confident if differences exist.

    With a p-value of 0.202, the Internal variable comes the closest to statistical significance. Notably, however, the direction of the effect is actually the opposite of what was hypothesized: internal hires are less likely to be successful than external hires. Out of 26 internal hires, just one met the criteria for success.

    Conclusion

    In conclusion, this analysis was able to draw several key conclusions:

    Regardless of background, being an NBA coach is typically a short-lived job. It’s rare for a coach to last more than a few seasons.

    The common wisdom that NBA teams strongly prefer to hire previous head coaches holds true. More than half of hires already had NBA head coaching experience.

    If teams don’t hire an experienced head coach, they’re likely to hire an NBA assistant coach. Hires outside of these two categories are especially uncommon.

    Though they are frequently hired, there is no evidence to suggest NBA teams overly prioritize previous head coaches. To the contrary, previous head coaches stay in the job longer on average and are more likely to outlast their initial contract term—though neither of these differences are statistically significant.

    Despite high-profile anecdotes, there is no evidence to suggest that internal hires are more successful than external hires either.

    Note: All images were created by the author unless otherwise credited.
    The post What Statistics Can Tell Us About NBA Coaches appeared first on Towards Data Science.
    #what #statistics #can #tell #about
    What Statistics Can Tell Us About NBA Coaches
    Who gets hired as an NBA coach? How long does a typical coach last? And does their coaching background play any part in predicting success? This analysis was inspired by several key theories. First, there has been a common criticism among casual NBA fans that teams overly prefer hiring candidates with previous NBA head coaches experience. Consequently, this analysis aims to answer two related questions. First, is it true that NBA teams frequently re-hire candidates with previous head coaching experience? And second, is there any evidence that these candidates under-perform relative to other candidates? The second theory is that internal candidatesare often more successful than external candidates. This theory was derived from a pair of anecdotes. Two of the most successful coaches in NBA history, Gregg Popovich of San Antonio and Erik Spoelstra of Miami, were both internal hires. However, rigorous quantitative evidence is needed to test if this relationship holds over a larger sample. This analysis aims to explore these questions, and provide the code to reproduce the analysis in Python. The Data The codeand dataset for this project are available on Github here. The analysis was performed using Python in Google Colaboratory.  A prerequisite to this analysis was determining a way to measure coaching success quantitatively. I decided on a simple idea: the success of a coach would be best measured by the length of their tenure in that job. Tenure best represents the differing expectations that might be placed on a coach. A coach hired to a contending team would be expected to win games and generate deep playoff runs. A coach hired to a rebuilding team might be judged on the development of younger players and their ability to build a strong culture. If a coach meets expectations, the team will keep them around. Since there was no existing dataset with all of the required data, I collected the data myself from Wikipedia. 
I recorded every off-season coaching change from 1990 through 2021. Since the primary outcome variable is tenure, in-season coaching changes were excluded since these coaches often carried an “interim” tag—meaning they were intended to be temporary until a permanent replacement could be found. In addition, the following variables were collected: VariableDefinitionTeamThe NBA team the coach was hired forYearThe year the coach was hiredCoachThe name of the coachInternal?An indicator if the coach was internal or not—meaning they worked for the organization in some capacity immediately prior to being hired as head coachTypeThe background of the coach. Categories are Previous HC, Previous AC, College, Player, Management, and Foreign.YearsThe number of years a coach was employed in the role. For coaches fired mid-season, the value was counted as 0.5. First, the dataset is imported from its location in Google Drive. I also convert ‘Internal?’ into a dummy variable, replacing “Yes” with 1 and “No” with 0. from google.colab import drive drive.mountimport pandas as pd pd.set_option#Bring in the dataset coach = pd.read_csv.iloccoach= coach.map) coach This prints a preview of what the dataset looks like: In total, the dataset contains 221 coaching hires over this time.  Descriptive Statistics First, basic summary Statistics are calculated and visualized to determine the backgrounds of NBA head coaches. #Create chart of coaching background import matplotlib.pyplot as plt #Count number of coaches per category counts = coach.value_counts#Create chart plt.barplt.titleplt.figtextplt.xticksplt.ylabelplt.gca.spines.set_visibleplt.gca.spines.set_visiblefor i, value in enumerate: plt.text)*100,1)) + '%' + '+ ')', ha='center', fontsize=9) plt.savefigprint.sum/len)*100,1)) + " percent of coaches are internal.") Over half of coaching hires previously served as an NBA head coach, and nearly 90% had NBA coaching experience of some kind. 
This answers the first question posed—NBA teams show a strong preference for experienced head coaches. If you get hired once as an NBA coach, your odds of being hired again are much higher. Additionally, 13.6% of hires are internal, confirming that teams do not frequently hire from their own ranks. Second, I will explore the typical tenure of an NBA head coach. This can be visualized using a histogram. #Create histogram plt.histplt.titleplt.figtextplt.annotate', xy=, xytext=, arrowprops=dict, fontsize=9, color='black') plt.gca.spines.set_visibleplt.gca.spines.set_visibleplt.savefigplt.showcoach.sort_values#Calculate some stats with the data import numpy as np print) + " years is the median coaching tenure length.") print.sum/len)*100,1)) + " percent of coaches last five years or less.") print.sum/len*100,1)) + " percent of coaches last a year or less.") Using tenure as an indicator of success, the the data clearly shows that the large majority of coaches are unsuccessful. The median tenure is just 2.5 seasons. 18.1% of coaches last a single season or less, and barely 10% of coaches last more than 5 seasons. This can also be viewed as a survival analysis plot to see the drop-off at various points in time: #Survival analysis import matplotlib.ticker as mtick lst = np.arangesurv = pd.DataFramesurv= np.nan for i in range): surv.iloc=.sum/lenplt.stepplt.titleplt.xlabel') plt.figtextplt.gca.yaxis.set_major_formatter) plt.gca.spines.set_visibleplt.gca.spines.set_visibleplt.savefigplt.show Lastly, a box plot can be generated to see if there are any obvious differences in tenure based on coaching type. Boxplots also display outliers for each group. #Create a boxplot import seaborn as sns sns.boxplotplt.titleplt.gca.spines.set_visibleplt.gca.spines.set_visibleplt.xlabelplt.xticksplt.figtextplt.savefigplt.show There are some differences between the groups. Aside from management hires, previous head coaches have the longest average tenure at 3.3 years. 
However, since many of the groups have small sample sizes, we need to use more advanced techniques to test if the differences are statistically significant. Statistical Analysis First, to test if either Type or Internal has a statistically significant difference among the group means, we can use ANOVA: #ANOVA import statsmodels.api as sm from statsmodels.formula.api import ols am = ols+ C', data=coach).fitanova_table = sm.stats.anova_lmprintThe results show high p-values and low F-stats—indicating no evidence of statistically significant difference in means. Thus, the initial conclusion is that there is no evidence NBA teams are under-valuing internal candidates or over-valuing previous head coaching experience as initially hypothesized.  However, there is a possible distortion when comparing group averages. NBA coaches are signed to contracts that typically run between three and five years. Teams typically have to pay out the remainder of the contract even if coaches are dismissed early for poor performance. A coach that lasts two years may be no worse than one that lasts three or four years—the difference could simply be attributable to the length and terms of the initial contract, which is in turn impacted by the desirability of the coach in the job market. Since coaches with prior experience are highly coveted, they may use that leverage to negotiate longer contracts and/or higher salaries, both of which could deter teams from terminating their employment too early. To account for this possibility, the outcome can be treated as binary rather than continuous. If a coach lasted more than 5 seasons, it is highly likely they completed at least their initial contract term and the team chose to extend or re-sign them. These coaches will be treated as successes, with those having a tenure of five years or less categorized as unsuccessful. 
What Statistics Can Tell Us About NBA Coaches
Who gets hired as an NBA coach? How long does a typical coach last? And does their coaching background play any part in predicting success?

This analysis was inspired by several key theories. First, there has been a common criticism among casual NBA fans that teams overly prefer hiring candidates with previous NBA head coaching experience. Consequently, this analysis aims to answer two related questions. First, is it true that NBA teams frequently re-hire candidates with previous head coaching experience? And second, is there any evidence that these candidates under-perform relative to other candidates?

The second theory is that internal candidates (though infrequently hired) are often more successful than external candidates. This theory was derived from a pair of anecdotes: two of the most successful coaches in NBA history, Gregg Popovich of San Antonio and Erik Spoelstra of Miami, were both internal hires. However, rigorous quantitative evidence is needed to test whether this relationship holds over a larger sample. This analysis aims to explore these questions, and provide the code to reproduce the analysis in Python.

The Data

The code (contained in a Jupyter notebook) and dataset for this project are available on Github here. The analysis was performed using Python in Google Colaboratory.

A prerequisite to this analysis was determining a way to measure coaching success quantitatively. I decided on a simple idea: the success of a coach would be best measured by the length of their tenure in that job. Tenure best represents the differing expectations that might be placed on a coach. A coach hired to a contending team would be expected to win games and generate deep playoff runs. A coach hired to a rebuilding team might be judged on the development of younger players and their ability to build a strong culture. If a coach meets expectations (whatever those may be), the team will keep them around.
Since there was no existing dataset with all of the required data, I collected the data myself from Wikipedia. I recorded every off-season coaching change from 1990 through 2021. Since the primary outcome variable is tenure, in-season coaching changes were excluded since these coaches often carried an “interim” tag—meaning they were intended to be temporary until a permanent replacement could be found.

In addition, the following variables were collected:

Team: The NBA team the coach was hired for
Year: The year the coach was hired
Coach: The name of the coach
Internal?: An indicator if the coach was internal or not—meaning they worked for the organization in some capacity immediately prior to being hired as head coach
Type: The background of the coach. Categories are Previous HC (prior NBA head coaching experience), Previous AC (prior NBA assistant coaching experience, but no head coaching experience), College (head coach of a college team), Player (a former NBA player with no coaching experience), Management (someone with front office experience but no coaching experience), and Foreign (someone coaching outside of North America with no NBA coaching experience)
Years: The number of years a coach was employed in the role. For coaches fired mid-season, the value was counted as 0.5

First, the dataset is imported from its location in Google Drive. I also convert ‘Internal?’ into a dummy variable, replacing “Yes” with 1 and “No” with 0.

from google.colab import drive
drive.mount('/content/drive')

import pandas as pd
pd.set_option('display.max_columns', None)

#Bring in the dataset
coach = pd.read_csv('/content/drive/MyDrive/Python_Files/Coaches.csv', on_bad_lines = 'skip').iloc[:,0:6]
coach['Internal'] = coach['Internal?'].map(dict(Yes=1, No=0))
coach

This prints a preview of what the dataset looks like. In total, the dataset contains 221 coaching hires over this time.
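As a quick aside, the `map(dict(Yes=1, No=0))` conversion used above can be sanity-checked on a toy Series. One caveat worth knowing: any value not in the dictionary (for example, a lowercase "yes" typo in the spreadsheet) silently becomes NaN rather than raising an error. A minimal sketch (toy data, not the article's dataset):

```python
import pandas as pd

# Toy version of the Yes/No -> 1/0 conversion used for the Internal column
s = pd.Series(["Yes", "No", "No", "yes"])  # "yes" simulates a data-entry typo
converted = s.map(dict(Yes=1, No=0))

print(converted.tolist())        # the unmapped "yes" becomes NaN
print(converted.isna().sum())    # 1 missing value flags the typo
```

Checking `isna().sum()` after a `map` like this is a cheap way to catch inconsistent labels before they propagate into the analysis.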
Descriptive Statistics

First, basic summary statistics are calculated and visualized to determine the backgrounds of NBA head coaches.

#Create chart of coaching background
import matplotlib.pyplot as plt

#Count number of coaches per category
counts = coach['Type'].value_counts()

#Create chart
plt.bar(counts.index, counts.values, color = 'blue', edgecolor = 'black')
plt.title('Where Do NBA Coaches Come From?')
plt.figtext(0.76, -0.1, "Made by Brayden Gerrard", ha="center")
plt.xticks(rotation = 45)
plt.ylabel('Number of Coaches')
plt.gca().spines['top'].set_visible(False)
plt.gca().spines['right'].set_visible(False)
for i, value in enumerate(counts.values):
    plt.text(i, value + 1, str(round((value/sum(counts.values))*100,1)) + '%' + ' (' + str(value) + ')', ha='center', fontsize=9)
plt.savefig('coachtype.png', bbox_inches = 'tight')

print(str(round(((coach['Internal'] == 1).sum()/len(coach))*100,1)) + " percent of coaches are internal.")

Over half of coaching hires previously served as an NBA head coach, and nearly 90% had NBA coaching experience of some kind. This answers the first question posed—NBA teams show a strong preference for experienced head coaches. If you get hired once as an NBA coach, your odds of being hired again are much higher. Additionally, 13.6% of hires are internal, confirming that teams do not frequently hire from their own ranks.

Second, I will explore the typical tenure of an NBA head coach. This can be visualized using a histogram.
#Create histogram
plt.hist(coach['Years'], bins = 12, edgecolor = 'black', color = 'blue')
plt.title('Distribution of Coaching Tenure')
plt.figtext(0.76, 0, "Made by Brayden Gerrard", ha="center")
plt.annotate('Erik Spoelstra (MIA)', xy=(16.4, 2), xytext=(14 + 1, 15), arrowprops=dict(facecolor='black', shrink=0.1), fontsize=9, color='black')
plt.gca().spines['top'].set_visible(False)
plt.gca().spines['right'].set_visible(False)
plt.savefig('tenurehist.png', bbox_inches = 'tight')
plt.show()

coach.sort_values('Years', ascending = False)

#Calculate some stats with the data
import numpy as np
print(str(np.median(coach['Years'])) + " years is the median coaching tenure length.")
print(str(round(((coach['Years'] <= 5).sum()/len(coach))*100,1)) + " percent of coaches last five years or less.")
print(str(round((coach['Years'] <= 1).sum()/len(coach)*100,1)) + " percent of coaches last a year or less.")

Using tenure as an indicator of success, the data clearly shows that the large majority of coaches are unsuccessful. The median tenure is just 2.5 seasons. 18.1% of coaches last a single season or less, and barely 10% of coaches last more than 5 seasons.
This can also be viewed as a survival analysis plot to see the drop-off at various points in time:

#Survival analysis
import matplotlib.ticker as mtick
lst = np.arange(0,18,0.5)
surv = pd.DataFrame(lst, columns = ['Period'])
surv['Number'] = np.nan
for i in range(0,len(surv)):
    surv.iloc[i,1] = (coach['Years'] >= surv.iloc[i,0]).sum()/len(coach)
plt.step(surv['Period'],surv['Number'])
plt.title('NBA Coach Survival Rate')
plt.xlabel('Coaching Tenure (Years)')
plt.figtext(0.76, -0.05, "Made by Brayden Gerrard", ha="center")
plt.gca().yaxis.set_major_formatter(mtick.PercentFormatter(1))
plt.gca().spines['top'].set_visible(False)
plt.gca().spines['right'].set_visible(False)
plt.savefig('coachsurvival.png', bbox_inches = 'tight')
plt.show()

Lastly, a box plot can be generated to see if there are any obvious differences in tenure based on coaching type. Boxplots also display outliers for each group.

#Create a boxplot
import seaborn as sns
sns.boxplot(data=coach, x='Type', y='Years')
plt.title('Coaching Tenure by Coach Type')
plt.gca().spines['top'].set_visible(False)
plt.gca().spines['right'].set_visible(False)
plt.xlabel('')
plt.xticks(rotation = 30, ha = 'right')
plt.figtext(0.76, -0.1, "Made by Brayden Gerrard", ha="center")
plt.savefig('coachtypeboxplot.png', bbox_inches = 'tight')
plt.show()

There are some differences between the groups. Aside from management hires (which have a sample of just six), previous head coaches have the longest average tenure at 3.3 years. However, since many of the groups have small sample sizes, we need to use more advanced techniques to test if the differences are statistically significant.
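One hedged aside before the formal tests: since tenure is heavily right-skewed (most coaches last a season or two, while a few outliers last over a decade), a rank-based test such as Kruskal-Wallis is a useful robustness check alongside a comparison of group means. The sketch below uses made-up tenure values, not the article's dataset:

```python
from scipy.stats import kruskal

# Hypothetical tenure samples (years) for three coach types -- illustrative only
prev_hc = [0.5, 1, 2, 3, 3.5, 5, 8]
prev_ac = [1, 1.5, 2, 2.5, 3, 4]
college = [0.5, 1, 1, 2, 2.5]

# Kruskal-Wallis compares rank distributions rather than means,
# so a single long-tenured outlier cannot dominate the result
stat, p = kruskal(prev_hc, prev_ac, college)
print(f"H = {stat:.2f}, p = {p:.3f}")
```

Because it works on ranks, this test is less sensitive to the long right tail created by coaches like Popovich and Spoelstra than a test on raw means would be.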
Statistical Analysis

First, to test whether either Type or Internal has a statistically significant difference among the group means, we can use ANOVA:

#ANOVA
import statsmodels.api as sm
from statsmodels.formula.api import ols
am = ols('Years ~ C(Type) + C(Internal)', data=coach).fit()
anova_table = sm.stats.anova_lm(am, typ=2)
print(anova_table)

The results show high p-values and low F-stats—indicating no evidence of a statistically significant difference in means. Thus, the initial conclusion is that there is no evidence NBA teams are under-valuing internal candidates or over-valuing previous head coaching experience as initially hypothesized.

However, there is a possible distortion when comparing group averages. NBA coaches are signed to contracts that typically run between three and five years. Teams typically have to pay out the remainder of the contract even if coaches are dismissed early for poor performance. A coach that lasts two years may be no worse than one that lasts three or four years—the difference could simply be attributable to the length and terms of the initial contract, which is in turn impacted by the desirability of the coach in the job market. Since coaches with prior experience are highly coveted, they may use that leverage to negotiate longer contracts and/or higher salaries, both of which could deter teams from terminating their employment too early.

To account for this possibility, the outcome can be treated as binary rather than continuous. If a coach lasted more than 5 seasons, it is highly likely they completed at least their initial contract term and the team chose to extend or re-sign them. These coaches will be treated as successes, with those having a tenure of five years or less categorized as unsuccessful. To run this analysis, all coaching hires from 2020 and 2021 must be excluded, since they have not yet been able to eclipse 5 seasons.
With a binary dependent variable, a logistic regression can be used to test whether any of the variables predict coaching success. Internal and Type are both converted to dummy variables. Since previous head coaches represent the most common coaching hires, I set this as the “reference” category against which the others will be measured. Additionally, the dataset contains just one foreign-hired coach (David Blatt), so this observation is dropped from the analysis.

#Logistic regression
coach3 = coach[coach['Year'] < 2020].copy()
coach3.loc[:, 'Success'] = np.where(coach3['Years'] > 5, 1, 0)
coach_type_dummies = pd.get_dummies(coach3['Type'], prefix = 'Type').astype(int)
coach_type_dummies.drop(columns=['Type_Previous HC'], inplace=True)
coach3 = pd.concat([coach3, coach_type_dummies], axis = 1)

#Drop foreign category / David Blatt since n = 1
coach3 = coach3.drop(columns=['Type_Foreign'])
coach3 = coach3.loc[coach3['Coach'] != "David Blatt"]
print(coach3['Success'].value_counts())

x = coach3[['Internal','Type_Management','Type_Player','Type_Previous AC', 'Type_College']]
x = sm.add_constant(x)
y = coach3['Success']
logm = sm.Logit(y,x)
logm.r = logm.fit(maxiter=1000)
print(logm.r.summary())

#Convert coefficients to odds ratio
print(str(np.exp(-1.4715)) + " is the odds ratio for internal.") #Internal coefficient
print(np.exp(1.0025)) #Management
print(np.exp(-39.6956)) #Player
print(np.exp(-0.3626)) #Previous AC
print(np.exp(-0.6901)) #College

Consistent with the ANOVA results, none of the variables are statistically significant under any conventional threshold. However, closer examination of the coefficients tells an interesting story. The beta coefficients represent the change in the log-odds of the outcome. Since this is unintuitive to interpret, the coefficients can be converted to an odds ratio as follows: Internal has an odds ratio of 0.23—indicating that internal candidates are 77% less likely to be successful compared to external candidates.
Management has an odds ratio of 2.725, indicating these candidates are 172.5% more likely to be successful. The odds ratio for players is effectively zero, 0.696 for previous assistant coaches, and 0.5 for college coaches. Since three of the four coaching type dummy variables have an odds ratio under one, only management hires were more likely to be successful than previous head coaches.

From a practical standpoint, these are large effect sizes. So why are the variables statistically insignificant? The cause is a limited sample size of successful coaches. Out of 202 coaches remaining in the sample, just 23 (11.4%) were successful. Regardless of the coach’s background, odds are low they last more than a few seasons. If we look specifically at the one category able to outperform previous head coaches (management hires):

# Filter to management
manage = coach3[coach3['Type_Management'] == 1]
print(manage['Success'].value_counts())
print(manage)

The filtered dataset contains just 6 hires—of which just one (Steve Kerr with Golden State) is classified as a success. In other words, the entire effect was driven by a single successful observation. Thus, it would take a considerably larger sample size to be confident that differences exist.

With a p-value of 0.202, the Internal variable comes the closest to statistical significance (though it still falls well short of a typical alpha of 0.05). Notably, however, the direction of the effect is actually the opposite of what was hypothesized—internal hires are less likely to be successful than external hires. Out of 26 internal hires, just one (Erik Spoelstra of Miami) met the criteria for success.

Conclusion

In conclusion, this analysis was able to draw several key conclusions:

Regardless of background, being an NBA coach is typically a short-lived job. It’s rare for a coach to last more than a few seasons.

The common wisdom that NBA teams strongly prefer to hire previous head coaches holds true.
More than half of hires already had NBA head coaching experience. If teams don’t hire an experienced head coach, they’re likely to hire an NBA assistant coach. Hires outside of these two categories are especially uncommon.

Though they are frequently hired, there is no evidence to suggest NBA teams overly prioritize previous head coaches. To the contrary, previous head coaches stay in the job longer on average and are more likely to outlast their initial contract term—though neither of these differences is statistically significant.

Despite high-profile anecdotes, there is no evidence to suggest that internal hires are more successful than external hires either.

Note: All images were created by the author unless otherwise credited.

The post What Statistics Can Tell Us About NBA Coaches appeared first on Towards Data Science.