• Customer feedback analysis: understanding the hidden truth

    Customer feedback seems to reflect preferences, but deeper analysis reveals a different reality. While tools like surveys and focus groups provide structured insights, they frequently fail to capture real customer behaviour. Social desirability, hypothetical scenarios, and the gap between reported preferences and actual decisions create a misleading picture of customer needs and expectations.

    Take the case of an electronics company testing new colours for CD players in a focus group. Participants enthusiastically selected vibrant colours. But when they could take a CD player home, every participant chose the standard black or grey version.

    This example shows that what customers claim to want and what they actually choose can be two different things. That is why customer feedback analysis needs to go beyond just listening to what people say.

    Pitfalls of NPS surveys

    Another widely used customer feedback tool is the Net Promoter Score (NPS), which measures satisfaction and loyalty. We strongly believe in the value of NPS, but only if customers share their scores voluntarily and without pressure.

    Sadly, many businesses aggressively push for high scores, sometimes even pressuring participants to rate them a 9 or 10 by suggesting that bonuses, commissions, and company targets depend on a great rating. This manipulation distorts the results and reduces the reliability of NPS. The real value of NPS lies in the follow-up question: why did you give that score? A 3/10 rating without an explanation offers little actionable insight.

    Additionally, relying solely on NPS scores without any context can be misleading. A high score might indicate satisfaction, but does it correlate with repeat purchases or customer engagement? Similarly, a low score without behavioural context might not mean you’re losing customers. To get a clearer picture, effective customer feedback analysis should integrate multiple data sources.

    Beware of SINGLE-SOURCE FEEDBACK

    One of the most common mistakes in customer feedback analysis is relying on a single data source. Many companies base their insights exclusively on survey responses without integrating other crucial data points such as:

    Customer service interactions

    Purchase history and brand engagement

    Website and app behaviour

    Social media engagement

    For example, consider survey participants who gave feedback but have not interacted with your brand in over a year. How valuable is their input? Without cross-referencing different sources, it is quite challenging to:

    Determine which research method is most appropriate for a specific question

    Integrate multiple insights into a coherent strategy

    Take meaningful action based on the feedback

    Prioritise which feedback is truly valuable

    A thorough customer feedback analysis ensures that brands don’t make decisions based on isolated responses but instead consider real behavioural patterns and trends.

    "Customer feedback alone isn’t enough, you need behavioural data to truly understand what drives decisions and loyalty."

    Hans Palmers, Managing Partner and Digital & e-Commerce Strategist

    Keep it SIMPLE

    Traditional surveys often make respondents overthink their answers, leading to biased results. A smarter approach is to use implicit A/B testing or pairwise comparison to gather more reliable insights. For example, instead of asking customers to evaluate multiple options at once, you can present them with two choices at a time. This makes decision-making easier and more instinctive while reducing the pressure of finding the ‘right’ answer.

    Too much data, NOT ENOUGH INSIGHT

    The rise of new tools and methodologies, like eye tracking, behavioural data, feedback forms, and focus groups, has led to an overwhelming amount of data. However, more data does not automatically translate to better insights. Without structured customer feedback analysis, businesses often find themselves drowning in numbers without clear direction on how to act.

    The paradox is that in challenging times, the thirst for knowledge increases. Businesses collect more data in search of clarity, but all too often, they end up with noise instead of answers. That’s why it’s essential to refocus on the core of the brand and its true customer experience.

    "Collecting customer feedback is easy. The real challenge is turning it into meaningful actions."

    Hans Palmers, Managing Partner and Digital & e-Commerce Strategist

    How can June20 help you?

    At June20, we believe customer feedback can be a powerful driver of growth. We focus on methodology and insight rather than just tools. The key to unlocking valuable feedback isn’t the tool you use; it’s how you frame the question, interpret responses, and integrate findings into a broader strategy. Our approach ensures that:

    The right methodologies are applied to gather meaningful insights

    Data sources are integrated logically to form a complete picture

    Insights are transformed into clear, actionable recommendations with maximum impact

    Rather than getting lost in data overload, we help brands cut through the noise and make informed, strategic decisions.

    Want useful customer feedback? Let’s talk.

    Biography: Hans Palmers

    Hans began his professional journey as a Web Developer and Mentor. After three years, he joined TBWA, where he led a team in digital solutions for a decade. He then shared his digital expertise at KUL, EHSAL and Thomas More for over ten years. Embarking on a new venture, Hans founded Mundo Digitalis as one of its founding partners; the agency specialised in digital solutions, and he successfully led it for over 11 years. Throughout these years, Hans did pioneering work in e-commerce and online banking. Recently, Mundo Digitalis integrated with June20, where Hans holds the position of Managing Partner & E-Commerce Strategist.

  • What Statistics Can Tell Us About NBA Coaches

    Who gets hired as an NBA coach? How long does a typical coach last? And does their coaching background play any part in predicting success?

    This analysis was inspired by several key theories. First, there has been a common criticism among casual NBA fans that teams overly prefer hiring candidates with previous NBA head coaching experience.

    Consequently, this analysis aims to answer two related questions. First, is it true that NBA teams frequently re-hire candidates with previous head coaching experience? And second, is there any evidence that these candidates under-perform relative to other candidates?

    The second theory is that internal candidates (though infrequently hired) are often more successful than external candidates. This theory was derived from a pair of anecdotes. Two of the most successful coaches in NBA history, Gregg Popovich of San Antonio and Erik Spoelstra of Miami, were both internal hires. However, rigorous quantitative evidence is needed to test if this relationship holds over a larger sample.

    This analysis aims to explore these questions, and provide the code to reproduce the analysis in Python.

    The Data

    The code (contained in a Jupyter notebook) and dataset for this project are available on GitHub here. The analysis was performed using Python in Google Colaboratory.

    A prerequisite to this analysis was determining a way to measure coaching success quantitatively. I decided on a simple idea: the success of a coach would be best measured by the length of their tenure in that job. Tenure best represents the differing expectations that might be placed on a coach. A coach hired to a contending team would be expected to win games and generate deep playoff runs. A coach hired to a rebuilding team might be judged on the development of younger players and their ability to build a strong culture. If a coach meets expectations (whatever those may be), the team will keep them around.

    Since there was no existing dataset with all of the required data, I collected the data myself from Wikipedia. I recorded every off-season coaching change from 1990 through 2021. Since the primary outcome variable is tenure, in-season coaching changes were excluded since these coaches often carried an “interim” tag—meaning they were intended to be temporary until a permanent replacement could be found.

    In addition, the following variables were collected:

    Team: The NBA team the coach was hired for
    Year: The year the coach was hired
    Coach: The name of the coach
    Internal?: An indicator of whether the coach was internal—meaning they worked for the organization in some capacity immediately prior to being hired as head coach
    Type: The background of the coach. Categories are Previous HC (prior NBA head coaching experience), Previous AC (prior NBA assistant coaching experience, but no head coaching experience), College (head coach of a college team), Player (a former NBA player with no coaching experience), Management (someone with front office experience but no coaching experience), and Foreign (someone coaching outside of North America with no NBA coaching experience).
    Years: The number of years a coach was employed in the role. For coaches fired mid-season, the value was counted as 0.5.

    First, the dataset is imported from its location in Google Drive. I also convert ‘Internal?’ into a dummy variable, replacing “Yes” with 1 and “No” with 0.

    from google.colab import drive
    drive.mount('/content/drive')

    import pandas as pd
    pd.set_option('display.max_columns', None)

    #Bring in the dataset
    coach = pd.read_csv('/content/drive/MyDrive/Python_Files/Coaches.csv', on_bad_lines = 'skip').iloc[:,0:6]
    coach['Internal'] = coach['Internal?'].map(dict(Yes=1, No=0))
    coach

    This prints a preview of what the dataset looks like:

    In total, the dataset contains 221 coaching hires over this time. 

    Descriptive Statistics

    First, basic summary statistics are calculated and visualized to determine the backgrounds of NBA head coaches.

    #Create chart of coaching background
    import matplotlib.pyplot as plt

    #Count number of coaches per category
    counts = coach['Type'].value_counts()

    #Create chart
    plt.bar(counts.index, counts.values, color = 'blue', edgecolor = 'black')
    plt.title('Where Do NBA Coaches Come From?')
    plt.figtext(0.76, -0.1, "Made by Brayden Gerrard", ha="center")
    plt.xticks(rotation = 45)
    plt.ylabel('Number of Coaches')
    plt.gca().spines['top'].set_visible(False)
    plt.gca().spines['right'].set_visible(False)
    for i, value in enumerate(counts.values):
        plt.text(i, value + 1, str(round((value/sum(counts.values))*100,1)) + '%' + ' (' + str(value) + ')', ha='center', fontsize=9)
    plt.savefig('coachtype.png', bbox_inches = 'tight')

    print(str(round(((coach['Internal'] == 1).sum()/len(coach))*100,1)) + " percent of coaches are internal.")

    Over half of coaching hires previously served as an NBA head coach, and nearly 90% had NBA coaching experience of some kind. This answers the first question posed—NBA teams show a strong preference for experienced head coaches. If you get hired once as an NBA coach, your odds of being hired again are much higher. Additionally, 13.6% of hires are internal, confirming that teams do not frequently hire from their own ranks.

    Second, I will explore the typical tenure of an NBA head coach. This can be visualized using a histogram.

    #Create histogram
    plt.hist(coach['Years'], bins = 12, edgecolor = 'black', color = 'blue')
    plt.title('Distribution of Coaching Tenure')
    plt.figtext(0.76, 0, "Made by Brayden Gerrard", ha="center")
    plt.annotate('Erik Spoelstra (MIA)', xy=(16.4, 2), xytext=(14 + 1, 15),
                 arrowprops=dict(facecolor='black', shrink=0.1), fontsize=9, color='black')
    plt.gca().spines['top'].set_visible(False)
    plt.gca().spines['right'].set_visible(False)
    plt.savefig('tenurehist.png', bbox_inches = 'tight')
    plt.show()

    coach.sort_values('Years', ascending = False)

    #Calculate some stats with the data
    import numpy as np

    print(str(np.median(coach['Years'])) + " years is the median coaching tenure length.")
    print(str(round(((coach['Years'] <= 5).sum()/len(coach))*100,1)) + " percent of coaches last five years or less.")
    print(str(round(((coach['Years'] <= 1).sum()/len(coach))*100,1)) + " percent of coaches last a year or less.")

    Using tenure as an indicator of success, the data clearly shows that the large majority of coaches are unsuccessful. The median tenure is just 2.5 seasons. 18.1% of coaches last a single season or less, and barely 10% of coaches last more than 5 seasons.

    This can also be viewed as a survival analysis plot to see the drop-off at various points in time:

    #Survival analysis
    import matplotlib.ticker as mtick

    lst = np.arange(0, 18, 0.5)
    surv = pd.DataFrame(lst, columns = ['Period'])
    surv['Number'] = np.nan

    for i in range(0, len(surv)):
        surv.iloc[i,1] = (coach['Years'] >= surv.iloc[i,0]).sum()/len(coach)

    plt.step(surv['Period'], surv['Number'])
    plt.title('NBA Coach Survival Rate')
    plt.xlabel('Coaching Tenure (Years)')
    plt.figtext(0.76, -0.05, "Made by Brayden Gerrard", ha="center")
    plt.gca().yaxis.set_major_formatter(mtick.PercentFormatter(1))
    plt.gca().spines['top'].set_visible(False)
    plt.gca().spines['right'].set_visible(False)
    plt.savefig('coachsurvival.png', bbox_inches = 'tight')
    plt.show()

    Lastly, a box plot can be generated to see if there are any obvious differences in tenure based on coaching type. Boxplots also display outliers for each group.

    #Create a boxplot
    import seaborn as sns

    sns.boxplot(data=coach, x='Type', y='Years')
    plt.title('Coaching Tenure by Coach Type')
    plt.gca().spines['top'].set_visible(False)
    plt.gca().spines['right'].set_visible(False)
    plt.xlabel('')
    plt.xticks(rotation = 30, ha = 'right')
    plt.figtext(0.76, -0.1, "Made by Brayden Gerrard", ha="center")
    plt.savefig('coachtypeboxplot.png', bbox_inches = 'tight')
    plt.show()

    There are some differences between the groups. Aside from management hires (which have a sample of just six), previous head coaches have the longest average tenure at 3.3 years. However, since many of the groups have small sample sizes, we need to use more advanced techniques to test if the differences are statistically significant.
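
    As a quick sanity check, the group means and sample sizes behind the boxplot can also be tabulated directly; a minimal sketch, assuming the same coach dataframe and column names used above:

    #Average tenure and number of hires per coaching background
    print(coach.groupby('Type')['Years'].agg(['mean', 'count']).sort_values('mean', ascending=False))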

    Statistical Analysis

    First, to test if either Type or Internal has a statistically significant difference among the group means, we can use ANOVA:

    #ANOVA
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    am = ols('Years ~ C(Type) + C(Internal)', data=coach).fit()
    anova_table = sm.stats.anova_lm(am, typ=2)
    print(anova_table)

    The results show high p-values and low F-stats—indicating no evidence of a statistically significant difference in means. Thus, the initial conclusion is that there is no evidence NBA teams are under-valuing internal candidates or over-valuing previous head coaching experience as initially hypothesized.

    However, there is a possible distortion when comparing group averages. NBA coaches are signed to contracts that typically run between three and five years. Teams typically have to pay out the remainder of the contract even if coaches are dismissed early for poor performance. A coach that lasts two years may be no worse than one that lasts three or four years—the difference could simply be attributable to the length and terms of the initial contract, which is in turn impacted by the desirability of the coach in the job market. Since coaches with prior experience are highly coveted, they may use that leverage to negotiate longer contracts and/or higher salaries, both of which could deter teams from terminating their employment too early.

    To account for this possibility, the outcome can be treated as binary rather than continuous. If a coach lasted more than 5 seasons, it is highly likely they completed at least their initial contract term and the team chose to extend or re-sign them. These coaches will be treated as successes, with those having a tenure of five years or less categorized as unsuccessful. To run this analysis, all coaching hires from 2020 and 2021 must be excluded, since they have not yet been able to eclipse 5 seasons.

    With a binary dependent variable, a logistic regression can be used to test if any of the variables predict coaching success. Internal and Type are both converted to dummy variables. Since previous head coaches represent the most common coaching hires, I set this as the “reference” category against which the others will be measured. Additionally, the dataset contains just one foreign-hired coach (David Blatt), so this observation is dropped from the analysis.

    #Logistic regression
    coach3 = coach[coach['Year']<2020]

    coach3.loc[:, 'Success'] = np.where(coach3['Years'] > 5, 1, 0)

    coach_type_dummies = pd.get_dummies(coach3['Type'], prefix = 'Type').astype(int)
    coach_type_dummies.drop(columns=['Type_Previous HC'], inplace=True)
    coach3 = pd.concat([coach3, coach_type_dummies], axis = 1)

    #Drop foreign category / David Blatt since n = 1
    coach3 = coach3.drop(columns=['Type_Foreign'])
    coach3 = coach3.loc[coach3['Coach'] != "David Blatt"]

    print(coach3['Success'].value_counts())

    x = coach3[['Internal','Type_Management','Type_Player','Type_Previous AC', 'Type_College']]
    x = sm.add_constant(x)
    y = coach3['Success']

    logm = sm.Logit(y,x)
    logm.r = logm.fit(maxiter=1000)
    print(logm.r.summary())

    #Convert coefficients to odds ratio
    print(str(np.exp(-1.4715)) + " is the odds ratio for internal.") #Internal coefficient
    print(np.exp(1.0025)) #Management
    print(np.exp(-39.6956)) #Player
    print(np.exp(-0.3626)) #Previous AC
    print(np.exp(-0.6901)) #College

    Consistent with ANOVA results, none of the variables are statistically significant under any conventional threshold. However, closer examination of the coefficients tells an interesting story.

    The beta coefficients represent the change in the log-odds of the outcome. Since this is unintuitive to interpret, the coefficients can be converted to an Odds Ratio as follows:
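
    For reference, the standard conversion from a logistic regression coefficient to an odds ratio is OR = exp(β), and the corresponding percentage change in the odds is (OR - 1) × 100%. For example, the Internal coefficient of -1.4715 used in the code above gives exp(-1.4715) ≈ 0.23, i.e. roughly 77% lower odds of success.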

    Internal has an odds ratio of 0.23—indicating that internal candidates are 77% less likely to be successful compared to external candidates. Management has an odds ratio of 2.725, indicating these candidates are 172.5% more likely to be successful. The odds ratio for players is effectively zero, 0.696 for previous assistant coaches, and 0.5 for college coaches. Since three out of four coaching type dummy variables have an odds ratio under one, only management hires were more likely to be successful than previous head coaches.

    From a practical standpoint, these are large effect sizes. So why are the variables statistically insignificant?

    The cause is a limited sample size of successful coaches. Out of 202 coaches remaining in the sample, just 23 (11.4%) were successful. Regardless of the coach’s background, odds are low they last more than a few seasons. If we look specifically at the one category able to outperform previous head coaches (management hires):

    # Filter to management
    manage = coach3[coach3['Type_Management'] == 1]
    print(manage['Success'].value_counts())
    print(manage)

    The filtered dataset contains just 6 hires—of which just one (Steve Kerr with Golden State) is classified as a success. In other words, the entire effect was driven by a single successful observation. Thus, it would take a considerably larger sample size to be confident that differences exist.

    With a p-value of 0.202, the Internal variable comes the closest to statistical significance (though it still falls well short of a typical alpha of 0.05). Notably, however, the direction of the effect is actually the opposite of what was hypothesized—internal hires are less likely to be successful than external hires. Out of 26 internal hires, just one (Erik Spoelstra of Miami) met the criteria for success.

    Conclusion

    In conclusion, this analysis yielded several key findings:

    Regardless of background, being an NBA coach is typically a short-lived job. It’s rare for a coach to last more than a few seasons.

    The common wisdom that NBA teams strongly prefer to hire previous head coaches holds true. More than half of hires already had NBA head coaching experience.

    If teams don’t hire an experienced head coach, they’re likely to hire an NBA assistant coach. Hires outside of these two categories are especially uncommon.

    Though they are frequently hired, there is no evidence to suggest NBA teams overly prioritize previous head coaches. On the contrary, previous head coaches stay in the job longer on average and are more likely to outlast their initial contract term—though neither of these differences is statistically significant.

    Despite high-profile anecdotes, there is no evidence to suggest that internal hires are more successful than external hires either.

    Note: All images were created by the author unless otherwise credited.
    The post What Statistics Can Tell Us About NBA Coaches appeared first on Towards Data Science.
    #what #statistics #can #tell #about
    What Statistics Can Tell Us About NBA Coaches
    Who gets hired as an NBA coach? How long does a typical coach last? And does their coaching background play any part in predicting success? This analysis was inspired by several key theories. First, there has been a common criticism among casual NBA fans that teams overly prefer hiring candidates with previous NBA head coaches experience. Consequently, this analysis aims to answer two related questions. First, is it true that NBA teams frequently re-hire candidates with previous head coaching experience? And second, is there any evidence that these candidates under-perform relative to other candidates? The second theory is that internal candidatesare often more successful than external candidates. This theory was derived from a pair of anecdotes. Two of the most successful coaches in NBA history, Gregg Popovich of San Antonio and Erik Spoelstra of Miami, were both internal hires. However, rigorous quantitative evidence is needed to test if this relationship holds over a larger sample. This analysis aims to explore these questions, and provide the code to reproduce the analysis in Python. The Data The codeand dataset for this project are available on Github here. The analysis was performed using Python in Google Colaboratory.  A prerequisite to this analysis was determining a way to measure coaching success quantitatively. I decided on a simple idea: the success of a coach would be best measured by the length of their tenure in that job. Tenure best represents the differing expectations that might be placed on a coach. A coach hired to a contending team would be expected to win games and generate deep playoff runs. A coach hired to a rebuilding team might be judged on the development of younger players and their ability to build a strong culture. If a coach meets expectations, the team will keep them around. Since there was no existing dataset with all of the required data, I collected the data myself from Wikipedia. I recorded every off-season coaching change from 1990 through 2021. Since the primary outcome variable is tenure, in-season coaching changes were excluded since these coaches often carried an “interim” tag—meaning they were intended to be temporary until a permanent replacement could be found. In addition, the following variables were collected: VariableDefinitionTeamThe NBA team the coach was hired forYearThe year the coach was hiredCoachThe name of the coachInternal?An indicator if the coach was internal or not—meaning they worked for the organization in some capacity immediately prior to being hired as head coachTypeThe background of the coach. Categories are Previous HC, Previous AC, College, Player, Management, and Foreign.YearsThe number of years a coach was employed in the role. For coaches fired mid-season, the value was counted as 0.5. First, the dataset is imported from its location in Google Drive. I also convert ‘Internal?’ into a dummy variable, replacing “Yes” with 1 and “No” with 0. from google.colab import drive drive.mountimport pandas as pd pd.set_option#Bring in the dataset coach = pd.read_csv.iloccoach= coach.map) coach This prints a preview of what the dataset looks like: In total, the dataset contains 221 coaching hires over this time.  Descriptive Statistics First, basic summary Statistics are calculated and visualized to determine the backgrounds of NBA head coaches. 
#Create chart of coaching background import matplotlib.pyplot as plt #Count number of coaches per category counts = coach.value_counts#Create chart plt.barplt.titleplt.figtextplt.xticksplt.ylabelplt.gca.spines.set_visibleplt.gca.spines.set_visiblefor i, value in enumerate: plt.text)*100,1)) + '%' + '+ ')', ha='center', fontsize=9) plt.savefigprint.sum/len)*100,1)) + " percent of coaches are internal.") Over half of coaching hires previously served as an NBA head coach, and nearly 90% had NBA coaching experience of some kind. This answers the first question posed—NBA teams show a strong preference for experienced head coaches. If you get hired once as an NBA coach, your odds of being hired again are much higher. Additionally, 13.6% of hires are internal, confirming that teams do not frequently hire from their own ranks. Second, I will explore the typical tenure of an NBA head coach. This can be visualized using a histogram. #Create histogram plt.histplt.titleplt.figtextplt.annotate', xy=, xytext=, arrowprops=dict, fontsize=9, color='black') plt.gca.spines.set_visibleplt.gca.spines.set_visibleplt.savefigplt.showcoach.sort_values#Calculate some stats with the data import numpy as np print) + " years is the median coaching tenure length.") print.sum/len)*100,1)) + " percent of coaches last five years or less.") print.sum/len*100,1)) + " percent of coaches last a year or less.") Using tenure as an indicator of success, the the data clearly shows that the large majority of coaches are unsuccessful. The median tenure is just 2.5 seasons. 18.1% of coaches last a single season or less, and barely 10% of coaches last more than 5 seasons. This can also be viewed as a survival analysis plot to see the drop-off at various points in time: #Survival analysis import matplotlib.ticker as mtick lst = np.arangesurv = pd.DataFramesurv= np.nan for i in range): surv.iloc=.sum/lenplt.stepplt.titleplt.xlabel') plt.figtextplt.gca.yaxis.set_major_formatter) plt.gca.spines.set_visibleplt.gca.spines.set_visibleplt.savefigplt.show Lastly, a box plot can be generated to see if there are any obvious differences in tenure based on coaching type. Boxplots also display outliers for each group. #Create a boxplot import seaborn as sns sns.boxplotplt.titleplt.gca.spines.set_visibleplt.gca.spines.set_visibleplt.xlabelplt.xticksplt.figtextplt.savefigplt.show There are some differences between the groups. Aside from management hires, previous head coaches have the longest average tenure at 3.3 years. However, since many of the groups have small sample sizes, we need to use more advanced techniques to test if the differences are statistically significant. Statistical Analysis First, to test if either Type or Internal has a statistically significant difference among the group means, we can use ANOVA: #ANOVA import statsmodels.api as sm from statsmodels.formula.api import ols am = ols+ C', data=coach).fitanova_table = sm.stats.anova_lmprintThe results show high p-values and low F-stats—indicating no evidence of statistically significant difference in means. Thus, the initial conclusion is that there is no evidence NBA teams are under-valuing internal candidates or over-valuing previous head coaching experience as initially hypothesized.  However, there is a possible distortion when comparing group averages. NBA coaches are signed to contracts that typically run between three and five years. Teams typically have to pay out the remainder of the contract even if coaches are dismissed early for poor performance. 
A coach that lasts two years may be no worse than one that lasts three or four years—the difference could simply be attributable to the length and terms of the initial contract, which is in turn impacted by the desirability of the coach in the job market. Since coaches with prior experience are highly coveted, they may use that leverage to negotiate longer contracts and/or higher salaries, both of which could deter teams from terminating their employment too early. To account for this possibility, the outcome can be treated as binary rather than continuous. If a coach lasted more than 5 seasons, it is highly likely they completed at least their initial contract term and the team chose to extend or re-sign them. These coaches will be treated as successes, with those having a tenure of five years or less categorized as unsuccessful. To run this analysis, all coaching hires from 2020 and 2021 must be excluded, since they have not yet been able to eclipse 5 seasons. With a binary dependent variable, a logistic regression can be used to test if any of the variables predict coaching success. Internal and Type are both converted to dummy variables. Since previous head coaches represent the most common coaching hires, I set this as the “reference” category against which the others will be measured against. Additionally, the dataset contains just one foreign-hired coachso this observation is dropped from the analysis. #Logistic regression coach3 = coach<2020] coach3.loc= np.wherecoach_type_dummies = pd.get_dummies.astypecoach_type_dummies.dropcoach3 = pd.concat#Drop foreign category / David Blatt since n = 1 coach3 = coach3.dropcoach3 = coach3.loc!= "David Blatt"] print) x = coach3] x = sm.add_constanty = coach3logm = sm.Logitlogm.r = logm.fitprint) #Convert coefficients to odds ratio print) + "is the odds ratio for internal.") #Internal coefficient print) #Management print) #Player print) #Previous AC print) #College Consistent with ANOVA results, none of the variables are statistically significant under any conventional threshold. However, closer examination of the coefficients tells an interesting story. The beta coefficients represent the change in the log-odds of the outcome. Since this is unintuitive to interpret, the coefficients can be converted to an Odds Ratio as follows: Internal has an odds ratio of 0.23—indicating that internal candidates are 77% less likely to be successful compared to external candidates. Management has an odds ratio of 2.725, indicating these candidates are 172.5% more likely to be successful. The odds ratios for players is effectively zero, 0.696 for previous assistant coaches, and 0.5 for college coaches. Since three out of four coaching type dummy variables have an odds ratio under one, this indicates that only management hires were more likely to be successful than previous head coaches. From a practical standpoint, these are large effect sizes. So why are the variables statistically insignificant? The cause is a limited sample size of successful coaches. Out of 202 coaches remaining in the sample, just 23were successful. Regardless of the coach’s background, odds are low they last more than a few seasons. If we look at the one category able to outperform previous head coachesspecifically: # Filter to management manage = coach3== 1] print) printThe filtered dataset contains just 6 hires—of which just oneis classified as a success. In other words, the entire effect was driven by a single successful observation. 
Thus, it would take a considerably larger sample size to be confident if differences exist. With a p-value of 0.202, the Internal variable comes the closest to statistical significance. Notably, however, the direction of the effect is actually the opposite of what was hypothesized—internal hires are less likely to be successful than external hires. Out of 26 internal hires, just onemet the criteria for success. Conclusion In conclusion, this analysis was able to draw several key conclusions: Regardless of background, being an NBA coach is typically a short-lived job. It’s rare for a coach to last more than a few seasons. The common wisdom that NBA teams strongly prefer to hire previous head coaches holds true. More than half of hires already had NBA head coaching experience. If teams don’t hire an experienced head coach, they’re likely to hire an NBA assistant coach. Hires outside of these two categories are especially uncommon. Though they are frequently hired, there is no evidence to suggest NBA teams overly prioritize previous head coaches. To the contrary, previous head coaches stay in the job longer on average and are more likely to outlast their initial contract term—though neither of these differences are statistically significant. Despite high-profile anecdotes, there is no evidence to suggest that internal hires are more successful than external hires either. Note: All images were created by the author unless otherwise credited. The post What Statistics Can Tell Us About NBA Coaches appeared first on Towards Data Science. #what #statistics #can #tell #about
    TOWARDSDATASCIENCE.COM
    What Statistics Can Tell Us About NBA Coaches
    Who gets hired as an NBA coach? How long does a typical coach last? And does their coaching background play any part in predicting success? This analysis was inspired by several key theories. First, there has been a common criticism among casual NBA fans that teams overly prefer hiring candidates with previous NBA head coaches experience. Consequently, this analysis aims to answer two related questions. First, is it true that NBA teams frequently re-hire candidates with previous head coaching experience? And second, is there any evidence that these candidates under-perform relative to other candidates? The second theory is that internal candidates (though infrequently hired) are often more successful than external candidates. This theory was derived from a pair of anecdotes. Two of the most successful coaches in NBA history, Gregg Popovich of San Antonio and Erik Spoelstra of Miami, were both internal hires. However, rigorous quantitative evidence is needed to test if this relationship holds over a larger sample. This analysis aims to explore these questions, and provide the code to reproduce the analysis in Python. The Data The code (contained in a Jupyter notebook) and dataset for this project are available on Github here. The analysis was performed using Python in Google Colaboratory.  A prerequisite to this analysis was determining a way to measure coaching success quantitatively. I decided on a simple idea: the success of a coach would be best measured by the length of their tenure in that job. Tenure best represents the differing expectations that might be placed on a coach. A coach hired to a contending team would be expected to win games and generate deep playoff runs. A coach hired to a rebuilding team might be judged on the development of younger players and their ability to build a strong culture. If a coach meets expectations (whatever those may be), the team will keep them around. Since there was no existing dataset with all of the required data, I collected the data myself from Wikipedia. I recorded every off-season coaching change from 1990 through 2021. Since the primary outcome variable is tenure, in-season coaching changes were excluded since these coaches often carried an “interim” tag—meaning they were intended to be temporary until a permanent replacement could be found. In addition, the following variables were collected: VariableDefinitionTeamThe NBA team the coach was hired forYearThe year the coach was hiredCoachThe name of the coachInternal?An indicator if the coach was internal or not—meaning they worked for the organization in some capacity immediately prior to being hired as head coachTypeThe background of the coach. Categories are Previous HC (prior NBA head coaching experience), Previous AC (prior NBA assistant coaching experience, but no head coaching experience), College (head coach of a college team), Player (a former NBA player with no coaching experience), Management (someone with front office experience but no coaching experience), and Foreign (someone coaching outside of North America with no NBA coaching experience).YearsThe number of years a coach was employed in the role. For coaches fired mid-season, the value was counted as 0.5. First, the dataset is imported from its location in Google Drive. I also convert ‘Internal?’ into a dummy variable, replacing “Yes” with 1 and “No” with 0. 
from google.colab import drive drive.mount('/content/drive') import pandas as pd pd.set_option('display.max_columns', None) #Bring in the dataset coach = pd.read_csv('/content/drive/MyDrive/Python_Files/Coaches.csv', on_bad_lines = 'skip').iloc[:,0:6] coach['Internal'] = coach['Internal?'].map(dict(Yes=1, No=0)) coach This prints a preview of what the dataset looks like: In total, the dataset contains 221 coaching hires over this time.  Descriptive Statistics First, basic summary Statistics are calculated and visualized to determine the backgrounds of NBA head coaches. #Create chart of coaching background import matplotlib.pyplot as plt #Count number of coaches per category counts = coach['Type'].value_counts() #Create chart plt.bar(counts.index, counts.values, color = 'blue', edgecolor = 'black') plt.title('Where Do NBA Coaches Come From?') plt.figtext(0.76, -0.1, "Made by Brayden Gerrard", ha="center") plt.xticks(rotation = 45) plt.ylabel('Number of Coaches') plt.gca().spines['top'].set_visible(False) plt.gca().spines['right'].set_visible(False) for i, value in enumerate(counts.values): plt.text(i, value + 1, str(round((value/sum(counts.values))*100,1)) + '%' + ' (' + str(value) + ')', ha='center', fontsize=9) plt.savefig('coachtype.png', bbox_inches = 'tight') print(str(round(((coach['Internal'] == 1).sum()/len(coach))*100,1)) + " percent of coaches are internal.") Over half of coaching hires previously served as an NBA head coach, and nearly 90% had NBA coaching experience of some kind. This answers the first question posed—NBA teams show a strong preference for experienced head coaches. If you get hired once as an NBA coach, your odds of being hired again are much higher. Additionally, 13.6% of hires are internal, confirming that teams do not frequently hire from their own ranks. Second, I will explore the typical tenure of an NBA head coach. This can be visualized using a histogram. #Create histogram plt.hist(coach['Years'], bins =12, edgecolor = 'black', color = 'blue') plt.title('Distribution of Coaching Tenure') plt.figtext(0.76, 0, "Made by Brayden Gerrard", ha="center") plt.annotate('Erik Spoelstra (MIA)', xy=(16.4, 2), xytext=(14 + 1, 15), arrowprops=dict(facecolor='black', shrink=0.1), fontsize=9, color='black') plt.gca().spines['top'].set_visible(False) plt.gca().spines['right'].set_visible(False) plt.savefig('tenurehist.png', bbox_inches = 'tight') plt.show() coach.sort_values('Years', ascending = False) #Calculate some stats with the data import numpy as np print(str(np.median(coach['Years'])) + " years is the median coaching tenure length.") print(str(round(((coach['Years'] <= 5).sum()/len(coach))*100,1)) + " percent of coaches last five years or less.") print(str(round((coach['Years'] <= 1).sum()/len(coach)*100,1)) + " percent of coaches last a year or less.") Using tenure as an indicator of success, the the data clearly shows that the large majority of coaches are unsuccessful. The median tenure is just 2.5 seasons. 18.1% of coaches last a single season or less, and barely 10% of coaches last more than 5 seasons. 
This can also be viewed as a survival analysis plot to see the drop-off at various points in time: #Survival analysis import matplotlib.ticker as mtick lst = np.arange(0,18,0.5) surv = pd.DataFrame(lst, columns = ['Period']) surv['Number'] = np.nan for i in range(0,len(surv)): surv.iloc[i,1] = (coach['Years'] >= surv.iloc[i,0]).sum()/len(coach) plt.step(surv['Period'],surv['Number']) plt.title('NBA Coach Survival Rate') plt.xlabel('Coaching Tenure (Years)') plt.figtext(0.76, -0.05, "Made by Brayden Gerrard", ha="center") plt.gca().yaxis.set_major_formatter(mtick.PercentFormatter(1)) plt.gca().spines['top'].set_visible(False) plt.gca().spines['right'].set_visible(False) plt.savefig('coachsurvival.png', bbox_inches = 'tight') plt.show Lastly, a box plot can be generated to see if there are any obvious differences in tenure based on coaching type. Boxplots also display outliers for each group. #Create a boxplot import seaborn as sns sns.boxplot(data=coach, x='Type', y='Years') plt.title('Coaching Tenure by Coach Type') plt.gca().spines['top'].set_visible(False) plt.gca().spines['right'].set_visible(False) plt.xlabel('') plt.xticks(rotation = 30, ha = 'right') plt.figtext(0.76, -0.1, "Made by Brayden Gerrard", ha="center") plt.savefig('coachtypeboxplot.png', bbox_inches = 'tight') plt.show There are some differences between the groups. Aside from management hires (which have a sample of just six), previous head coaches have the longest average tenure at 3.3 years. However, since many of the groups have small sample sizes, we need to use more advanced techniques to test if the differences are statistically significant. Statistical Analysis First, to test if either Type or Internal has a statistically significant difference among the group means, we can use ANOVA: #ANOVA import statsmodels.api as sm from statsmodels.formula.api import ols am = ols('Years ~ C(Type) + C(Internal)', data=coach).fit() anova_table = sm.stats.anova_lm(am, typ=2) print(anova_table) The results show high p-values and low F-stats—indicating no evidence of statistically significant difference in means. Thus, the initial conclusion is that there is no evidence NBA teams are under-valuing internal candidates or over-valuing previous head coaching experience as initially hypothesized.  However, there is a possible distortion when comparing group averages. NBA coaches are signed to contracts that typically run between three and five years. Teams typically have to pay out the remainder of the contract even if coaches are dismissed early for poor performance. A coach that lasts two years may be no worse than one that lasts three or four years—the difference could simply be attributable to the length and terms of the initial contract, which is in turn impacted by the desirability of the coach in the job market. Since coaches with prior experience are highly coveted, they may use that leverage to negotiate longer contracts and/or higher salaries, both of which could deter teams from terminating their employment too early. To account for this possibility, the outcome can be treated as binary rather than continuous. If a coach lasted more than 5 seasons, it is highly likely they completed at least their initial contract term and the team chose to extend or re-sign them. These coaches will be treated as successes, with those having a tenure of five years or less categorized as unsuccessful. To run this analysis, all coaching hires from 2020 and 2021 must be excluded, since they have not yet been able to eclipse 5 seasons. 
With a binary dependent variable, a logistic regression can be used to test if any of the variables predict coaching success. Internal and Type are both converted to dummy variables. Since previous head coaches represent the most common coaching hires, I set this as the “reference” category against which the others will be measured against. Additionally, the dataset contains just one foreign-hired coach (David Blatt) so this observation is dropped from the analysis. #Logistic regression coach3 = coach[coach['Year']<2020] coach3.loc[:, 'Success'] = np.where(coach3['Years'] > 5, 1, 0) coach_type_dummies = pd.get_dummies(coach3['Type'], prefix = 'Type').astype(int) coach_type_dummies.drop(columns=['Type_Previous HC'], inplace=True) coach3 = pd.concat([coach3, coach_type_dummies], axis = 1) #Drop foreign category / David Blatt since n = 1 coach3 = coach3.drop(columns=['Type_Foreign']) coach3 = coach3.loc[coach3['Coach'] != "David Blatt"] print(coach3['Success'].value_counts()) x = coach3[['Internal','Type_Management','Type_Player','Type_Previous AC', 'Type_College']] x = sm.add_constant(x) y = coach3['Success'] logm = sm.Logit(y,x) logm.r = logm.fit(maxiter=1000) print(logm.r.summary()) #Convert coefficients to odds ratio print(str(np.exp(-1.4715)) + "is the odds ratio for internal.") #Internal coefficient print(np.exp(1.0025)) #Management print(np.exp(-39.6956)) #Player print(np.exp(-0.3626)) #Previous AC print(np.exp(-0.6901)) #College Consistent with ANOVA results, none of the variables are statistically significant under any conventional threshold. However, closer examination of the coefficients tells an interesting story. The beta coefficients represent the change in the log-odds of the outcome. Since this is unintuitive to interpret, the coefficients can be converted to an Odds Ratio as follows: Internal has an odds ratio of 0.23—indicating that internal candidates are 77% less likely to be successful compared to external candidates. Management has an odds ratio of 2.725, indicating these candidates are 172.5% more likely to be successful. The odds ratios for players is effectively zero, 0.696 for previous assistant coaches, and 0.5 for college coaches. Since three out of four coaching type dummy variables have an odds ratio under one, this indicates that only management hires were more likely to be successful than previous head coaches. From a practical standpoint, these are large effect sizes. So why are the variables statistically insignificant? The cause is a limited sample size of successful coaches. Out of 202 coaches remaining in the sample, just 23 (11.4%) were successful. Regardless of the coach’s background, odds are low they last more than a few seasons. If we look at the one category able to outperform previous head coaches (management hires) specifically: # Filter to management manage = coach3[coach3['Type_Management'] == 1] print(manage['Success'].value_counts()) print(manage) The filtered dataset contains just 6 hires—of which just one (Steve Kerr with Golden State) is classified as a success. In other words, the entire effect was driven by a single successful observation. Thus, it would take a considerably larger sample size to be confident if differences exist. With a p-value of 0.202, the Internal variable comes the closest to statistical significance (though it still falls well short of a typical alpha of 0.05). 
Notably, however, the direction of the effect is actually the opposite of what was hypothesized—internal hires are less likely to be successful than external hires. Out of 26 internal hires, just one (Erik Spoelstra of Miami) met the criteria for success. Conclusion In conclusion, this analysis was able to draw several key conclusions: Regardless of background, being an NBA coach is typically a short-lived job. It’s rare for a coach to last more than a few seasons. The common wisdom that NBA teams strongly prefer to hire previous head coaches holds true. More than half of hires already had NBA head coaching experience. If teams don’t hire an experienced head coach, they’re likely to hire an NBA assistant coach. Hires outside of these two categories are especially uncommon. Though they are frequently hired, there is no evidence to suggest NBA teams overly prioritize previous head coaches. To the contrary, previous head coaches stay in the job longer on average and are more likely to outlast their initial contract term—though neither of these differences are statistically significant. Despite high-profile anecdotes, there is no evidence to suggest that internal hires are more successful than external hires either. Note: All images were created by the author unless otherwise credited. The post What Statistics Can Tell Us About NBA Coaches appeared first on Towards Data Science.
  • Optimizing Multi-Objective Problems with Desirability Functions

    When working in Data Science, it is not uncommon to encounter problems with competing objectives. Whether designing products, tuning algorithms or optimizing portfolios, we often need to balance several metrics to get the best possible outcome. Sometimes, maximizing one metric comes at the expense of another, making it hard to reach an overall optimized solution.

    While several approaches exist for multi-objective Optimization problems, I find desirability functions to be both elegant and easy to explain to a non-technical audience, which makes them an interesting option to consider. Desirability functions combine several metrics into a standardized score, allowing for a holistic optimization.

    In this article, we’ll explore:

    The mathematical foundation of desirability functions

    How to implement these functions in Python

    How to optimize a multi-objective problem with desirability functions

    Visualization for interpretation and explanation of the results

    To ground these concepts in a real example, we’ll apply desirability functions to bread baking: a toy problem with a few interconnected parameters and competing quality objectives that will let us explore several optimization choices.

    By the end of this article, you’ll have a powerful new tool in your data science toolkit for tackling multi-objective optimization problems across numerous domains, as well as fully functional code available here on GitHub.

    What are Desirability Functions?

    Desirability functions were first formalized by Harrington (1965) and later extended by Derringer and Suich (1980). The idea is to:

    Transform each response into a performance score between 0 (absolutely unacceptable) and 1 (the ideal value)

    Combine all scores into a single metric to maximize

    Let’s explore the types of desirability functions and then how we can combine all the scores.

    The different types of desirability functions

    There are three different types of desirability functions, which together cover most situations.

    Smaller-is-better: Used when minimizing a response is desirable

    def desirability_smaller_is_better(x: float, x_min: float, x_max: float) -> float:
        """Calculate desirability function value where smaller values are better.

        Args:
            x: Input parameter value
            x_min: Minimum acceptable value
            x_max: Maximum acceptable value

        Returns:
            Desirability score between 0 and 1
        """
        if x <= x_min:
            return 1.0
        elif x >= x_max:
            return 0.0
        else:
            return (x_max - x) / (x_max - x_min)

    Larger-is-better: Used when maximizing a response is desirable

    def desirability_larger_is_better(x: float, x_min: float, x_max: float) -> float:
        """Calculate desirability function value where larger values are better.

        Args:
            x: Input parameter value
            x_min: Minimum acceptable value
            x_max: Maximum acceptable value

        Returns:
            Desirability score between 0 and 1
        """
        if x <= x_min:
            return 0.0
        elif x >= x_max:
            return 1.0
        else:
            return (x - x_min) / (x_max - x_min)

    Target-is-best: Used when a specific target value is optimal

    def desirability_target_is_best(x: float, x_min: float, x_target: float, x_max: float) -> float:
        """Calculate two-sided desirability function value with target value.

        Args:
            x: Input parameter value
            x_min: Minimum acceptable value
            x_target: Target (optimal) value
            x_max: Maximum acceptable value

        Returns:
            Desirability score between 0 and 1
        """
        if x_min <= x <= x_target:
            return (x - x_min) / (x_target - x_min)
        elif x_target < x <= x_max:
            return (x_max - x) / (x_max - x_target)
        else:
            return 0.0

    Every input parameter can be parameterized with one of these three desirability functions, before combining them into a single desirability score.
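
    To make this concrete, here is a quick sanity check of the three functions above. The bounds and input values are arbitrary, chosen only for illustration:

    # Assumes the three desirability functions defined above are in scope;
    # the bounds below are illustrative, not taken from the article.
    print(desirability_smaller_is_better(30, 10, 100))   # (100 - 30) / (100 - 10) ≈ 0.78
    print(desirability_larger_is_better(80, 50, 100))    # (80 - 50) / (100 - 50) = 0.6
    print(desirability_target_is_best(26, 20, 24, 28))   # above the target: (28 - 26) / (28 - 24) = 0.5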

    Combining Desirability Scores

    Once individual metrics are transformed into desirability scores, they need to be combined into an overall desirability. The most common approach is the geometric mean:
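
    Concretely, for n individual scores the overall desirability D is computed as (this matches the implementation shown below):

    D = (d1^w1 × d2^w2 × … × dn^wn)^(1 / (w1 + w2 + … + wn))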

    Where di are individual desirability values and wi are weights reflecting the relative importance of each metric.

    The geometric mean has an important property: if any single desirability is 0, the overall desirability is also 0, regardless of other values. This enforces that all requirements must be met to some extent.

    def overall_desirability(desirabilities, weights=None):
        """Compute overall desirability using geometric mean

        Parameters:
        -----------
        desirabilities : list
            Individual desirability scores
        weights : list
            Weights for each desirability

        Returns:
        --------
        float
            Overall desirability score
        """
        if weights is None:
            weights = [1] * len(desirabilities)

        # Convert to numpy arrays
        d = np.array(desirabilities)
        w = np.array(weights)

        # Calculate geometric mean
        return np.prod(d ** w) ** (1 / np.sum(w))

    The weights are hyperparameters that give leverage on the final outcome and give room for customization.
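
    As a small usage sketch (the scores and weights here are made up for illustration), note how a single zero score immediately drives the overall desirability to zero:

    # Assumes overall_desirability as defined above.
    scores = [0.8, 0.6, 0.9]                  # e.g. texture, flavor, practicality
    weights = [1, 1, 2]                       # practicality weighted twice as heavily
    print(overall_desirability(scores, weights))             # ≈ 0.79
    print(overall_desirability([0.8, 0.0, 0.9], weights))    # 0.0 (one unacceptable metric kills the score)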

    A Practical Optimization Example: Bread Baking

    To demonstrate desirability functions in action, let’s apply them to a toy problem: a bread baking optimization problem.

    The Parameters and Quality Metrics

    Let’s play with the following parameters:

    Fermentation Time (30–180 minutes)

    Fermentation Temperature (20–30°C)

    Hydration Level (60–85%)

    Kneading Time (0–20 minutes)

    Baking Temperature (180–250°C)

    And let’s try to optimize these metrics:

    Texture Quality: The texture of the bread

    Flavor Profile: The flavor of the bread

    Practicality: The practicality of the whole process

    Of course, each of these metrics depends on more than one parameter. So here comes one of the most critical steps: mapping parameters to quality metrics. 

    For each quality metric, we need to define how parameters influence it:

    def compute_flavor_profile(params: List[float]) -> float:
        """Compute flavor profile score based on input parameters.

        Args:
            params: List of parameter values
                [fermentation_time, ferment_temp, hydration, kneading_time, baking_temp]

        Returns:
            Weighted flavor profile score between 0 and 1
        """
        # Flavor mainly affected by fermentation parameters
        fermentation_d = desirability_larger_is_better(params[0], 30, 180)
        ferment_temp_d = desirability_target_is_best(params[1], 20, 24, 28)
        hydration_d = desirability_target_is_best(params[2], 65, 75, 85)

        # Baking temperature has minimal effect on flavor
        weights = [0.5, 0.3, 0.2]
        return np.average([fermentation_d, ferment_temp_d, hydration_d], weights=weights)

    Here, for example, the flavor is influenced by the following:

    The fermentation time, with a minimum desirability below 30 minutes and a maximum desirability above 180 minutes

    The fermentation temperature, with a maximum desirability peaking at 24 degrees Celsius

    The hydration, with a maximum desirability peaking at 75% humidity

    These computed desirabilities are then combined in a weighted average to return the flavor score. Similar computations are made for the texture quality and practicality, as sketched below.
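
    For reference, a texture mapping might look like the following sketch. The parameter indices follow the same ordering as above, but the bounds and weights are purely illustrative assumptions, not the author’s actual values:

    from typing import List
    import numpy as np

    def compute_texture_quality(params: List[float]) -> float:
        """Illustrative sketch: texture driven mainly by hydration, kneading and baking temperature."""
        hydration_d = desirability_target_is_best(params[2], 60, 70, 85)   # assumed sweet spot at 70% hydration
        kneading_d = desirability_larger_is_better(params[3], 0, 15)       # assumed: more kneading, better structure
        baking_d = desirability_target_is_best(params[4], 180, 220, 250)   # assumed ideal oven temperature of 220°C

        weights = [0.4, 0.3, 0.3]                                          # illustrative weights
        return np.average([hydration_d, kneading_d, baking_d], weights=weights)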

    The Objective Function

    Following the desirability function approach, we’ll use the overall desirability as our objective function. The goal is to maximize this overall score, which means finding parameters that best satisfy all our three requirements simultaneously:

    def objective_function(params: List[float], weights: List[float]) -> float:
        """Compute overall desirability score based on individual quality metrics.

        Args:
            params: List of parameter values
            weights: Weights for texture, flavor and practicality scores

        Returns:
            Negative overall desirability score (for minimization)
        """
        # Compute individual desirability scores
        texture = compute_texture_quality(params)
        flavor = compute_flavor_profile(params)
        practicality = compute_practicality(params)

        # Ensure weights sum up to one
        weights = np.array(weights) / np.sum(weights)

        # Calculate overall desirability using geometric mean
        overall_d = overall_desirability([texture, flavor, practicality], weights)

        # Return negative value since we want to maximize desirability
        # but optimization functions typically minimize
        return -overall_d

    After computing the individual desirabilities for texture, flavor and practicality, the overall desirability is computed with a weighted geometric mean. The function finally returns the negative overall desirability, so that it can be minimized.
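
    A single evaluation of this objective could then look like the sketch below. The parameter values and weights are illustrative, and it assumes compute_texture_quality and compute_practicality are defined (as in the full code on GitHub):

    # Illustrative only: parameter order is [fermentation_time, ferment_temp, hydration, kneading_time, baking_temp]
    params = [120, 24, 75, 10, 220]
    weights = [1, 1, 1]                           # equal importance for texture, flavor and practicality
    print(objective_function(params, weights))    # prints the negative overall desirability for this recipe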

    Optimization with SciPy

    We finally use SciPy’s minimize function to find optimal parameters. Since we returned the negative overall desirability as the objective function, minimizing it would maximize the overall desirability:

    def optimize(weights: list[float]) -> list[float]:
        # Define parameter bounds
        bounds = {
            'fermentation_time': (1, 24),
            'fermentation_temp': (20, 30),
            'hydration_level': (60, 85),
            'kneading_time': (0, 20),
            'baking_temp': (180, 250)
        }

        # Initial guess (middle of bounds)
        x0 = [(b[0] + b[1]) / 2 for b in bounds.values()]

        # Run optimization
        result = minimize(
            objective_function,
            x0,
            args=(weights,),
            bounds=list(bounds.values()),
            method='SLSQP'
        )

        return result.x

    In this function, after defining the bounds for each parameter, the initial guess is computed as the middle of bounds, and then given as input to the minimize function of SciPy. The result is finally returned. 

    The weights are given as input to the optimizer too, and are a good way to customize the output. For example, with a larger weight on practicality, the optimized solution will focus on practicality over flavor and texture.
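
    For example, a call like the one below (the weight values are an illustration) nudges the optimizer toward the most practical recipe; the weights are ordered as in the objective function: texture, flavor, practicality.

    # Assumes optimize() as defined above.
    best_params = optimize(weights=[0.2, 0.2, 0.6])   # favor practicality
    print(best_params)   # optimized values for fermentation time/temp, hydration, kneading time, baking temp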

    Let’s now visualize the results for a few sets of weights.

    Visualization of Results

    Let’s see how the optimizer handles different preference profiles, demonstrating the flexibility of desirability functions, given various input weights.

    Let’s have a look at the results in case of weights favoring practicality:

    Optimized parameters with weights favoring practicality. Image by author.

    With weights largely in favor of practicality, the achieved overall desirability is 0.69, with a short kneading time of 5 minutes, since a high value negatively impacts practicality.

    Now, if we optimize with an emphasis on texture, we have slightly different results:

    Optimized parameters with weights favoring texture. Image by author.

    In this case, the achieved overall desirability is 0.85, significantly higher. The kneading time is now 12 minutes, as a higher value positively impacts the texture and is penalized less on practicality.

    Conclusion: Practical Applications of Desirability Functions

    While we focused on bread baking as our example, the same approach can be applied to various domains, such as product formulation in cosmetics or resource allocation in portfolio optimization.

    Desirability functions provide a powerful mathematical framework for tackling multi-objective optimization problems across numerous data science applications. By transforming raw metrics into standardized desirability scores, we can effectively combine and optimize disparate objectives.

    The key advantages of this approach include:

    Standardized scales that make different metrics comparable and easy to combine into a single target

    Flexibility to handle different types of objectives: minimize, maximize, target

    Clear communication of preferences through mathematical functions

    The code presented here provides a starting point for your own experimentation. Whether you’re optimizing industrial processes, machine learning models, or product formulations, hopefully desirability functions offer a systematic approach to finding the best compromise among competing objectives.
    The post Optimizing Multi-Objective Problems with Desirability Functions appeared first on Towards Data Science.
  • The case for cooperation 

    The Ford Pinto. New Coke. Google Glass. History is littered with products whose fatal flaw— whether failures of safety, privacy, performance, or plain old desirability—repelled consumers and inflicted reputational damage to the companies bringing them to market. 

    It’s easy to imagine the difference if these problems had been detected early on. And too often, businesses neglect the chance to work with nonprofits, social enterprises, and other public interest groups to make product improvements after they enter the marketplace or, more ideally, “upstream,” before their products have entered the crucible of the customer. 

    For companies and consumer groups alike, this is a major missed opportunity. In an increasingly competitive marketplace, partnering with public interest groups to bake an authentic pro-consumer perspective into elements like design, safety, sustainability, and functionality can provide a coveted advantage. It gives a product the chance to stand out from the crowd, already destined for glowing reviews because problems were nipped in the bud thanks to guidance and data from those focused on consumers’ interests. And for the nonprofits, working proactively with businesses to help ensure that products reflect consumers’ values from the outset means a better, safer marketplace for everyone. 

    Zoom, in a nutshell 

    We’ve already seen the difference working together can make, especially if it’s early in a product’s introduction to consumers. Just look at Zoom. The videoconferencing platform, while launched as a tool for businesses, had not been introduced to a wide consumer audience before the COVID-19 pandemic made its services a global necessity. In early 2020—as Zoom was poised to explode from 10 million monthly users to more than 300 million by April—Consumer Reports’ (CR) testing experts went under the hood in our digital lab to assess it from a consumer well-being perspective. 

    CR uncovered serious flaws. These included a protocol allowing the company to collect users’ videos, call transcripts, and chats and use them for targeted advertising, as well as features that allowed hosts to record meetings in secret and alert them when a participant clicked away from the screen. At the precipice of a moment when elementary school classrooms to therapy sessions would be conducted over Zoom, there’s no telling what the fallout might have been—for the company or its customers—had these problems persisted. 

    But CR reached out to the business—and the business reached back. Within days, Zoom had worked with CR to solve a wide array of problems, helping strengthen its case as a lifeline for users all over the world. 

    Partnerships require new ways of thinking  

    Now imagine what could be possible if such a partnership began even earlier in the process. This is the relationship CR has worked to build with businesses, providing companies our testing expertise and data about consumers’ needs and desires. Our advisory services have led to us providing feedback on prototypes, and with feedback implemented earlier in the product development lifecycle, we’ve seen immediate impact for consumers: improved comfort of leg support in vehicles; privacy policy changes for electronics; reduced fees for a basic checking account; an improved washing machine drying algorithm for one brand; improved safety of active driver assistance systems; and strengthened digital payments app scam warnings before users finalize transactions. These partnerships have proven productive, but they remain the exception to the rule. 

    Building more of those cooperative, upstream relationships will require new thinking on both sides. Advocacy organizations must adopt an entrepreneurial spirit, leveraging their insights and expertise as a collaborator to companies they’re more accustomed to critiquing. Businesses must embrace these relationships as a central part of their research and development process, understanding that embedding pro-consumer values gives them a real edge in today’s hyper-social marketplace. 

    This cooperation is especially important in the modern digital era, when many consumers are making choices that reflect their principles and where products and services are growing increasingly complex. As the rise of AI-fueled products brings a new wave of threats and vulnerabilities in its wake, it is critical that businesses and public interest groups make an effort to forge strong relationships. 

    By coming together early and often around their common interest—the consumer—they can improve products, craft strong industry standards, burnish the reputation of companies that act responsibly, and help maintain the health and integrity of the marketplace. 

    Phil Radford is president and CEO of Consumer Reports. 
  • Want to have a strategic design voice at work? Talk about desirability
    Desirability isn’t just about visual appeal: it’s one of the most important user factors. Continue reading on UX Collective »