• Selection Sort Time Complexity: Best, Worst, and Average Cases

    Development and Testing 


    Sorting is a basic task in programming. It arranges data in order. There are many sorting algorithms. Selection Sort is one of the simplest sorting methods. It is easy to understand and code. But it is not the fastest. In this guide, we will explain the Selection Sort Time Complexity. We will cover best, worst, and average cases.
    What Is Selection Sort?
    Selection Sort works by selecting the smallest element from the list. It places it in the correct position. It repeats this process for all elements. One by one, it moves the smallest values to the front.
    Let’s see an example:
    Input: [5, 3, 8, 2]
    Step 1: Smallest is 2 → swap with 5 → [2, 3, 8, 5]
    Step 2: Smallest in remaining is 3 → already correct
    Step 3: Smallest in remaining is 5 → swap with 8 → [2, 3, 5, 8]
    Now the list is sorted.
    How Selection Sort Works
    Selection Sort uses two loops. The outer loop moves one index at a time. The inner loop finds the smallest element. After each pass, the smallest value is moved to the front. The position is fixed. Selection Sort does not care if the list is sorted or not. It always does the same steps.
    Selection Sort Algorithm
    Here is the basic algorithm:

    Start from the first element
    Find the smallest in the rest of the list
    Swap it with the current element
    Repeat for each element

    This repeats until all elements are sorted.
    Selection Sort Code (Java Example)

    public class SelectionSort {
        public static void sort(int[] arr) {
            int n = arr.length;
            for (int i = 0; i < n - 1; i++) {
                int min = i;
                for (int j = i + 1; j < n; j++) {
                    if (arr[j] < arr[min]) {
                        min = j;
                    }
                }
                int temp = arr[min];
                arr[min] = arr[i];
                arr[i] = temp;
            }
        }
    }

    This code uses two loops. The outer loop runs n-1 times. The inner loop finds the minimum.
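    To make this concrete, here is a self-contained sketch of how such a class might be used. The `SelectionSortDemo` name and the driver in `main` are illustrative additions, not part of the original article; the sort method itself follows the same two-loop structure described above.

    ```java
    import java.util.Arrays;

    public class SelectionSortDemo {
        // Two-loop selection sort: the outer loop fixes one position per pass,
        // the inner loop finds the minimum of the unsorted remainder.
        static void sort(int[] arr) {
            int n = arr.length;
            for (int i = 0; i < n - 1; i++) {
                int min = i;
                for (int j = i + 1; j < n; j++) {
                    if (arr[j] < arr[min]) {
                        min = j;
                    }
                }
                // Swap the smallest remaining element into position i.
                int temp = arr[min];
                arr[min] = arr[i];
                arr[i] = temp;
            }
        }

        public static void main(String[] args) {
            int[] data = {5, 3, 8, 2};          // the article's example input
            sort(data);
            System.out.println(Arrays.toString(data)); // [2, 3, 5, 8]
        }
    }
    ```

    Running the driver reproduces the worked example from earlier: {5, 3, 8, 2} becomes {2, 3, 5, 8} after three passes.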
    Selection Sort Time Complexity
    Now let’s understand the main topic. Let’s analyze Selection Sort Time Complexity in three cases.
    1. Best Case
    Even if the array is already sorted, Selection Sort checks all elements. It performs every comparison regardless.

    Time Complexity: O(n²)
    Reason: The inner loop runs fully, regardless of the order.
    Example Input: [1, 2, 3, 4, 5]
    Even here, every comparison still happens. Only fewer swaps occur, but the comparisons remain the same.
    2. Worst Case
    This happens when the array is in reverse order. But Selection Sort does not optimize for this.

    Time Complexity: O(n²)
    Reason: It still needs the full set of comparisons.
    Example Input: [5, 4, 3, 2, 1]
    Even in reverse order, the steps are the same. It compares and finds the smallest element every time.
    3. Average Case
    This is when elements are randomly placed. It is the most common scenario in real-world problems.

    Time Complexity: O(n²)
    Reason: It still compares each element in the inner loop.
    Example Input: [3, 1, 4, 2, 5]
    Selection Sort does not change its behavior based on input order, so the complexity remains the same.
    Why Is It Always O(n²)?
    Selection Sort compares all pairs of elements. The number of comparisons does not change.
    Total comparisons = n × (n − 1) / 2
    That’s why the time complexity is always O(n²). It does not reduce steps in any case. It does not take advantage of sorted elements.
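    The comparison count can be checked empirically. The sketch below is a hypothetical instrumented variant (the `ComparisonCount` class and `countComparisons` helper are illustrative, not from the article) that tallies inner-loop comparisons for a sorted and a reverse-sorted array; both come out to n × (n − 1) / 2.

    ```java
    public class ComparisonCount {
        // Selection sort that returns how many element comparisons it made.
        static int countComparisons(int[] arr) {
            int n = arr.length;
            int comparisons = 0;
            for (int i = 0; i < n - 1; i++) {
                int min = i;
                for (int j = i + 1; j < n; j++) {
                    comparisons++;                 // one comparison per inner-loop step
                    if (arr[j] < arr[min]) {
                        min = j;
                    }
                }
                int temp = arr[min];
                arr[min] = arr[i];
                arr[i] = temp;
            }
            return comparisons;
        }

        public static void main(String[] args) {
            int sortedCount  = countComparisons(new int[]{1, 2, 3, 4, 5});
            int reverseCount = countComparisons(new int[]{5, 4, 3, 2, 1});
            // For n = 5: 5 × 4 / 2 = 10 comparisons, whatever the input order.
            System.out.println(sortedCount + " " + reverseCount); // 10 10
        }
    }
    ```

    The two totals are identical, which is exactly why the best, worst, and average cases all share the same O(n²) bound.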
    Space Complexity
    Selection Sort does not need extra space. It sorts in place.

    Space Complexity: O(1)
    Only a few variables are used
    No extra arrays or memory needed

    This is one strength of Selection Sort.
    Comparison with Other Algorithms
    Let’s compare Selection Sort with other basic sorts:
    Algorithm        Best Case     Average Case  Worst Case    Space
    Selection Sort   O(n²)         O(n²)         O(n²)         O(1)
    Bubble Sort      O(n)          O(n²)         O(n²)         O(1)
    Insertion Sort   O(n)          O(n²)         O(n²)         O(1)
    Merge Sort       O(n log n)    O(n log n)    O(n log n)    O(n)
    Quick Sort       O(n log n)    O(n log n)    O(n²)         O(log n)

    As you can see, Selection Sort is slower than Merge Sort and Quick Sort.
    Advantages of Selection Sort

    Very simple and easy to understand
    Works well with small datasets
    Needs very little memory
    Good for learning purposes

    Disadvantages of Selection Sort

    Slow on large datasets
    Always takes the same time, even if sorted
    Not efficient for real-world use

    When to Use Selection Sort
    Use Selection Sort when:

    You are working with a very small dataset
    You want to teach or learn sorting logic
    You want in-place, low-memory sorting (note that the standard swap-based version is not stable)

    Avoid it for:

    Large datasets
    Performance-sensitive programs

    Conclusion
    Selection Sort Time Complexity is simple to understand. But it is not efficient for big problems. It always takes O(n²) time, no matter the case. That is the same for best, worst, and average inputs. Still, it is useful in some cases. It’s great for learning sorting basics. It uses very little memory. If you’re working with small arrays, Selection Sort is fine. For large data, use better algorithms. Understanding its time complexity helps you choose the right algorithm. Always pick the tool that fits your task.
    Tech World Times (TWT), a global collective focusing on the latest tech news and trends in blockchain, Fintech, Development & Testing, AI and Startups. If you are looking for a guest post, contact techworldtimes@gmail.com
  • Trump’s military parade is a warning

    Donald Trump’s military parade in Washington this weekend — a show of force in the capital that just happens to take place on the president’s birthday — smacks of authoritarian Dear Leader-style politics.

    Yet as disconcerting as the imagery of tanks rolling down Constitution Avenue will be, it’s not even close to Trump’s most insidious assault on the US military’s historic and democratically essential nonpartisan ethos.

    In fact, it’s not even the most worrying thing he’s done this week.

    On Tuesday, the president gave a speech at Fort Bragg, an Army base home to Special Operations Command. While presidential speeches to soldiers are not uncommon — rows of uniformed troops make a great backdrop for a foreign policy speech — they generally avoid overt partisan attacks and campaign-style rhetoric. The soldiers, for their part, are expected to be studiously neutral, laughing at jokes and such, but remaining fully impassive during any policy conversation.

    That’s not what happened at Fort Bragg. Trump’s speech was a partisan tirade that targeted “radical left” opponents ranging from Joe Biden to Los Angeles Mayor Karen Bass. He celebrated his deployment of Marines to Los Angeles, proposed jailing people for burning the American flag, and called on soldiers to be “aggressive” toward the protesters they encountered.

    The soldiers, for their part, cheered Trump and booed his enemies — as they were seemingly expected to. Reporters at Military.com, a military news service, uncovered internal communications from 82nd Airborne leadership suggesting that the crowd was screened for their political opinions.

    “If soldiers have political views that are in opposition to the current administration and they don’t want to be in the audience then they need to speak with their leadership and get swapped out,” one note read.

    To call this unusual is an understatement. I spoke with four different experts on civil-military relations, two of whom teach at the Naval War College, about the speech and its implications. To a person, they said it was a step towards politicizing the military with no real precedent in modern American history.

    “That is, I think, a really big red flag because it means the military’s professional ethic is breaking down internally,” says Risa Brooks, a professor at Marquette University. “Its capacity to maintain that firewall against civilian politicization may be faltering.”

    This may sound alarmist — like an overreading of a one-off incident — but it’s part of a bigger pattern. The totality of Trump administration policies, ranging from the parade in Washington to the LA troop deployment to Secretary of Defense Pete Hegseth’s firing of high-ranking women and officers of color, suggests a concerted effort to erode the military’s professional ethos and turn it into an institution subservient to the Trump administration’s whims. This is a signal policy aim of would-be dictators, who wish to head off the risk of a coup and ensure the armed forces’ political reliability if they are needed to repress dissent in a crisis.

    Steve Saideman, a professor at Carleton University, put together a list of eight different signs that a military is being politicized in this fashion. The Trump administration has exhibited six out of the eight.

    “The biggest theme is that we are seeing a number of checks on the executive fail at the same time — and that’s what’s making individual events seem more alarming than they might otherwise,” says Jessica Blankshain, a professor at the Naval War College.

    That Trump is trying to politicize the military does not mean he has succeeded. There are several signs, including Trump’s handpicked chair of the Joint Chiefs repudiating the president’s claims of a migrant invasion during congressional testimony, that the US military is resisting Trump’s politicization.

    But the events in Fort Bragg and Washington suggest that we are in the midst of a quiet crisis in civil-military relations in the United States — one whose implications for American democracy’s future could well be profound.

    The Trump crisis in civil-military relations, explained

    A military is, by sheer fact of its existence, a threat to any civilian government. If you have an institution that controls the overwhelming bulk of weaponry in a society, it always has the physical capacity to seize control of the government at gunpoint. A key question for any government is how to convince the armed forces that they cannot or should not take power for themselves.

    Democracies typically do this through a process called “professionalization.” Soldiers are rigorously taught to think of themselves as a class of public servants, people trained to perform a specific job within defined parameters. Their ultimate loyalty is not to their generals or even individual presidents, but rather to the people and the constitutional order.

    Samuel Huntington, the late Harvard political scientist, is the canonical theorist of a professional military. In his book The Soldier and the State, he described optimal professionalization as a system of “objective control”: one in which the military retains autonomy in how they fight and plan for wars while deferring to politicians on whether and why to fight in the first place. In effect, they stay out of the politicians’ affairs while the politicians stay out of theirs.

    The idea of such a system is to emphasize to the military that they are professionals: Their responsibility isn’t deciding when to use force, but only to conduct operations as effectively as possible once ordered to engage in them. There is thus a strict firewall between military affairs, on the one hand, and policy-political affairs on the other.

    Typically, the chief worry is that the military breaches this bargain: that, for example, a general starts speaking out against elected officials’ policies in ways that undermine civilian control. This is not a hypothetical fear in the United States, with the most famous such example being Gen. Douglas MacArthur’s insubordination during the Korean War. Thankfully, not even MacArthur attempted the worst-case version of military overstep — a coup.

    But in backsliding democracies like the modern United States, where the chief executive is attempting an anti-democratic power grab, the military poses a very different kind of threat to democracy — in fact, something akin to the exact opposite of the typical scenario.

    In such cases, the issue isn’t the military inserting itself into politics but rather the civilians dragging them into it in ways that upset the democratic political order. The worst-case scenario is that the military acts on presidential directives to use force against domestic dissenters, destroying democracy not by ignoring civilian orders, but by following them.

    There are two ways to arrive at such a worst-case scenario, both of which are in evidence in the early days of Trump 2.0.

    First is politicization: an intentional attack on the constraints against partisan activity inside the professional ranks.

    Many of Pete Hegseth’s major moves as secretary of defense fit this bill, including his decisions to fire nonwhite and female generals seen as politically unreliable and his effort to undermine the independence of the military’s lawyers. The breaches in protocol at Fort Bragg are both consequences and causes of politicization: They could only happen in an environment of loosened constraint, and they might encourage more overt political action if gone unpunished.

    The second pathway to breakdown is the weaponization of professionalism against itself. Here, Trump exploits the military’s deference to politicians by ordering it to engage in undemocratic activities. In practice, this looks a lot like the LA deployments, and, more specifically, the lack of any visible military pushback. While the military readily agreeing to deployments is normally a good sign — that civilian control is holding — these aren’t normal times. And this isn’t a normal deployment, but rather one that comes uncomfortably close to the military being ordered to assist in repressing overwhelmingly peaceful demonstrations against executive abuses of power.

    “It’s really been pretty uncommon to use the military for law enforcement,” says David Burbach, another Naval War College professor. “This is really bringing the military into frontline law enforcement when … these are really not huge disturbances.”

    This, then, is the crisis: an incremental and slow-rolling effort by the Trump administration to erode the norms and procedures designed to prevent the military from being used as a tool of domestic repression.

    Is it time to panic?

    Among the experts I spoke with, there was consensus that the military’s professional and nonpartisan ethos was weakening. This isn’t just because of Trump, but his terms — the first to a degree, and now the second acutely — are major stressors.

    Yet there was no consensus on just how much military nonpartisanship has eroded — that is, how close we are to a moment when the US military might be willing to follow obviously authoritarian orders.

    For all its faults, the US military’s professional ethos is a really important part of its identity and self-conception. While few soldiers may actually read Sam Huntington or similar scholars, the general idea that they serve the people and the republic is a bedrock principle among the ranks. There is a reason why the United States has never, in over 250 years of governance, experienced a military coup — or even come particularly close to one.

    In theory, this ethos should also galvanize resistance to Trump’s efforts at politicization. Soldiers are not unthinking automatons: While they are trained to follow commands, they are explicitly obligated to refuse illegal orders, even coming from the president. The more aggressive Trump’s efforts to use the military as a tool of repression get, the more likely there is to be resistance.

    Or, at least, theoretically.

    The truth is that we don’t really know how the US military will respond to a situation like this. Like so many of Trump’s second-term policies, their efforts to bend the military to their will are unprecedented — actions with no real parallel in the modern history of the American military. Experts can only make informed guesses, based on their sense of US military culture as well as comparisons to historical and foreign cases.

    For this reason, there are probably only two things we can say with confidence.

    First, what we’ve seen so far is not yet sufficient evidence to declare that the military is in Trump’s thrall. The signs of decay are too limited to ground any conclusions that the longstanding professional norm is entirely gone.

    “We have seen a few things that are potentially alarming about erosion of the military’s non-partisan norm. But not in a way that’s definitive at this point,” Blankshain says.

    Second, the stressors on this tradition are going to keep piling on. Trump’s record makes it exceptionally clear that he wants the military to serve him personally — and that he, and Hegseth, will keep working to make it so. This means we really are in the midst of a quiet crisis, and will likely remain so for the foreseeable future.

    “The fact that he’s getting the troops to cheer for booing Democratic leaders at a time when there’s actually a blue city and a blue state… he is ordering the troops to take a side,” Saideman says. “There may not be a coherent plan behind this. But there are a lot of things going on that are all in the same direction.”
Here, Trump exploits the military’s deference to politicians by ordering it to engage in undemocraticactivities. In practice, this looks a lot like the LA deployments, and, more specifically, the lack of any visible military pushback. While the military readily agreeing to deployments is normally a good sign — that civilian control is holding — these aren’t normal times. And this isn’t a normal deployment, but rather one that comes uncomfortably close to the military being ordered to assist in repressing overwhelmingly peaceful demonstrations against executive abuses of power.“It’s really been pretty uncommon to use the military for law enforcement,” says David Burbach, another Naval War College professor. “This is really bringing the military into frontline law enforcement when. … these are really not huge disturbances.”This, then, is the crisis: an incremental and slow-rolling effort by the Trump administration to erode the norms and procedures designed to prevent the military from being used as a tool of domestic repression. Is it time to panic?Among the experts I spoke with, there was consensus that the military’s professional and nonpartisan ethos was weakening. This isn’t just because of Trump, but his terms — the first to a degree, and now the second acutely — are major stressors.Yet there was no consensus on just how much military nonpartisanship has eroded — that is, how close we are to a moment when the US military might be willing to follow obviously authoritarian orders.For all its faults, the US military’s professional ethos is a really important part of its identity and self-conception. While few soldiers may actually read Sam Huntington or similar scholars, the general idea that they serve the people and the republic is a bedrock principle among the ranks. 
There is a reason why the United States has never, in over 250 years of governance, experienced a military coup — or even come particularly close to one.In theory, this ethos should also galvanize resistance to Trump’s efforts at politicization. Soldiers are not unthinking automatons: While they are trained to follow commands, they are explicitly obligated to refuse illegal orders, even coming from the president. The more aggressive Trump’s efforts to use the military as a tool of repression gets, the more likely there is to be resistance.Or, at least theoretically.The truth is that we don’t really know how the US military will respond to a situation like this. Like so many of Trump’s second-term policies, their efforts to bend the military to their will are unprecedented — actions with no real parallel in the modern history of the American military. Experts can only make informed guesses, based on their sense of US military culture as well as comparisons to historical and foreign cases.For this reason, there are probably only two things we can say with confidence.First, what we’ve seen so far is not yet sufficient evidence to declare that the military is in Trump’s thrall. The signs of decay are too limited to ground any conclusions that the longstanding professional norm is entirely gone.“We have seen a few things that are potentially alarming about erosion of the military’s non-partisan norm. But not in a way that’s definitive at this point,” Blankshain says.Second, the stressors on this tradition are going to keep piling on. Trump’s record makes it exceptionally clear that he wants the military to serve him personally — and that he, and Hegseth, will keep working to make it so. 
This means we really are in the midst of a quiet crisis, and will likely remain so for the foreseeable future.“The fact that he’s getting the troops to cheer for booing Democratic leaders at a time when there’s actuallya blue city and a blue state…he is ordering the troops to take a side,” Saideman says. “There may not be a coherent plan behind this. But there are a lot of things going on that are all in the same direction.”See More: Politics #trumpampamp8217s #military #parade #warning
    WWW.VOX.COM
    Trump’s military parade is a warning
Donald Trump’s military parade in Washington this weekend — a show of force in the capital that just happens to take place on the president’s birthday — smacks of authoritarian Dear Leader-style politics (even though Trump actually got the idea after attending the 2017 Bastille Day parade in Paris).

Yet as disconcerting as the imagery of tanks rolling down Constitution Avenue will be, it’s not even close to Trump’s most insidious assault on the US military’s historic and democratically essential nonpartisan ethos. In fact, it’s not even the most worrying thing he’s done this week.

On Tuesday, the president gave a speech at Fort Bragg, an Army base home to Special Operations Command. While presidential speeches to soldiers are not uncommon — rows of uniformed troops make a great backdrop for a foreign policy speech — they generally avoid overt partisan attacks and campaign-style rhetoric. The soldiers, for their part, are expected to be studiously neutral, laughing at jokes and such, but remaining fully impassive during any policy conversation.

That’s not what happened at Fort Bragg. Trump’s speech was a partisan tirade that targeted “radical left” opponents ranging from Joe Biden to Los Angeles Mayor Karen Bass. He celebrated his deployment of Marines to Los Angeles, proposed jailing people for burning the American flag, and called on soldiers to be “aggressive” toward the protesters they encountered.

The soldiers, for their part, cheered Trump and booed his enemies — as they were seemingly expected to. Reporters at Military.com, a military news service, uncovered internal communications from 82nd Airborne leadership suggesting that the crowd was screened for their political opinions. “If soldiers have political views that are in opposition to the current administration and they don’t want to be in the audience then they need to speak with their leadership and get swapped out,” one note read.

To call this unusual is an understatement.
I spoke with four different experts on civil-military relations, two of whom teach at the Naval War College, about the speech and its implications. To a person, they said it was a step towards politicizing the military with no real precedent in modern American history.

“That is, I think, a really big red flag because it means the military’s professional ethic is breaking down internally,” says Risa Brooks, a professor at Marquette University. “Its capacity to maintain that firewall against civilian politicization may be faltering.”

This may sound alarmist — like an overreading of a one-off incident — but it’s part of a bigger pattern. The totality of Trump administration policies, ranging from the parade in Washington to the LA troop deployment to Secretary of Defense Pete Hegseth’s firing of high-ranking women and officers of color, suggests a concerted effort to erode the military’s professional ethos and turn it into an institution subservient to the Trump administration’s whims. This is a signal policy aim of would-be dictators, who wish to head off the risk of a coup and ensure the armed forces’ political reliability if they are needed to repress dissent in a crisis.

Steve Saideman, a professor at Carleton University, put together a list of eight different signs that a military is being politicized in this fashion. The Trump administration has exhibited six out of the eight.

“The biggest theme is that we are seeing a number of checks on the executive fail at the same time — and that’s what’s making individual events seem more alarming than they might otherwise,” says Jessica Blankshain, a professor at the Naval War College (speaking not for the military but in a personal capacity).

That Trump is trying to politicize the military does not mean he has succeeded.
There are several signs, including Trump’s handpicked chair of the Joint Chiefs repudiating the president’s claims of a migrant invasion during congressional testimony, that the US military is resisting Trump’s politicization. But the events in Fort Bragg and Washington suggest that we are in the midst of a quiet crisis in civil-military relations in the United States — one whose implications for American democracy’s future could well be profound.

The Trump crisis in civil-military relations, explained

A military is, by sheer fact of its existence, a threat to any civilian government. If you have an institution that controls the overwhelming bulk of weaponry in a society, it always has the physical capacity to seize control of the government at gunpoint. A key question for any government is how to convince the armed forces that they cannot or should not take power for themselves.

Democracies typically do this through a process called “professionalization.” Soldiers are rigorously taught to think of themselves as a class of public servants, people trained to perform a specific job within defined parameters. Their ultimate loyalty is not to their generals or even individual presidents, but rather to the people and the constitutional order.

Samuel Huntington, the late Harvard political scientist, is the canonical theorist of a professional military. In his book The Soldier and the State, he described optimal professionalization as a system of “objective control”: one in which the military retains autonomy in how they fight and plan for wars while deferring to politicians on whether and why to fight in the first place. In effect, they stay out of the politicians’ affairs while the politicians stay out of theirs.

The idea of such a system is to emphasize to the military that they are professionals: Their responsibility isn’t deciding when to use force, but only to conduct operations as effectively as possible once ordered to engage in them.
There is thus a strict firewall between military affairs, on the one hand, and policy-political affairs on the other.

Typically, the chief worry is that the military breaches this bargain: that, for example, a general starts speaking out against elected officials’ policies in ways that undermine civilian control. This is not a hypothetical fear in the United States, with the most famous such example being Gen. Douglas MacArthur’s insubordination during the Korean War. Thankfully, not even MacArthur attempted the worst-case version of military overstep — a coup.

But in backsliding democracies like the modern United States, where the chief executive is attempting an anti-democratic power grab, the military poses a very different kind of threat to democracy — in fact, something akin to the exact opposite of the typical scenario. In such cases, the issue isn’t the military inserting itself into politics but rather the civilians dragging them into it in ways that upset the democratic political order. The worst-case scenario is that the military acts on presidential directives to use force against domestic dissenters, destroying democracy not by ignoring civilian orders, but by following them.

There are two ways to arrive at such a worst-case scenario, both of which are in evidence in the early days of Trump 2.0.

First is politicization: an intentional attack on the constraints against partisan activity inside the professional ranks. Many of Pete Hegseth’s major moves as secretary of defense fit this bill, including his decisions to fire nonwhite and female generals seen as politically unreliable and his effort to undermine the independence of the military’s lawyers. The breaches in protocol at Fort Bragg are both consequences and causes of politicization: They could only happen in an environment of loosened constraint, and they might encourage more overt political action if gone unpunished.

The second pathway to breakdown is the weaponization of professionalism against itself.
Here, Trump exploits the military’s deference to politicians by ordering it to engage in undemocratic (and even questionably legal) activities. In practice, this looks a lot like the LA deployments, and, more specifically, the lack of any visible military pushback. While the military readily agreeing to deployments is normally a good sign — that civilian control is holding — these aren’t normal times. And this isn’t a normal deployment, but rather one that comes uncomfortably close to the military being ordered to assist in repressing overwhelmingly peaceful demonstrations against executive abuses of power.

“It’s really been pretty uncommon to use the military for law enforcement,” says David Burbach, another Naval War College professor (also speaking personally). “This is really bringing the military into frontline law enforcement when … these are really not huge disturbances.”

This, then, is the crisis: an incremental and slow-rolling effort by the Trump administration to erode the norms and procedures designed to prevent the military from being used as a tool of domestic repression.

Is it time to panic?

Among the experts I spoke with, there was consensus that the military’s professional and nonpartisan ethos was weakening. This isn’t just because of Trump, but his terms — the first to a degree, and now the second acutely — are major stressors. Yet there was no consensus on just how much military nonpartisanship has eroded — that is, how close we are to a moment when the US military might be willing to follow obviously authoritarian orders.

For all its faults, the US military’s professional ethos is a really important part of its identity and self-conception. While few soldiers may actually read Sam Huntington or similar scholars, the general idea that they serve the people and the republic is a bedrock principle among the ranks.
There is a reason why the United States has never, in over 250 years of governance, experienced a military coup — or even come particularly close to one.

In theory, this ethos should also galvanize resistance to Trump’s efforts at politicization. Soldiers are not unthinking automatons: While they are trained to follow commands, they are explicitly obligated to refuse illegal orders, even coming from the president. The more aggressive Trump’s efforts to use the military as a tool of repression get, the more likely there is to be resistance. Or, at least theoretically.

The truth is that we don’t really know how the US military will respond to a situation like this. Like so many of Trump’s second-term policies, the administration’s efforts to bend the military to its will are unprecedented — actions with no real parallel in the modern history of the American military. Experts can only make informed guesses, based on their sense of US military culture as well as comparisons to historical and foreign cases.

For this reason, there are probably only two things we can say with confidence.

First, what we’ve seen so far is not yet sufficient evidence to declare that the military is in Trump’s thrall. The signs of decay are too limited to ground any conclusions that the longstanding professional norm is entirely gone. “We have seen a few things that are potentially alarming about erosion of the military’s non-partisan norm. But not in a way that’s definitive at this point,” Blankshain says.

Second, the stressors on this tradition are going to keep piling on. Trump’s record makes it exceptionally clear that he wants the military to serve him personally — and that he, and Hegseth, will keep working to make it so.
This means we really are in the midst of a quiet crisis, and will likely remain so for the foreseeable future.

“The fact that he’s getting the troops to cheer for booing Democratic leaders at a time when there’s actually [a deployment to] a blue city and a blue state … he is ordering the troops to take a side,” Saideman says. “There may not be a coherent plan behind this. But there are a lot of things going on that are all in the same direction.”
  • Microsoft trolls Apple's new Liquid Glass UI for looking like Windows Vista

    In a nutshell: The OS updates coming to Apple devices later this year will institute the company's first major UI design shift in over a decade, but eagle-eyed observers noticed similarities with an old version of Windows – comparisons that haven't escaped Microsoft's notice. Thankfully, users concerned about Apple's upcoming interface will have options to change its visual presentation.
    Some of Microsoft's social media accounts recently poked fun at the upcoming "Liquid Glass" user interface design language Apple unveiled at WWDC this week. Although the Cupertino giant has hailed the update as a major innovation, many immediately began comparing it to Microsoft's nearly two-decade-old Windows Vista UI.
    Liquid Glass is Apple's name for the new visual style arriving in iOS 26, iPadOS 26, macOS 26 Tahoe, watchOS 26, and tvOS 26, which will launch this fall. Inspired by the Apple Vision Pro's visionOS, the design language favors rounded edges and transparent backgrounds for inputs and other UI functions.
    It is Apple's most significant design change since iOS 7 debuted almost 12 years ago, and the first to establish a unified language across all of the company's devices.
    On the left: nice Liquid Glass UI minimalistic look. On the right: Liquid Glass looking all kinds of wrong in the current beta.

    Apps, wallpapers, and other background content will be visible through app icons, notifications, and menu elements for a glass-like appearance. Apple claims that the effect will improve cohesion across the interface, but beta testers are concerned that text will become less readable.
    Others, including Microsoft, mocked the update's resemblance to Windows Vista's glass-like "Aero" aesthetic, which debuted in 2007. That OS also made UI elements partially transparent, but Microsoft eventually phased it out when it began moving toward its current design language.
    The official Windows Instagram account recently responded to Apple's presentation by posting a slideshow of Vista screenshots played over a nostalgic Windows boot tune. The Windows Twitter account also shared a picture recalling the Vista-era profile icons.
    Other social media users joined in on the fun. Some highlighted the unfortunate placement of the YouTube icon in Apple's Liquid Glass explainer video, which the company altered. Others compared the design language to the unique chassis for Apple's 2000 Power Mac G4 Cube and the main menu for Nintendo's 2012 Wii U game console.
    Fortunately, users can customize Liquid Glass by switching between transparent, light, and dark modes. They can also opt for a slightly more opaque presentation with a toggle located under Settings > Accessibility > Display & Text Size > Reduce Transparency.
    WWW.TECHSPOT.COM
  • How jam jars explain Apple’s success

We are told to customize, expand, and provide more options, but that might be a silent killer for our conversion rate. Using behavioral psychology and modern product design, this piece explains why brands like Apple use fewer, smarter choices to convert better.

Image generated using ChatGPT

Jam-packed decisions

Imagine standing in a supermarket aisle in front of the jam section. How do you decide which jam to buy? You could go for your usual jam, or maybe this is your first time buying jam. Either way, a choice has to be made. Or does it? You may have seen the vast number of choices, gotten overwhelmed, and walked away. The same scenario was reflected in the findings of a 2000 study by Iyengar and Lepper that explored how the number of choice options can affect decision-making.

Iyengar and Lepper set up two scenarios: in the first, customers in a random supermarket were offered 24 jams for a free tasting; in the second, they were offered only 6. One would expect the first scenario to see more sales. After all, more variety means a happier customer. However:

Image created using Canva

While 60% of customers stopped by for a tasting, only 3% of them ended up making a purchase. On the other hand, when faced with 6 options, only 40% of customers stopped by, but 30% of that number ended up making a purchase.

The implications of the study were evident: while one may think that more choices are better, decision-makers actually prefer fewer. This phenomenon is known as the Paradox of Choice: more choice leads to less satisfaction because one gets overwhelmed. This analysis paralysis results from humans being cognitive misers; that is, decisions that require deeper thinking feel exhausting and seem to come at a cognitive cost. In such scenarios, we tend not to make a choice at all, or we choose a default option.
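Combining the two funnel stages (the share who stop, and the share of stoppers who buy) makes the gap concrete: the large display converts far fewer passersby overall. A quick sketch using the percentages reported above:

```python
# Overall conversion per passerby = (share who stop) * (share of stoppers who buy),
# using the figures reported from the Iyengar & Lepper jam study.

def overall_conversion(stop_rate: float, buy_rate: float) -> float:
    """Fraction of all passersby who end up purchasing."""
    return stop_rate * buy_rate

large_display = overall_conversion(0.60, 0.03)  # 24-jam table
small_display = overall_conversion(0.40, 0.30)  # 6-jam table

print(f"24-jam display: {large_display:.1%} of passersby buy")
print(f"6-jam display:  {small_display:.1%} of passersby buy")
```

So even though the 24-jam table attracted more browsers, the 6-jam table converted roughly six times as many of the people who walked past it (1.8% vs. 12%).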
Even after a decision has been made, in many cases, regret or the thought of whether you have made the ‘right’ choice can linger.

A sticky situation

However, a 2010 meta-analysis by Benjamin Scheibehenne was unable to replicate the findings. Scheibehenne questioned whether it was choice overload or information overload that was the issue. Other researchers have argued that it is the lack of meaningful choice that affects satisfaction. Additionally, Barry Schwartz, a renowned psychologist and the author of the book ‘The Paradox of Choice: Why Less Is More,’ later suggested that the paradox of choice diminishes when a person knows the options well and when the choices are presented well. Does that mean the paradox of choice was an overhyped notion? I conducted a mini-study to test this hypothesis.

From shelves to spreadsheets: testing the jam jar theory

I created a simple scatterplot in R using a publicly available dataset from the Brazilian e-commerce site Olist. Olist is Brazil’s largest department store on marketplaces. After delivery, customers are asked to fill out a satisfaction survey with a rating or comment option. I analysed the relationship between the number of distinct products in a category and the average customer review.

Scatterplot generated in R using the Olist dataset

Based on the almost horizontal regression line in the plot above, it is evident that more choice does not lead to more satisfaction. Furthermore, categories with fewer than 200 products tend to have average review scores between 4.0 and 4.3, whereas categories with more than 1,000 products do not have a higher average satisfaction score, with some even falling below 4.0. This suggests that more choices do not equal more satisfaction, and may even reduce satisfaction levels. These findings support the Paradox of Choice, and the dataset helps bring the theory into real-world commerce.
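The author's analysis was done in R, but the core aggregation behind the plot (distinct products per category vs. average review score) is easy to sketch in any language. The sketch below uses Python with a few made-up rows standing in for the joined Olist tables; the tuple layout and category names here are illustrative, not the dataset's actual schema:

```python
from collections import defaultdict

# Illustrative stand-in for joined Olist order/review rows:
# (category, product_id, review_score). A real run would load the Olist CSVs.
rows = [
    ("bed_bath_table", "p1", 4.1), ("bed_bath_table", "p2", 4.3),
    ("bed_bath_table", "p3", 4.0),
    ("auto", "p4", 4.4), ("auto", "p5", 4.2),
]

products = defaultdict(set)   # distinct products seen per category
scores = defaultdict(list)    # all review scores per category

for category, product_id, score in rows:
    products[category].add(product_id)
    scores[category].append(score)

# One (x, y) point per category: x = assortment size, y = mean review score.
for category in sorted(products):
    n = len(products[category])
    avg = sum(scores[category]) / len(scores[category])
    print(f"{category}: {n} products, avg review {avg:.2f}")
```

Each printed pair corresponds to one point on the scatterplot; fitting a line through those points is what produced the near-horizontal regression described above.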
A curation of lesser, well-presented, and differentiated options could lead to more customer satisfaction.Image created using CanvaFurthermore, the plot could help suggest a more nuanced perspective; people want more choices, as this gives them autonomy. However, beyond a certain point, excessive choice overwhelms rather than empowers, leaving people dissatisfied. Many product strategies reflect this insight: the goal is to inspire confident decision-making rather than limiting freedom. A powerful example of this shift in thinking comes from Apple’s history.Simple tastes, sweeter decisionsImage source: Apple InsiderIt was 1997, and Steve Jobs had just made his return to Apple. The company at the time offered 40 different products; however, its sales were declining. Jobs made one question the company’s mantra,“What are the four products we should be building?”The following year, Apple saw itself return to profitability after introducing the iMac G3. While its success can be attributed to the introduction of a new product line and increased efficiency, one cannot deny that the reduction in the product line simplified the decision-making process for its consumers.To this day, Apple continues to implement this strategy by having a few SKUs and confident defaults.Apple does not just sell premium products; it sells a premium decision-making experience by reducing friction in decision-making for the consumer.Furthermore, a 2015 study based on analyzing scenarios where fewer choice options led to increased sales found the following mitigating factors in buying choices:Time Pressure: Easier and quicker choices led to more sales.Complexity of options: The easier it was to understand what a product was, the better the outcome.Clarity of Preference: How easy it was to compare alternatives and the clarity of one’s preferences.Motivation to Optimize: Whether the consumer wanted to put in the effort to find the ‘best’ option.Picking the right spreadWhile the extent of the 
validity of the Paradox of Choice is up for debate, its impact cannot be denied. It is still a helpful model for driving sales and boosting customer satisfaction. So how can you use it as part of your business's strategy?

Remember, what people want isn't 50 good choices. They want one confident, easy-to-understand decision that they believe they will not regret.

Here are some common mistakes that confuse consumers, and how you can apply the Jam Jar strategy to curate choices instead:

Image created using Canva

1. Too many choices lead to decision fatigue
Offering many SKU options usually overwhelms customers. Instead, curate 2–3 strong options that cover the majority of their needs.

2. Depending on users to use filters and specifications
When users have to compare specifications themselves, they usually end up doing nothing. Instead, replace filters with clear labels like "Best for beginners" or "Best for oily skin."

3. Leaving users to make comparisons by themselves
Too many options can overwhelm users. Instead, offer default options to show what you recommend. This instills a sense of confidence when they make the final decision.

4. More transparency does not always mean more trust
Information overload rarely leads to conversions. Instead, create a thoughtful flow that guides users to the right choices.

5. Users do not aim for optimization
Assuming that users will weigh every detail before making a decision is not rooted in reality. In most cases, they will go with their gut. Instead, highlight emotional outcomes, benefits, and uses rather than numbers.

6. Not onboarding users is a critical mistake
Hoping that users will easily navigate a sea of products without guidance is unrealistic. Instead, use onboarding tools like starter kits, quizzes, or bundles that act as starting points.

7. Variety for the sake of variety
Users crave clarity more than they crave variety.
Instead, focus on simplicity when it comes to differentiation.

And lastly, remember that while the paradox of choice is a helpful tool in your business strategy arsenal, more choice is not inherently bad. It is the lack of structure in the decision-making process that is the problem. Clear framing will always make decision-making a seamless experience for both your consumers and your business.

How jam jars explain Apple's success was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • BenchmarkQED: Automated benchmarking of RAG systems

    One of the key use cases for generative AI involves answering questions over private datasets, with retrieval-augmented generation as the go-to framework. As new RAG techniques emerge, there’s a growing need to benchmark their performance across diverse datasets and metrics. 
    To meet this need, we’re introducing BenchmarkQED, a new suite of tools that automates RAG benchmarking at scale, available on GitHub. It includes components for query generation, evaluation, and dataset preparation, each designed to support rigorous, reproducible testing.  
    BenchmarkQED complements the RAG methods in our open-source GraphRAG library, enabling users to run a GraphRAG-style evaluation across models, metrics, and datasets. GraphRAG uses a large language model to generate and summarize entity-based knowledge graphs, producing more comprehensive and diverse answers than standard RAG for large-scale tasks. 
    In this post, we walk through the core components of BenchmarkQED that contribute to the overall benchmarking process. We also share some of the latest benchmark results comparing our LazyGraphRAG system to competing methods, including a vector-based RAG with a 1M-token context window, where the leading LazyGraphRAG configuration showed significant win rates across all combinations of quality metrics and query classes.
    In the paper, we distinguish between local queries, where answers are found in a small number of text regions, and sometimes even a single region, and global queries, which require reasoning over large portions of or even the entire dataset. 
    Conventional vector-based RAG excels at local queries because the regions containing the answer to the query resemble the query itself and can be retrieved as the nearest neighbor in the vector space of text embeddings. However, it struggles with global questions, such as, “What are the main themes of the dataset?” which require understanding dataset qualities not explicitly stated in the text.  
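The local-query strength of vector-based RAG comes down to nearest-neighbor search over embeddings. A minimal sketch, with toy two-dimensional vectors standing in for real text-embedding output:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, chunks, k=2):
    """Return the k chunk texts whose embeddings are nearest to the query.

    `chunks` is a list of (text, embedding) pairs; in a real system the
    embeddings come from a text-embedding model and the scan is replaced
    by an approximate nearest-neighbor index.
    """
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```

A global question like "What are the main themes of the dataset?" has no single chunk that resembles it, which is why this retrieval step alone is not enough.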
    AutoQ: Automated query synthesis
    This limitation motivated the development of GraphRAG, a system designed to answer global queries. GraphRAG’s evaluation requirements subsequently led to the creation of AutoQ, a method for synthesizing these global queries for any dataset.
    AutoQ extends this approach by generating synthetic queries across the spectrum of queries, from local to global. It defines four distinct classes based on the source and scope of the query, forming a logical progression along the spectrum.
    Figure 1. Construction of a 2×2 design space for synthetic query generation with AutoQ, showing how the four resulting query classes map onto the local-global query spectrum. 
    AutoQ can be configured to generate any number and distribution of synthetic queries along these classes, enabling consistent benchmarking across datasets without requiring user customization. Figure 2 shows the synthesis process and sample queries from each class, using an AP News dataset.
    Figure 2. Synthesis process and example query for each of the four AutoQ query classes. 
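As a rough sketch of that design space, the four classes can be read as the cross product of query source and scope. The class names below match those referenced later in this post (e.g. DataLocal, ActivityLocal); the even-split budgeting helper is purely illustrative and is not BenchmarkQED's actual API:

```python
from itertools import product

# 2x2 design space: query source (data- vs. activity-grounded) crossed
# with query scope (local vs. global). The pairing is our reading of
# Figure 1, labeled as an assumption.
SOURCES = ["Data", "Activity"]
SCOPES = ["Local", "Global"]

def query_classes():
    """Enumerate the four query classes as source x scope."""
    return [src + scope for src, scope in product(SOURCES, SCOPES)]

def plan(total, distribution=None):
    """Split a total query budget over the classes (even split by default)."""
    classes = query_classes()
    if distribution is None:
        distribution = {c: 1 / len(classes) for c in classes}
    return {c: round(total * distribution[c]) for c in classes}
```

With a budget of 200 and the default even split, this yields 50 queries per class, matching the setup used in the evaluation below.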

    AutoE: Automated evaluation framework 
    Our evaluation of GraphRAG focused on analyzing key qualities of answers to global questions. The following qualities were used for the current evaluation:

    Comprehensiveness: Does the answer address all relevant aspects of the question? 
    Diversity: Does it present varied perspectives or insights? 
    Empowerment: Does it help the reader understand and make informed judgments? 
    Relevance: Does it address what the question is specifically asking?  

    The AutoE component scales evaluation of these qualities using the LLM-as-a-Judge method. It presents pairs of answers to an LLM, along with the query and target metric, in counterbalanced order. The model determines whether the first answer wins, loses, or ties with the second. Over a set of queries, whether from AutoQ or elsewhere, this produces win rates between competing methods. When ground truth is available, AutoE can also score answers on correctness, completeness, and related metrics.
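The counterbalancing step can be sketched as follows; the prompt wording and the tie-on-disagreement rule are illustrative assumptions, not BenchmarkQED's actual prompt or aggregation logic:

```python
def judge_pair(llm, query, metric, answer_a, answer_b):
    """Pairwise LLM-as-a-Judge with counterbalanced answer order.

    `llm` is any callable taking a prompt string and returning "first",
    "second", or "tie".
    """
    template = ("Query: {q}\nMetric: {m}\n"
                "Answer 1: {a1}\nAnswer 2: {a2}\n"
                "Which answer is better on this metric? (first/second/tie)")
    verdicts = []
    for a1, a2, flipped in [(answer_a, answer_b, False), (answer_b, answer_a, True)]:
        verdict = llm(template.format(q=query, m=metric, a1=a1, a2=a2))
        if flipped:  # map the swapped-order verdict back to A-vs-B terms
            verdict = {"first": "second", "second": "first", "tie": "tie"}[verdict]
        verdicts.append(verdict)
    # A judge that agrees with itself only in one order is position-biased;
    # treat disagreement between the two orders as a tie.
    return verdicts[0] if verdicts[0] == verdicts[1] else "tie"
```

Presenting each pair in both orders cancels position bias: a judge that always prefers whichever answer comes first ends up producing a tie.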
    An illustrative evaluation is shown in Figure 3. Using a dataset of 1,397 AP News articles on health and healthcare, AutoQ generated 50 queries per class. AutoE then compared LazyGraphRAG to competing RAG methods, running six trials per query across four metrics, using GPT-4.1 as a judge.
    These trial-level results were aggregated using metric-based win rates, where each trial is scored 1 for a win, 0.5 for a tie, and 0 for a loss, and then averaged to calculate the overall win rate for each RAG method.
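That aggregation is simple to state in code; `win_rate` below is an illustrative helper, not part of BenchmarkQED:

```python
def win_rate(trial_outcomes):
    """Average per-trial scores into a win rate.

    Each trial outcome is scored 1 for a win, 0.5 for a tie, and 0 for a
    loss, as described above; the mean over all trials is the win rate.
    """
    score = {"win": 1.0, "tie": 0.5, "loss": 0.0}
    return sum(score[t] for t in trial_outcomes) / len(trial_outcomes)
```

A method that wins four of six trials against a competitor, with ties counting half, lands at a 2/3 win rate; 50% is the break-even line shown in the figures.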
    Figure 3. Win rates of four LazyGraphRAG configurations across methods, broken down by the AutoQ query class and averaged across AutoE’s four metrics: comprehensiveness, diversity, empowerment, and relevance. LazyGraphRAG outperforms comparison conditions where the bar is above 50%.
    The four LazyGraphRAG conditions differ by query budget and chunk size. All used GPT-4o mini for relevance tests and GPT-4o for query expansion and answer generation, except for LGR_b200_c200_mini, which used GPT-4o mini throughout.
    Comparison systems were GraphRAG, Vector RAG with 8k- and 120k-token windows, and three published methods: LightRAG, RAPTOR, and TREX. All methods were limited to the same 8k tokens for answer generation. GraphRAG Global Search used level 2 of the community hierarchy.
    LazyGraphRAG outperformed every comparison condition using the same generative model, winning all 96 comparisons, with all but one reaching statistical significance. The best overall performance came from the larger budget, smaller chunk size configuration. For DataLocal queries, the smaller budget performed slightly better, likely because fewer chunks were relevant.
    Competing methods performed relatively better on the query classes for which they were designed: GraphRAG Global for global queries and Vector RAG for local queries. GraphRAG Drift Search, which combines both strategies, posed the strongest challenge overall.
    Increasing Vector RAG’s context window from 8k to 120k tokens did not improve its performance compared to LazyGraphRAG. This raised the question of how LazyGraphRAG would perform against Vector RAG with a 1M-token context window containing most of the dataset.
    Figure 4 shows the follow-up experiment comparing LazyGraphRAG to Vector RAG using GPT-4.1, whose long context window enabled this comparison. Even against the 1M-token window, LazyGraphRAG achieved higher win rates across all comparisons, failing to reach significance only for the relevance of answers to DataLocal queries. These queries tend to benefit most from Vector RAG’s ranking of directly relevant chunks, making it hard for LazyGraphRAG to generate answers that have greater relevance to the query, even though these answers may be dramatically more comprehensive, diverse, and empowering overall.
    Figure 4. Win rates of LazyGraphRAG over Vector RAG across different context window sizes, broken down by the four AutoQ query classes and four AutoE metrics: comprehensiveness, diversity, empowerment, and relevance. Bars above 50% indicate that LazyGraphRAG outperformed the comparison condition.
    AutoD: Automated data sampling and summarization
    Text datasets have an underlying topical structure, but the depth, breadth, and connectivity of that structure can vary widely. This variability makes it difficult to evaluate RAG systems consistently, as results may reflect the idiosyncrasies of the dataset rather than the system’s general capabilities.
    The AutoD component addresses this by sampling datasets to meet a target specification, defined by the number of topic clusters and the number of samples per cluster. This creates consistency across datasets, enabling more meaningful comparisons, as structurally aligned datasets lead to comparable AutoQ queries, which in turn support consistent AutoE evaluations.
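Sampling to such a target specification might look like the following sketch; `sample_to_spec` is an illustrative helper (not AutoD's actual API), and the topic clustering itself is assumed to have happened upstream:

```python
import random

def sample_to_spec(clustered_docs, n_clusters, per_cluster, seed=0):
    """Downsample a dataset to n_clusters topic clusters x per_cluster docs.

    `clustered_docs` maps a cluster label to its list of documents.
    """
    rng = random.Random(seed)
    # Keep only clusters large enough to fill their quota, preferring the
    # largest, so every retained cluster contributes the same sample count.
    eligible = [c for c, docs in clustered_docs.items() if len(docs) >= per_cluster]
    kept = sorted(eligible, key=lambda c: len(clustered_docs[c]), reverse=True)[:n_clusters]
    return {c: rng.sample(clustered_docs[c], per_cluster) for c in kept}
```

Two datasets sampled to the same (clusters, samples-per-cluster) spec have comparable topical breadth and depth, which is what makes their AutoQ queries and AutoE scores comparable.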
    AutoD also includes tools for summarizing input or output datasets in a way that reflects their topical coverage. These summaries play an important role in the AutoQ query synthesis process, but they can also be used more broadly, such as in prompts where context space is limited.
    Since the release of the GraphRAG paper, we’ve received many requests to share the dataset of the Behind the Tech podcast transcripts we used in our evaluation. An updated version of this dataset is now available in the BenchmarkQED repository, alongside the AP News dataset containing 1,397 health-related articles, licensed for open release.  
    We hope these datasets, together with the BenchmarkQED tools, help accelerate benchmark-driven development of RAG systems and AI question-answering. We invite the community to try them on GitHub. 
The AutoD component addresses this by sampling datasets to meet a target specification, defined by the number of topic clustersand the number of samples per cluster. This creates consistency across datasets, enabling more meaningful comparisons, as structurally aligned datasets lead to comparable AutoQ queries, which in turn support consistent AutoE evaluations. AutoD also includes tools for summarizing input or output datasets in a way that reflects their topical coverage. These summaries play an important role in the AutoQ query synthesis process, but they can also be used more broadly, such as in prompts where context space is limited. Since the release of the GraphRAG paper, we’ve received many requests to share the dataset of the Behind the Tech podcast transcripts we used in our evaluation. An updated version of this dataset is now available in the BenchmarkQED repository, alongside the AP News dataset containing 1,397 health-related articles, licensed for open release.   We hope these datasets, together with the BenchmarkQED tools, help accelerate benchmark-driven development of RAG systems and AI question-answering. We invite the community to try them on GitHub.  Opens in a new tab #benchmarkqedautomatedbenchmarking #ofrag #systems
    WWW.MICROSOFT.COM
    BenchmarkQED: Automated benchmarking of RAG systems
    One of the key use cases for generative AI involves answering questions over private datasets, with retrieval-augmented generation (RAG) as the go-to framework. As new RAG techniques emerge, there’s a growing need to benchmark their performance across diverse datasets and metrics.

    To meet this need, we’re introducing BenchmarkQED, a new suite of tools that automates RAG benchmarking at scale, available on GitHub. It includes components for query generation, evaluation, and dataset preparation, each designed to support rigorous, reproducible testing.

    BenchmarkQED complements the RAG methods in our open-source GraphRAG library, enabling users to run a GraphRAG-style evaluation across models, metrics, and datasets. GraphRAG uses a large language model (LLM) to generate and summarize entity-based knowledge graphs, producing more comprehensive and diverse answers than standard RAG for large-scale tasks.

    In this post, we walk through the core components of BenchmarkQED that contribute to the overall benchmarking process. We also share some of the latest benchmark results comparing our LazyGraphRAG system to competing methods, including a vector-based RAG with a 1M-token context window, where the leading LazyGraphRAG configuration showed significant win rates across all combinations of quality metrics and query classes.

    In the paper, we distinguish between local queries, where answers are found in a small number of text regions, and sometimes even a single region, and global queries, which require reasoning over large portions of or even the entire dataset. Conventional vector-based RAG excels at local queries because the regions containing the answer to the query resemble the query itself and can be retrieved as the nearest neighbor in the vector space of text embeddings.
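The nearest-neighbor retrieval that makes vector-based RAG strong on local queries can be sketched in a few lines. This is an illustrative toy, not BenchmarkQED code: the 3-d vectors stand in for real text embeddings, and cosine similarity is one common choice of similarity measure.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, chunk_vecs, k=2):
    """Return indices of the k chunks most similar to the query embedding."""
    ranked = sorted(range(len(chunk_vecs)),
                    key=lambda i: cosine(query_vec, chunk_vecs[i]),
                    reverse=True)
    return ranked[:k]

# Toy 3-d "embeddings": chunk 0 points nearly the same way as the query.
chunks = [[0.9, 0.1, 0.0], [0.0, 1.0, 0.0], [0.1, 0.2, 0.9]]
query = [1.0, 0.0, 0.1]
print(retrieve(query, chunks, k=1))  # → [0]
```

A query about a global property of the dataset has no single chunk that resembles it, which is exactly why this retrieval strategy breaks down on global questions.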
    However, it struggles with global questions, such as “What are the main themes of the dataset?”, which require understanding dataset qualities not explicitly stated in the text.

    AutoQ: Automated query synthesis

    This limitation motivated the development of GraphRAG, a system designed to answer global queries. GraphRAG’s evaluation requirements subsequently led to the creation of AutoQ, a method for synthesizing these global queries for any dataset. AutoQ extends this approach by generating synthetic queries across the spectrum, from local to global. It defines four distinct classes based on the source and scope of the query (Figure 1, top), forming a logical progression along the spectrum (Figure 1, bottom).

    Figure 1. Construction of a 2×2 design space for synthetic query generation with AutoQ, showing how the four resulting query classes map onto the local-global query spectrum.

    AutoQ can be configured to generate any number and distribution of synthetic queries along these classes, enabling consistent benchmarking across datasets without requiring user customization. Figure 2 shows the synthesis process and sample queries from each class, using an AP News dataset.

    Figure 2. Synthesis process and example query for each of the four AutoQ query classes.

    AutoE: Automated evaluation framework

    Our evaluation of GraphRAG focused on analyzing key qualities of answers to global questions. The following qualities were used for the current evaluation:

    Comprehensiveness: Does the answer address all relevant aspects of the question?
    Diversity: Does it present varied perspectives or insights?
    Empowerment: Does it help the reader understand and make informed judgments?
    Relevance: Does it address what the question is specifically asking?

    The AutoE component scales evaluation of these qualities using the LLM-as-a-Judge method.
    It presents pairs of answers to an LLM, along with the query and target metric, in counterbalanced order. The model determines whether the first answer wins, loses, or ties with the second. Over a set of queries, whether from AutoQ or elsewhere, this produces win rates between competing methods. When ground truth is available, AutoE can also score answers on correctness, completeness, and related metrics.

    An illustrative evaluation is shown in Figure 3. Using a dataset of 1,397 AP News articles on health and healthcare, AutoQ generated 50 queries per class (200 total). AutoE then compared LazyGraphRAG to a competing RAG method, running six trials per query across four metrics, using GPT-4.1 as a judge. These trial-level results were aggregated using metric-based win rates, where each trial is scored 1 for a win, 0.5 for a tie, and 0 for a loss, and then averaged to calculate the overall win rate for each RAG method.

    Figure 3. Win rates of four LazyGraphRAG (LGR) configurations across methods, broken down by the AutoQ query class and averaged across AutoE’s four metrics: comprehensiveness, diversity, empowerment, and relevance. LazyGraphRAG outperforms comparison conditions where the bar is above 50%. The four LazyGraphRAG conditions (LGR_b200_c200, LGR_b50_c200, LGR_b50_c600, LGR_b200_c200_mini) differ by query budget (b50, b200) and chunk size (c200, c600). All used GPT-4o mini for relevance tests and GPT-4o for query expansion (to five subqueries) and answer generation, except for LGR_b200_c200_mini, which used GPT-4o mini throughout. Comparison systems were GraphRAG (Local, Global, and Drift Search), Vector RAG with 8k- and 120k-token windows, and three published methods: LightRAG, RAPTOR, and TREX. All methods were limited to the same 8k tokens for answer generation. GraphRAG Global Search used level 2 of the community hierarchy.
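The aggregation rule described above is simple enough to reproduce exactly: score each trial 1 for a win, 0.5 for a tie, 0 for a loss, then average. A minimal sketch with made-up judge verdicts (the real AutoE pipeline obtains these from the LLM judge):

```python
SCORES = {"win": 1.0, "tie": 0.5, "loss": 0.0}

def win_rate(outcomes):
    """Average per-trial scores into an overall win rate."""
    return sum(SCORES[o] for o in outcomes) / len(outcomes)

# Six hypothetical verdicts for one query/metric pair (six trials per query).
trials = ["win", "win", "tie", "loss", "win", "tie"]
print(round(win_rate(trials), 3))  # → 0.667
```

A value above 0.5 means the method won more trials than it lost against its comparison condition, which is how the bars in Figures 3 and 4 are read.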
    LazyGraphRAG outperformed every comparison condition using the same generative model (GPT-4o), winning all 96 comparisons, with all but one reaching statistical significance. The best overall performance came from the larger-budget, smaller-chunk configuration (LGR_b200_c200). For DataLocal queries, the smaller budget (LGR_b50_c200) performed slightly better, likely because fewer chunks were relevant. For ActivityLocal queries, the larger chunk size (LGR_b50_c600) had a slight edge, likely because longer chunks provide a more coherent context.

    Competing methods performed relatively better on the query classes for which they were designed: GraphRAG Global for global queries and Vector RAG for local queries, while GraphRAG Drift Search, which combines both strategies, posed the strongest challenge overall. Increasing Vector RAG’s context window from 8k to 120k tokens did not improve its performance compared to LazyGraphRAG. This raised the question of how LazyGraphRAG would perform against Vector RAG with a 1M-token context window containing most of the dataset. Figure 4 shows this follow-up experiment, which compared LazyGraphRAG to Vector RAG using GPT-4.1, the model that enabled the larger window.

    Even against the 1M-token window, LazyGraphRAG achieved higher win rates across all comparisons, failing to reach significance only for the relevance of answers to DataLocal queries. These queries tend to benefit most from Vector RAG’s ranking of directly relevant chunks, making it hard for LazyGraphRAG to generate answers that have greater relevance to the query, even though these answers may be dramatically more comprehensive, diverse, and empowering overall.

    Figure 4. Win rates of LazyGraphRAG (LGR) over Vector RAG across different context window sizes, broken down by the four AutoQ query classes and four AutoE metrics: comprehensiveness, diversity, empowerment, and relevance. Bars above 50% indicate that LazyGraphRAG outperformed the comparison condition.
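The chunk-size parameter behind the c200 and c600 conditions simply controls how source text is split before indexing. A minimal word-level sketch (production systems typically split on model tokens, often with overlap, which this ignores):

```python
def chunk(tokens, size):
    """Split a token sequence into fixed-size chunks; the last may be shorter."""
    return [tokens[i:i + size] for i in range(0, len(tokens), size)]

# A hypothetical 450-token document split at the c200 setting.
tokens = [f"tok{i}" for i in range(450)]
chunks = chunk(tokens, 200)
print([len(c) for c in chunks])  # → [200, 200, 50]
```

Smaller chunks give the relevance tests finer-grained units to rank, while larger chunks keep more surrounding context together, matching the trade-off observed between the DataLocal and ActivityLocal results.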
    AutoD: Automated data sampling and summarization

    Text datasets have an underlying topical structure, but the depth, breadth, and connectivity of that structure can vary widely. This variability makes it difficult to evaluate RAG systems consistently, as results may reflect the idiosyncrasies of the dataset rather than the system’s general capabilities.

    The AutoD component addresses this by sampling datasets to meet a target specification, defined by the number of topic clusters (breadth) and the number of samples per cluster (depth). This creates consistency across datasets, enabling more meaningful comparisons, as structurally aligned datasets lead to comparable AutoQ queries, which in turn support consistent AutoE evaluations. AutoD also includes tools for summarizing input or output datasets in a way that reflects their topical coverage. These summaries play an important role in the AutoQ query synthesis process, but they can also be used more broadly, such as in prompts where context space is limited.

    Since the release of the GraphRAG paper, we’ve received many requests to share the dataset of the Behind the Tech podcast transcripts we used in our evaluation. An updated version of this dataset is now available in the BenchmarkQED repository, alongside the AP News dataset containing 1,397 health-related articles, licensed for open release.

    We hope these datasets, together with the BenchmarkQED tools, help accelerate benchmark-driven development of RAG systems and AI question-answering. We invite the community to try them on GitHub.
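AutoD's target specification (topic clusters for breadth, samples per cluster for depth) is easy to picture as a sampling routine. A minimal sketch, assuming documents have already been clustered by topic; the function name and corpus are hypothetical, not the actual AutoD API:

```python
import random

def sample_to_spec(docs_by_cluster, n_clusters, per_cluster, seed=0):
    """Sample a corpus down to a target spec: n_clusters topic clusters
    (breadth) with per_cluster documents drawn from each (depth)."""
    rng = random.Random(seed)
    # Only clusters deep enough to satisfy the spec are eligible.
    eligible = [c for c, docs in docs_by_cluster.items()
                if len(docs) >= per_cluster]
    chosen = rng.sample(eligible, n_clusters)
    return {c: rng.sample(docs_by_cluster[c], per_cluster) for c in chosen}

# Hypothetical clustered corpus: topic label -> document ids.
corpus = {
    "vaccines": [f"vax_{i}" for i in range(10)],
    "insurance": [f"ins_{i}" for i in range(8)],
    "nutrition": [f"nut_{i}" for i in range(3)],  # too shallow for the spec
}
sampled = sample_to_spec(corpus, n_clusters=2, per_cluster=5)
print({c: len(docs) for c, docs in sampled.items()})
```

Two corpora sampled to the same specification end up structurally aligned, which is what makes the downstream AutoQ queries and AutoE win rates comparable across datasets.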
  • Former ‘Grand Theft Auto’ Chief Leslie Benzies ‘Can’t Wait’ to Play ‘GTA 6,’ Downplays Similarities to His New Studio’s ‘MindsEye’

    Next week, the former president of “Grand Theft Auto” maker Rockstar North launches his first title since leaving the Take-Two Interactive-owned video game developer and opening his own studio, Build A Rocket Boy: the AAA narrative-driven action-adventure thriller “MindsEye.”

    Published by IOI Partners, the team behind the “Hitman” franchise, the Unreal Engine 5-built game will debut June 10 across PlayStation 5, Xbox Series X and S, and on PC via Steam and Epic Games Store, with a $59.99 price tag for the standard edition.


    Set in the near-futuristic city of Redrock, “MindsEye” puts players into the role of Jacob Diaz, a former soldier haunted by fragmented memories from his mysterious MindsEye neural implant, as he uncovers a conspiracy involving rogue AI, corporate greed, an unchecked military, and a threat so sinister that it endangers the very survival of humanity.


    But the base story isn’t the biggest draw for “MindsEye,” which includes Build A Rocket Boy’s proprietary Game Creation System, which enables players to, well, “craft anything in their minds eye.”

    Per the studio, “Players can craft their own experiences using all of the ‘MindsEye’ assets, creating everything from custom missions to entirely new scenarios within the game’s expansive, richly detailed world. Whether you’re designing a high-speed chase through Redrock’s bustling cityscapes or a stealth mission in its industrial outskirts, it is designed to be intuitive and easy to use, ensuring that players of all skill levels can bring their imagination to life.”

    Benzies’ Edinburgh-based Build A Rocket Boy has promised that “fresh premium content” will roll out monthly for the game, including regular releases of new missions, challenges, and game assets.

    While “MindsEye” is the first title from Benzies since he launched BARB after leaving Rockstar in 2016 (Benzies was the lead “Grand Theft Auto” developer across the third through fifth games in the franchise, as well as “Grand Theft Auto Online,” and was in a legal battle with parent company Take-Two over unpaid royalties from 2016 until 2019), it’s just step one in the prolific producer’s plan to shake up the gaming industry.

    “At Build A Rocket Boy, our vision goes far beyond a single title,” Benzies told Variety. “‘MindsEye’ is the first episode and central story around which ever-expanding interconnected episodes will span. We’re already working on future episodes, which will introduce alternate realities while maintaining its core themes of hope, redemption, and the intrigue of civilizations past and future, drawing from the lore and multiverse concepts.”

    See Variety‘s full interview with Benzies below, including the inevitable comparisons that will be drawn between “MindsEye” and the aesthetic of the “GTA” franchise, and his hopes for Rockstar Games’ highly anticipated and much-delayed “GTA 6.”

    Where did the concept for “MindsEye” come from?

    I pull a lot of inspiration from the real world. Watching the actions of humans – their foibles and their virtues. Watching the advancement of technology and how we adapt, or indeed, do not adapt. We’ve been moving to an automated world for many years now, and the impact on humans, especially with recent advancements in AI, which serves as good fodder for a story and even better for a video game. I think we all have this little nagging feeling about how humans and AI will blend together in the future—will it go smoothly, or will it turn sinister?

    We’re fans of all different types of media, and we’ve drawn influence from cinematic visionaries like Ridley Scott, Paul Greengrass, Christopher Nolan, and J.J. Abrams, and films like “The Bourne Identity,” “Memento,” and TV series “Lost” — they’re all exploring memory, perception, and control in their own ways.

    So, while we nod to those influences here and there, we wanted to build something that feels fresh, grounded in today’s world, but still asking the kinds of questions that have always made this genre powerful.

    With your “GTA” roots, obvious comparisons are already being drawn between the style and aesthetic of that franchise and “MindsEye.”

    Comparisons will always be made—it’s the way human beings pigeonhole concepts. But “MindsEye” isn’t built to fit into anyone else’s box.

    Many games share the same core elements: cars, guns, cities, and charismatic characters, and differentiation is even tougher in today’s entertainment landscape. Streaming, social media, and on-demand binge culture have fractured attention spans, and consumer mindshare is a brutal battlefield for all IP.

    Our industry continues to celebrate each other’s breakthroughs, and I’m proud that our collective innovation is advancing the medium of gaming, even if our paths diverge.

    As an independent studio we have the freedom to break ground in experimental new ways and the challenge is balancing innovation with familiarity—too much “new” risks alienating fans, too much “same” feels stale. It’s about nailing what makes your game’s world feel alive and urgent.

    “MindsEye” is about consequence and connection—it’s cinematic, reactive, and meant to feel like a world you’re not just playing in, but able to create in it too.

    We’re excited to see what they’ve crafted with “GTA VI,” and I can’t wait to play it as a consumer for the first time. They’re always delivering something new, unique and at a scale that very few can pull off.

    What does MindsEye represent in BARB’s larger vision and long-term strategy? Are you plotting this out as a multi-game franchise or your first standalone?

    At Build A Rocket Boy, our vision goes far beyond a single title. “MindsEye” is the first episode and central story around which ever-expanding interconnected episodes will span. We’re already working on future episodes, which will introduce alternate realities while maintaining its core themes of hope, redemption, and the intrigue of civilizations past and future, drawing from the lore and multiverse concepts.

    It’s the future of entertainment to allow active participation so players feel like they have agency and can immerse themselves in our world as they want to. We are introducing three products in one game that will revolutionize AAA-quality interactive gaming and storytelling: “MindsEye” narrative story, Play.MindsEye, and Build.MindsEye.

    In our tightly crafted action-noir “MindsEye” narrative story, we have rips in time accessed through portals at strategic points throughout the game – so while you play as Jacob Diaz on his personal journey, players can also explore side stories and delve deeper into the backstories of characters they encounter along the way. In this way we are delivering companion content at the same time as the anchor content, weaving a rich narrative tapestry which will continue to evolve and expand, giving greater depth to characters so you understand their personality and motivations.

    How do digital products Play.MindsEye and Build.MindsEye tie in to plans for “MindsEye” and what BARB wants to offer gamers?

    In this new era of entertainment, where streaming platforms, boom-and-bust games, and an on-demand culture dominate, we’re pushing things in a new direction—with an interface that simplifies how we consume not just games, but all forms of entertainment. Consumers are moving away from 2D browsing into fully 3D, immersive experiences. Put simply, we’re shifting from passive interaction to active participation.

    As with all new products, things evolve. Arcadia was originally envisioned as our creation platform, but as we continued developing “MindsEye” and building out BARB’s ecosystem, it naturally grew into something more focused— Play.MindsEye and Build.MindsEye. Play delivers cinematic, high-intensity gameplay with missions and maps that constantly evolve. Build gives players intuitive tools to create their own content—no technical skills required, just imagination and intent.

    For BARB to fully realize our vision, we had to beta test our creation system with a community of builders in real-time and started with Everywhere while we were in stealth mode developing MindsEye.

    How did you settle on IOI as publishing partner?

    We’ve always found the way IOI handled the “Hitman” franchise interesting. They are one of the few publishers that have taken their single-player IP and increased their player count and amplified their community culture over time. From a technology point of view, their one executable approach for all of their content is very smart, and we always planned to have a similar approach, which encouraged us to join forces.

    This interview has been edited and condensed.
    #former #grand #theft #auto #chief
    Former ‘Grand Theft Auto’ Chief Leslie Benzies ‘Can’t Wait’ to Play ‘GTA 6,’ Downplays Similarities to His New Studio’s ‘MindsEye’
    Next week, the former president of “Grant Theft Auto” maker Rockstar North launches his first title since leaving the Take-Two Interactive-owned video game developer and opening his own studio, Build A Rocket Boy: the AAA narrative-driven action-adventure thriller “MindsEye.” Published by IOI Partners, the team behind the “Hitman” franchise, the Unreal Engine 5-built game will debut June 10 across PlayStation 5, Xbox Series X and S, and on PC via Steam and Epic Games Store with a price tag for the standard edition. Related Stories Set in the near-futuristic city of Redrock, “MindsEye” puts players into the role of Jacob Diaz, a former soldier haunted by fragmented memories from his mysterious MindsEye neural implant, as he uncovers a conspiracy involving rogue AI, corporate greed, an unchecked military, and a threat so sinister that it endangers the very survival of humanity. Popular on Variety But the base story isn’t the biggest draw for “MindsEye,” which includes Build A Rocket Boy’s proprietary Game Creation System, that enables players to, well, “craft anything in their minds eye.” Per the studio, “Players can craft their own experiences using all of the ‘MindsEye’ assets, creating everything from custom missions to entirely new scenarios within the game’s expansive, richly detailed world. Whether you’re designing a high-speed chase through Redrock’s bustling cityscapes or a stealth mission in its industrial outskirts, it is designed to be intuitive and easy to use, ensuring that players of all skill levels can bring their imagination to life.” Benzies’ Edinburgh-based Build A Rocket Boy has promised “fresh premium content” will rollout monthly for the game, including regular releases of new missions, challenges and game assets. While “MindsEye” is the first title from Benzies since he launched BARB after leaving Rockstar in 2016, it’s just step one in the prolific producer’s plan to shake up the gaming industry. 
“At Build A Rocket Boy, our vision goes far beyond a single title,” Benzies told Variety. “‘MindsEye’ is the first episode and central story around which ever-expanding interconnected episodes will span. We’re already working on future episodes, which will introduce alternate realities while maintaining it’s core themes of hope, redemption, and the intrigue of civilizations past and future, drawing from the lore and multiverse concepts.” See Variety‘s full interview with Benzies below, including the inevitable comparisons that will be drawn between “MindsEye” and the aesthetic of the “GTA” franchise, and his hopes for Rockstar Games’ highly anticipated and much-delayed “GTA 6.” Where did the concept for “MindsEye” come from? I pull a lot of inspiration from the real world. Watching the actions of humans – their foibles and their virtues. Watching the advancement of technology and how we adapt, or indeed, do not adapt. We’ve been moving to an automated world for many years now, and the impact on humans, especially with recent advancements in AI, which serves as good fodder for a story and even better for a video game. I think we all have this little nagging feeling about how humans and AI will blend together in the future—will it go smoothly, or will it turn sinister? We’re fans of all different types of media, and we’ve drawn influence from cinematic visionaries like Ridley Scott, Paul Greengrass, Christopher Nolan, and J.J. Abrams, and films like “The Bourne Identity,” “Memento,” and TV series “Lost” — they’re all exploring memory, perception, and control in their own ways. So, while we nod to those influences here and there, we wanted to build something that feels fresh, grounded in today’s world, but still asking the kinds of questions that have always made this genre powerful. 
With your “GTA” roots, obvious comparisons are already being drawn between the style and aesthetic of that franchise and “MindsEye.” Comparisons will always be made—it’s the way human beings pigeonhole concepts. But “MindsEye” isn’t built to fit into anyone else’s box. Many games share the same core elements: cars, guns, cities, and charismatic characters, and differentiation is even tougher in today’s entertainment landscape. Streaming, social media, and on-demand binge culture have fractured attention spans, and consumer mindshare is a brutal battlefield for all IP. Our industry continues to celebrate each other’s breakthroughs, and I’m proud that our collective innovation is advancing the medium of gaming, even if our paths diverge. As an independent studio we have the freedom to break ground in experimental new ways and the challenge is balancing innovation with familiarity—too much “new” risks alienating fans, too much “same” feels stale. It’s about nailing what makes your game’s world feel alive and urgent. “MindsEye” is about consequence and connection—it’s cinematic, reactive, and meant to feel like a world you’re not just playing in, but able to create in it too. We’re excited to see what they’ve crafted with “GTA VI ,” and I can’t wait to play it as a consumer for the first time. They’re always delivering something new, unique and at a scale that very few can pull off. What does MindsEye represent in BARB’s larger vision and long-term strategy? Are you plotting this out as a multi-game franchise or your first standalone? At Build A Rocket Boy, our vision goes far beyond a single title. “MindsEye” is the first episode and central story around which ever-expanding interconnected episodes will span. We’re already working on future episodes, which will introduce alternate realities while maintaining it’s core themes of hope, redemption, and the intrigue of civilizations past and future, drawing from the lore and multiverse concepts. 
It’s the future of entertainment to allow active participation so players feel like they have agency and can immerse themselves in our world as they want to. We are introducing three products in one game that will revolutionize AAA-quality interactive gaming and storytelling: “MindsEye” narrative story, Play.MindsEye, and Build.MindsEye. In our tightly crafted action-noir, “MindsEye” narrative story we have rips in time accessed through portals at strategic points throughout the game – so while you play as Jacob Diaz on his personal journey, players can also explore side stories and delve deeper into the backstories of characters they encounter along the way. In this way we are delivering companion content at the same time as the anchor content, weaving a rich narrative tapestry which will continue to evolve and expand giving greater depth to characters so you understand their personality and motivations. How do digital products Play.MindsEyeand Build.MindsEyetie in to plans for “MindsEye” and what BARB wants to offer gamers? In this new era of entertainment, where streaming platforms, boom-and-bust games, and an on-demand culture dominate, we’re pushing things in a new direction—with an interface that simplifies how we consume not just games, but all forms of entertainment. Consumers are moving away from 2D browsing into fully 3D, immersive experiences. Put simply, we’re shifting from passive interaction to active participation. As with all new products, things evolve. Arcadia was originally envisioned as our creation platform, but as we continued developing “MindsEye” and building out BARB’s ecosystem, it naturally grew into something more focused— Play.MindsEye and Build.MindsEye. Play delivers cinematic, high-intensity gameplay with missions and maps that constantly evolve. Build gives players intuitive tools to create their own content—no technical skills required, just imagination and intent. 
For BARB to fully realize our vision, we had to beta test our creation system with a community of builders in real-time and started with Everywhere while we were in stealth mode developing MindsEye. How did you settle on IOI as publishing partner? We’ve always found the way IOI handled the “Hitman” franchise interesting. They are one of the few publishers that have taken their single-player IP and increased their player count and amplified their community culture over time. From a technology point of view, their one executable approach for all of their content is very smart, and we always planned to have a similar approach, which encouraged us to join forces. This interview has been edited and condensed. #former #grand #theft #auto #chief
    VARIETY.COM
    Former ‘Grand Theft Auto’ Chief Leslie Benzies ‘Can’t Wait’ to Play ‘GTA 6,’ Downplays Similarities to His New Studio’s ‘MindsEye’
    Next week, the former president of “Grant Theft Auto” maker Rockstar North launches his first title since leaving the Take-Two Interactive-owned video game developer and opening his own studio, Build A Rocket Boy: the AAA narrative-driven action-adventure thriller “MindsEye.” Published by IOI Partners, the team behind the “Hitman” franchise, the Unreal Engine 5-built game will debut June 10 across PlayStation 5, Xbox Series X and S, and on PC via Steam and Epic Games Store with a $59.99 price tag for the standard edition. Related Stories Set in the near-futuristic city of Redrock, “MindsEye” puts players into the role of Jacob Diaz, a former soldier haunted by fragmented memories from his mysterious MindsEye neural implant, as he uncovers a conspiracy involving rogue AI, corporate greed, an unchecked military, and a threat so sinister that it endangers the very survival of humanity. Popular on Variety But the base story isn’t the biggest draw for “MindsEye,” which includes Build A Rocket Boy’s proprietary Game Creation System, that enables players to, well, “craft anything in their minds eye.” Per the studio, “Players can craft their own experiences using all of the ‘MindsEye’ assets, creating everything from custom missions to entirely new scenarios within the game’s expansive, richly detailed world. Whether you’re designing a high-speed chase through Redrock’s bustling cityscapes or a stealth mission in its industrial outskirts, it is designed to be intuitive and easy to use, ensuring that players of all skill levels can bring their imagination to life.” Benzies’ Edinburgh-based Build A Rocket Boy has promised “fresh premium content” will rollout monthly for the game, including regular releases of new missions, challenges and game assets. 
While “MindsEye” is the first title from Benzies since he launched BARB after leaving Rockstar in 2016 (Benzies was the lead “Grand Theft Auto” developer across the third through fifth games in the franchise, as well as “Grand Theft Auto Online,” and was in a legal battle with parent company Take-Two over unpaid royalties from 2016 until 2019), it’s just step one in the prolific producer’s plan to shake up the gaming industry. “At Build A Rocket Boy, our vision goes far beyond a single title,” Benzies told Variety. “‘MindsEye’ is the first episode and central story around which ever-expanding interconnected episodes will span. We’re already working on future episodes, which will introduce alternate realities while maintaining its core themes of hope, redemption, and the intrigue of civilizations past and future, drawing from the lore and multiverse concepts.” See Variety‘s full interview with Benzies below, including the inevitable comparisons that will be drawn between “MindsEye” and the aesthetic of the “GTA” franchise, and his hopes for Rockstar Games’ highly anticipated and much-delayed “GTA 6.” Where did the concept for “MindsEye” come from? I pull a lot of inspiration from the real world. Watching the actions of humans – their foibles and their virtues. Watching the advancement of technology and how we adapt, or indeed, do not adapt. We’ve been moving to an automated world for many years now, and the impact on humans, especially with recent advancements in AI, serves as good fodder for a story and even better for a video game. I think we all have this little nagging feeling about how humans and AI will blend together in the future—will it go smoothly, or will it turn sinister? We’re fans of all different types of media, and we’ve drawn influence from cinematic visionaries like Ridley Scott, Paul Greengrass, Christopher Nolan, and J.J. 
Abrams, and films like “The Bourne Identity,” “Memento,” and TV series “Lost” — they’re all exploring memory, perception, and control in their own ways. So, while we nod to those influences here and there, we wanted to build something that feels fresh, grounded in today’s world, but still asking the kinds of questions that have always made this genre powerful. With your “GTA” roots, obvious comparisons are already being drawn between the style and aesthetic of that franchise and “MindsEye.” Comparisons will always be made—it’s the way human beings pigeonhole concepts. But “MindsEye” isn’t built to fit into anyone else’s box. Many games share the same core elements: cars, guns, cities, and charismatic characters, and differentiation is even tougher in today’s entertainment landscape. Streaming, social media, and on-demand binge culture have fractured attention spans, and consumer mindshare is a brutal battlefield for all IP. Our industry continues to celebrate each other’s breakthroughs, and I’m proud that our collective innovation is advancing the medium of gaming, even if our paths diverge. As an independent studio, we have the freedom to break ground in experimental new ways, and the challenge is balancing innovation with familiarity—too much “new” risks alienating fans, too much “same” feels stale. It’s about nailing what makes your game’s world feel alive and urgent. “MindsEye” is about consequence and connection—it’s cinematic, reactive, and meant to feel like a world you’re not just playing in, but able to create in it too. We’re excited to see what they’ve crafted with “GTA VI,” and I can’t wait to play it as a consumer for the first time. They’re always delivering something new, unique and at a scale that very few can pull off. What does MindsEye represent in BARB’s larger vision and long-term strategy? Are you plotting this out as a multi-game franchise or your first standalone? At Build A Rocket Boy, our vision goes far beyond a single title. 
“MindsEye” is the first episode and central story around which ever-expanding interconnected episodes will span. We’re already working on future episodes, which will introduce alternate realities while maintaining its core themes of hope, redemption, and the intrigue of civilizations past and future, drawing from the lore and multiverse concepts. It’s the future of entertainment to allow active participation so players feel like they have agency and can immerse themselves in our world as they want to. We are introducing three products in one game that will revolutionize AAA-quality interactive gaming and storytelling: “MindsEye” narrative story, Play.MindsEye, and Build.MindsEye. In our tightly crafted action-noir “MindsEye” narrative story, we have rips in time accessed through portals at strategic points throughout the game – so while you play as Jacob Diaz on his personal journey, players can also explore side stories and delve deeper into the backstories of characters they encounter along the way. In this way we are delivering companion content at the same time as the anchor content, weaving a rich narrative tapestry which will continue to evolve and expand, giving greater depth to characters so you understand their personality and motivations. How do digital products Play.MindsEye (formerly named Arcadia) and Build.MindsEye (formerly Everywhere) tie in to plans for “MindsEye” and what BARB wants to offer gamers? In this new era of entertainment, where streaming platforms, boom-and-bust games, and an on-demand culture dominate, we’re pushing things in a new direction—with an interface that simplifies how we consume not just games, but all forms of entertainment. Consumers are moving away from 2D browsing into fully 3D, immersive experiences. Put simply, we’re shifting from passive interaction to active participation. As with all new products, things evolve. 
Arcadia was originally envisioned as our creation platform, but as we continued developing “MindsEye” and building out BARB’s ecosystem, it naturally grew into something more focused— Play.MindsEye and Build.MindsEye. Play delivers cinematic, high-intensity gameplay with missions and maps that constantly evolve. Build gives players intuitive tools to create their own content—no technical skills required, just imagination and intent. For BARB to fully realize our vision, we had to beta test our creation system with a community of builders in real-time and started with Everywhere while we were in stealth mode developing MindsEye. How did you settle on IOI as publishing partner? We’ve always found the way IOI handled the “Hitman” franchise interesting. They are one of the few publishers that have taken their single-player IP and increased their player count and amplified their community culture over time. From a technology point of view, their one executable approach for all of their content is very smart, and we always planned to have a similar approach, which encouraged us to join forces. This interview has been edited and condensed.
  • AMD’s RX 9060 XT 8GB Gamble: Why Gamers Are Furious, and They’re Not Wrong

    Key Takeaways

    AMD’s RX 9060 XT is set to launch on June 5th, 2025 in both 8GB and 16GB versions under the same name, creating confusion and backlash.
    Reviewers and gamers say 8GB of VRAM isn’t enough for modern gaming, especially at 1440p.
    AMD’s decision to showcase only the 16GB model in benchmarks raised concerns about transparency.
    This move mirrors Nvidia’s controversial RTX 4060 Ti rollout, suggesting an industry trend of misleading GPU marketing.

    It all started with a new GPU announcement. The AMD Radeon RX 9060 XT is set to launch, and on paper, it looks like a solid move.
    A $349 graphics card with 16GB of VRAM? Not bad. That’s more memory than some RTX 4070 cards. Sounds like AMD might finally be delivering some value again, right? 
    Well, yes and no. 
    Because right alongside that 16GB version, AMD is also releasing an 8GB version for $299. Same name, same chip, half the memory. And that’s where the internet lost it. 
    Déjà Vu: We’ve Seen This Trick Before
    If this sounds familiar, it’s because Nvidia pulled the same move with the RTX 4060 Ti. 
    They sold both 8GB and 16GB versions with the same branding, but a $100 price difference. The RTX 4060 Ti 8GB launched in May 2023, and the 16GB variant followed in July. 

    Source: Nvidia
    Gamers hated the confusion. Reviewers criticized the 8GB version’s lack of performance, especially in memory-heavy games, and the way Nvidia tried to sweep the difference under the rug. 
    Performance dipped significantly at 1440p, and stuttering was a problem even in some 1080p titles.
    The backlash was swift. Tech media slammed Nvidia for deceptive marketing, and buyers were left second-guessing which version they were getting. 
    We’ve seen this pattern before in Nvidia’s review restrictions around the RTX 5060, where early coverage was shaped by what reviewers were allowed to test – and what they weren’t. 
    It led to a mess of misinformation, bad value perceptions, and a very clear message: don’t confuse your customers. So naturally, AMD did it too. 
    It’s like watching two billion-dollar companies playing a game of ‘Who Can Confuse the Customer More.’ It’s not just about the money. It’s about trust, and AMD just dumped a bunch of it off a cliff. 
    Frank Azor Lights the Fuse on X
    The backlash started when AMD’s Director of Gaming Marketing, Frank Azor, took to X to defend the 8GB card. 

    He said that most gamers don’t need more than 8GB of VRAM and that the cheaper card still serves the mainstream crowd just fine. 
    It’s the same reasoning Nvidia used last year with the RTX 4060 Ti. That didn’t work then, and it isn’t working now. 
    Because when Steve from Hardware Unboxed sees a bad take like that, you know a flamethrower video is coming. And oh boy, did it come. 
    Hardware Unboxed Fires Back
    The backlash against AMD’s 8GB RX 9060 XT took off after a post from Hardware Unboxed on X called out the company’s defense of limited VRAM. 
    In response to AMD’s claim that most gamers don’t need more than 8GB of memory, Hardware Unboxed accused them of misleading buyers and building weaker products just to hit certain price points.

    The criticism gained traction fast. Tech YouTuber Vex picked up the story and added fuel to the fire by showing side-by-side gameplay comparisons. 
    In multiple games, the 8GB RX 9060 XT showed serious performance issues – stuttering, frame drops, and VRAM bottlenecks – while the 16GB version handled the same titles smoothly. 
    And yet, during the GPU’s official reveal, AMD only showed performance data for the 16GB card. There were no benchmarks for the 8GB version – not a single chart. That omission wasn’t lost on anyone.
    If AMD truly believed the 8GB model held up under modern gaming loads, they would have shown it. The silence speaks volumes. 
    Why This Actually Matters
    You might be thinking: ‘So what? Some games still run fine on 8GB. I only play Valorant.’ Sure. But the problem is bigger than that.

    Source: AMD
    Games are getting heavier. Even titles like Cyberpunk 2077, released in 2020, can eat up more than 8GB of VRAM. And with GTA 6 on the horizon, do you really think game developers are going to keep optimizing for 8GB cards in 2025?
    That’s not how game development works. Developers target the most common setups, yes. But hardware also shapes software. 
    If everyone’s stuck with 8GB, games will be designed around that limit. That holds back progress for everyone. 
    It’s like trying to make a movie with a flip phone because some people still own one.
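The VRAM-pressure argument above can be made concrete with a back-of-envelope estimate. A sketch of that arithmetic is below; every figure in it (render-target count, texture pool size, geometry and overhead budgets) is an illustrative assumption, not a measurement of any real game or of the RX 9060 XT itself:

```python
# Rough VRAM budget for a modern game at 1440p.
# All numeric budgets are illustrative assumptions, not measurements.

def framebuffer_bytes(width, height, bytes_per_pixel, target_count):
    # Memory held by render targets: color, depth, G-buffer passes, etc.
    return width * height * bytes_per_pixel * target_count

def to_gib(n_bytes):
    # Convert bytes to GiB.
    return n_bytes / (1024 ** 3)

# Assume 2560x1440 with 8 render targets averaging 8 bytes per pixel
# (HDR color plus a deferred-shading G-buffer).
targets_gib = to_gib(framebuffer_bytes(2560, 1440, 8, 8))

# Assumed streaming budgets for a heavy AAA title:
textures_gib = 5.5  # high-resolution texture pool
geometry_gib = 1.0  # meshes, animation data, ray-tracing structures
other_gib = 0.5     # shaders, driver overhead, staging buffers

total_gib = targets_gib + textures_gib + geometry_gib + other_gib
print(f"Render targets: {targets_gib:.2f} GiB")
print(f"Estimated total: {total_gib:.2f} GiB against an 8 GiB card")
```

Under these assumed budgets the total already crowds past 7 GiB before the OS compositor or any background apps claim their share, which is consistent with the stuttering and texture drop-outs reviewers report on 8GB cards.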
    Same Name, Different Game
    Another big issue is how these cards are named and sold. 
    The RX 9060 XT 16GB and RX 9060 XT 8GB are not clearly labeled as different products. They’re just two versions of the same GPU. 
    But that extra memory makes a huge difference. 
    In some games, the 8GB card performs dramatically worse. And yet, unless you know what to look for, you might walk into a store and buy the 8GB version thinking you’re getting the same performance. 
    You’re not. You’re getting a watered-down version with the same name and a silent asterisk.
    This Isn’t Just AMD’s Problem
    Nvidia started this mess with the 4060 Ti naming confusion. AMD just saw the outrage and decided to walk straight into the same buzzsaw. 
    It’s hard not to feel like both companies are treating consumers like they’re too dumb to notice.
    Spoiler: they noticed.
    And this whole ‘VRAM doesn’t matter’ argument? It’s already been debunked by dozens of reviewers. 
    If you’re spending over $300 on a graphics card in 2025, it needs to last more than a year or two. 8GB cards are already struggling. Buying one now is like buying a smartphone in 2025 with 64GB of storage. Sure, it works. Until it doesn’t.
    Steam Data Doesn’t Help AMD’s Case
    AMD and Nvidia both love to point at the Steam Hardware Survey. They say, ‘See? Most people still play at 1080p.’ And that’s true – for now.

    Source: Nvidia
    But what they leave out is that 1440p gaming is growing fast. More gamers are upgrading their setups because 1440p monitors are getting a lot more affordable. 
    Take the Pixio PXC277 Advanced, for instance – a 27-inch curved 1440p monitor with a 165Hz refresh rate and 1ms response time. A few years ago, a screen like that would’ve cost you double. Now it’s entry-level.
    Gamers are ready to step up their experience. The only thing holding them back is GPU hardware that’s still stuck in 2020. 
    Planned Obsolescence in Disguise
    Here’s the worst part. Companies know full well that 8GB won’t cut it in 2026. 
    But they still sell it, knowing many gamers will only find out when it’s too late – when the stutters kick in, the textures disappear, or the next big title becomes unplayable.
    It’s planned obsolescence disguised as ‘choice.’ And while it’s great to have options at different price points, it should be clear which option is built to last – and which one is built to frustrate. 
    So, Is AMD Actually Screwed? 
    Not right now. In fact, they’re playing the game better than they used to. 
    They’ve learned from past pricing disasters and figured out how to get better launch-day headlines – even if it means faking the MSRP and letting street prices run wild. 
    But this kind of marketing comes at a cost. If AMD keeps making decisions that prioritize short-term wins over long-term trust, they’ll lose the very crowd that once rooted for them. 
    We don’t need two Nvidias. We need AMD to be different – to be better. 
    One Name, Two Very Different Cards
    The RX 9060 XT 16GB might be a good deal. But it’s being overshadowed by the 8GB version’s drama. And the longer AMD keeps playing games with memory and naming, the more it chips away at its hard-earned goodwill. 
    This whole mess could’ve been avoided with one simple move: name the 8GB card something else. Call it the RX 9055. Call it Lite or whatever. Just don’t make it look like the same card when it isn’t. 
    Until then, buyers beware. There’s more going on behind the box art than meets the eye. 

    Anya Zhukova is an in-house tech and crypto writer at Techreport with 10 years of hands-on experience covering cybersecurity, consumer tech, digital privacy, and blockchain. She’s known for turning complex topics into clear, useful advice that regular people can actually understand and use. 
    Her work has been featured in top-tier digital publications including MakeUseOf, Online Tech Tips, Help Desk Geek, Switching to Mac, and Make Tech Easier. Whether she’s writing about the latest privacy tools or reviewing a new laptop, her goal is always the same: help readers feel confident and in control of the tech they use every day.  Anya holds a BA in English Philology and Translation from Tula State Pedagogical University and also studied Mass Media and Journalism at Minnesota State University, Mankato. That mix of language, media, and tech has given her a unique lens to look at how technology shapes our daily lives. 
    Over the years, she’s also taken courses and done research in data privacy, digital security, and ethical writing – skills she uses when tackling sensitive topics like PC hardware, system vulnerabilities, and crypto security.  Anya worked directly with brands like Framework, Insta360, Redmagic, Inmotion, Secretlab, Kodak, and Anker, reviewing their products in real-life scenarios. Her testing process involves real-world use cases – whether it's stress-testing laptops for creative workloads, reviewing the battery performance of mobile gaming phones, or evaluating the long-term ergonomics of furniture designed for hybrid workspaces. 
    In the world of crypto, Anya covers everything from beginner guides to deep dives into hardware wallets, DeFi protocols, and Web3 tools. She helps readers understand how to use multisig wallets, keep their assets safe, and choose the right platforms for their needs.  Her writing often touches on financial freedom and privacy – two things she strongly believes should be in everyone’s hands.
    Outside of writing, Anya contributes to editorial style guides focused on privacy and inclusivity, and she mentors newer tech writers on how to build subject matter expertise and write responsibly.  She sticks to high editorial standards, only recommends products she’s personally tested, and always aims to give readers the full picture.  You can find her on LinkedIn, where she shares more about her work and projects. 
    Key Areas of Expertise: Consumer Tech; Cybersecurity and Digital Privacy; PC/PC Hardware; Blockchain, Crypto Wallets, and DeFi; In-Depth Product Reviews and Buying Guides. Whether she’s reviewing a new wallet or benchmarking a PC build, Anya brings curiosity, care, and a strong sense of responsibility to everything she writes. Her mission? To make the digital world a little easier – and safer – for everyone. 


    Our editorial process

    The Tech Report editorial policy is centered on providing helpful, accurate content that offers real value to our readers. We only work with experienced writers who have specific knowledge in the topics they cover, including latest developments in technology, online privacy, cryptocurrencies, software, and more. Our editorial policy ensures that each topic is researched and curated by our in-house editors. We maintain rigorous journalistic standards, and every article is 100% written by real authors.
    TECHREPORT.COM
    AMD’s RX 9060 XT 8GB Gamble: Why Gamers Are Furious, and They’re Not Wrong
    Key Takeaways AMD’s RX 9060 XT is set to launch on June 5th, 2025 in both 8GB and 16GB versions under the same name, creating confusion and backlash. Reviewers and gamers say 8GB of VRAM isn’t enough for modern gaming, especially at 1440p. AMD’s decision to showcase only the 16GB model in benchmarks raised concerns about transparency. This move mirrors Nvidia’s controversial RTX 4060 Ti rollout, suggesting an industry trend of misleading GPU marketing. It all started with a new GPU announcement. The AMD Radeon RX 9060 XT is set to launch, and on paper, it looks like a solid move. A $349 graphics card with 16GB of VRAM? Not bad. That’s more memory than some RTX 4070 cards. Sounds like AMD might finally be delivering some value again, right?  Well, yes and no.  Because right alongside that 16GB version, AMD is also releasing an 8GB version for $299. Same name, same chip, half the memory. And that’s where the internet lost it.  Déjà Vu: We’ve Seen This Trick Before If this sounds familiar, it’s because Nvidia pulled the same move with the RTX 4060 Ti.  They sold both 8GB and 16GB versions with the same branding, but a $100 price difference. The RTX 4060 Ti 8GB launched in May 2023, and the 16GB variant followed in July.  Source: Nvidia Gamers hated the confusion. Reviewers criticized the 8GB version’s lack of performance, especially in memory-heavy games, and the way Nvidia tried to sweep the difference under the rug.  Performance dipped significantly at 1440p, and stuttering was a problem even in some 1080p titles. The backlash was swift. Tech media slammed Nvidia for deceptive marketing, and buyers were left second-guessing which version they were getting.  We’ve seen this pattern before in Nvidia’s review restrictions around the RTX 5060, where early coverage was shaped by what reviewers were allowed to test – and what they weren’t.  It led to a mess of misinformation, bad value perceptions, and a very clear message: don’t confuse your customers. 
    So naturally, AMD did it too. It’s like watching two billion-dollar companies playing a game of ‘Who Can Confuse the Customer More.’ It’s not just about the money. It’s about trust, and AMD just dumped a bunch of it off a cliff.

    Frank Azor Lights the Fuse on X

    The backlash started when AMD’s Director of Gaming Marketing, Frank Azor, took to X to defend the 8GB card. He said that most gamers don’t need more than 8GB of VRAM and that the cheaper card still serves the mainstream crowd just fine. It’s the same reasoning Nvidia used last year with the RTX 4060 Ti. That didn’t work then, and it isn’t working now.

    Because when Steve from Hardware Unboxed sees a bad take like that, you know a flamethrower video is coming. And oh boy, did it come.

    Hardware Unboxed Fires Back

    The backlash against AMD’s 8GB RX 9060 XT took off after a post from Hardware Unboxed on X called out the company’s defense of limited VRAM. In response to AMD’s claim that most gamers don’t need more than 8GB of memory, Hardware Unboxed accused them of misleading buyers and building weaker products just to hit certain price points.

    The criticism gained traction fast. Tech YouTuber Vex picked up the story and added fuel to the fire by showing side-by-side gameplay comparisons. In multiple games, the 8GB RX 9060 XT showed serious performance issues – stuttering, frame drops, and VRAM bottlenecks – while the 16GB version handled the same titles smoothly.

    And yet, during the GPU’s official reveal, AMD only showed performance data for the 16GB card. There were no benchmarks for the 8GB version – not a single chart. That omission wasn’t lost on anyone. If AMD truly believed the 8GB model held up under modern gaming loads, they would have shown it. The silence speaks volumes.

    Why This Actually Matters

    You might be thinking: ‘So what? Some games still run fine on 8GB. I only play Valorant.’ Sure. But the problem is bigger than that.

    Source: AMD

    Games are getting heavier.
    Even titles like Cyberpunk 2077, released in 2020, can eat up more than 8GB of VRAM. And with GTA 6 (still) on the horizon, do you really think game developers are going to keep optimizing for 8GB cards in 2025? That’s not how game development works. Developers target the most common setups, yes. But hardware also shapes software. If everyone’s stuck with 8GB, games will be designed around that limit. That holds back progress for everyone. It’s like trying to make a movie with a flip phone because some people still own one.

    Same Name, Different Game

    Another big issue is how these cards are named and sold. The RX 9060 XT 16GB and RX 9060 XT 8GB are not clearly labeled as different products. They’re just two versions of the same GPU. But that extra memory makes a huge difference. In some games, the 8GB card performs dramatically worse. And yet, unless you know what to look for, you might walk into a store and buy the 8GB version thinking you’re getting the same performance. You’re not. You’re getting a watered-down version with the same name and a silent asterisk.

    This Isn’t Just AMD’s Problem

    Nvidia started this mess with the 4060 Ti naming confusion. AMD just saw the outrage and decided to walk straight into the same buzzsaw. It’s hard not to feel like both companies are treating consumers like they’re too dumb to notice. Spoiler: they noticed.

    And this whole ‘VRAM doesn’t matter’ argument? It’s already been debunked by dozens of reviewers. If you’re spending over $300 on a graphics card in 2025, it needs to last more than a year or two. 8GB cards are already struggling. Buying one now is like buying a smartphone in 2025 with 64GB of storage. Sure, it works. Until it doesn’t.

    Steam Data Doesn’t Help AMD’s Case

    AMD and Nvidia both love to point at the Steam Hardware Survey. They say, ‘See? Most people still play at 1080p.’ And that’s true – for now.

    Source: Nvidia

    But what they leave out is that 1440p gaming is growing fast.
    More gamers are upgrading their setups because 1440p monitors are getting a lot more affordable. Take the Pixio PXC277 Advanced, for instance – a 27-inch curved 1440p monitor with a 165Hz refresh rate and 1ms response time, all for $219.99. A few years ago, a screen like that would’ve cost you double. Now it’s entry-level. Gamers are ready to step up their experience. The only thing holding them back is GPU hardware that’s still stuck in 2020.

    Planned Obsolescence in Disguise

    Here’s the worst part. Companies know full well that 8GB won’t cut it in 2026. But they still sell it, knowing many gamers will only find out when it’s too late – when the stutters kick in, the textures disappear, or the next big title becomes unplayable. It’s planned obsolescence disguised as ‘choice.’ And while it’s great to have options at different price points, it should be clear which option is built to last – and which one is built to frustrate.

    So, Is AMD Actually Screwed?

    Not right now. In fact, they’re playing the game better than they used to. They’ve learned from past pricing disasters and figured out how to get better launch-day headlines – even if it means faking the MSRP and letting street prices run wild. But this kind of marketing comes at a cost. If AMD keeps making decisions that prioritize short-term wins over long-term trust, they’ll lose the very crowd that once rooted for them. We don’t need two Nvidias. We need AMD to be different – to be better.

    One Name, Two Very Different Cards

    The RX 9060 XT 16GB might be a good deal. But it’s being overshadowed by the 8GB version’s drama. And the longer AMD keeps playing games with memory and naming, the more it chips away at its hard-earned goodwill. This whole mess could’ve been avoided with one simple move: name the 8GB card something else. Call it the RX 9055. Call it Lite or whatever. Just don’t make it look like the same card when it isn’t.

    Until then, buyers beware.
    There’s more going on behind the box art than meets the eye.
  • Which Nintendo console had the biggest launch?

    Nintendo has predicted it will sell 15 million Switch 2s in its current financial year. Analysts think the number is conservative; Nintendo says the price of the Switch 2 is what’s holding back that estimate. But what does that mean, really? Is it a reasonable target? If Nintendo hits it, does that guarantee the Switch 2 will be a massive hit? If a console sells out at launch, what does that tell us? Is there still a chance of a Wii U-style flop?

    It’s impossible to know for sure, but a look at all of Nintendo’s past console launches can provide some clues. I’ve dug deep into past Nintendo sales figures to determine which Nintendo consoles enjoyed the best launches. To get a more reliable picture than that initial, almost inevitable sellout, I’ve defined the launch as the console’s first year (or rather, first four financial quarters) on the market.

    Note that Nintendo only started reporting quarterly sales for its systems in the mid-2000s, and the earliest data is annual at best and hard to come by, so some of these numbers are approximate. Note, too, that older systems had staggered launches across the three major markets (Japan, North America, and Europe), sometimes over several years, slowing down their potential sales.

    Still, there are some surprising results here that put Nintendo’s 15 million forecast for Switch 2 in context. Selling that many units would definitely not be bad news — but it doesn’t indicate a slam-dunk, either.

    1. Game Boy Advance

    First four quarters: approx. 18.1 million

    Lifetime: 81.51 million (5th overall)
    Release: March-June 2001

    2. 3DS

    First four quarters: 15.03 million

    Lifetime: 75.94 million (6th)
    Release: February-March 2011

    There’s a clear pattern to Nintendo’s two fastest sellers; they were successors to massive hits (the Game Boy and DS) in the handheld market, where Nintendo enjoyed total dominance. Nintendo was so bullish about Game Boy Advance, following the decade-long reign of the Game Boy format, that it forecast an astonishing 24 million sales in its first year, while 3DS followed Nintendo’s biggest seller ever. Both sold well, but neither quite lived up to their forebears.

    3. Switch

    First four quarters: 14.86 million

    Lifetime: 152.12 million (2nd, for now)
    Release: March 2017

    4. Wii

    First four quarters: 13.17 million

    Lifetime: 101.63 million (4th)
    Release: November-December 2006

    The Switch and the Wii are the only Nintendo consoles with sales over 10 million in the first year and over 100 million in their lifetimes. Quarter for quarter, they both sold incredibly consistently over time. This is surely what Nintendo would like all its hardware launches to look like, and what it’s hoping for with the Switch 2.

    5. DS

    First four quarters: 8.83 million

    Lifetime: 154.02 million (1st, for now)
    Release: November 2004-March 2005

    It’s a surprise DS didn’t launch more strongly, considering the runaway early sales of its handheld predecessor, Game Boy Advance. But its launch games weren’t the best and it took a while for the console’s strange design to find its eventual massive casual audience. In its fifth quarter — holiday 2005 — it suddenly took off.

    6. GameCube

    First four quarters: approx. 6.7 million

    Lifetime: 21.74 million (10th)
    Release: September 2001-May 2002

    7. Nintendo 64

    First four quarters: 5.80 million

    Lifetime: 32.93 million (9th)
    Release: June 1996-March 1997

    The two Nintendo home consoles made during PlayStation’s ascendency enjoyed reasonably strong launches but petered out due to a lack of software support. GameCube sold 30% of its lifetime total in its first year on sale — an unfortunate achievement that beats even Wii U’s dismal ratio.
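    The GameCube and Wii U ratio claims above can be sanity-checked directly from the figures quoted in this list. A minimal sketch (using only the launch and lifetime numbers given in this article, so the approximate entries carry their rounding error):

    ```python
    # First-four-quarters vs. lifetime sales, in millions of units,
    # as quoted in this article (GameCube and GBA launch figures are approximate).
    consoles = {
        "Game Boy Advance": (18.1, 81.51),
        "3DS": (15.03, 75.94),
        "Switch": (14.86, 152.12),
        "Wii": (13.17, 101.63),
        "DS": (8.83, 154.02),
        "GameCube": (6.7, 21.74),
        "Nintendo 64": (5.80, 32.93),
        "Game Boy": (3.93, 118.69),
        "Wii U": (3.91, 13.56),
    }

    # Sort by the share of lifetime sales that happened in year one.
    # GameCube comes out on top at ~31%, just ahead of Wii U at ~29%,
    # matching the "beats even Wii U's dismal ratio" claim.
    for name, (launch, lifetime) in sorted(
        consoles.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True
    ):
        print(f"{name}: {launch / lifetime:.0%} of lifetime sales in year one")
    ```

    The same table confirms the Game Boy figure later in the list: 3.93 million of 118.69 million is roughly 3%.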

    8. Game Boy

    First four quarters: 3.93 million

    Lifetime: 118.69 million (3rd)
    Release: April 1989-September 1990

    The legend of Game Boy is that it was an instant smash, thanks to the high-stakes acquisition of Tetris. But while its launch was healthy for the time, it didn’t begin to indicate what Nintendo’s defining handheld would achieve — sales in the first year (before it reached Europe) were just 3% of what it would go on to sell across Game Boy and Game Boy Color.

    9. Wii U

    First four quarters: 3.91 million

    Lifetime: 13.56 million (11th)
    Release: November-December 2012

    Nintendo’s most recent (but not its worst) flop actually started quite strongly, shifting over 3 million units in its first quarter — but then suffered a catastrophic drop-off, selling less than 900,000 worldwide during the rest of its first year on sale. Looking past the initial holiday sell-out, the signs of the disaster to come were clear.

    10. Super Famicom/SNES

    First four quarters: approx. 2.9 million (mostly Japan)

    Lifetime: 49.1 million (8th)
    Release: November 1990-June 1992

    11. Famicom/NES

    Launch sales: 2.5 million by end of 1984 (Japan only)

    Lifetime: 61.91 million (7th)
    Release: July 1983 to 1987 and later

    Both Nintendo’s early home consoles launched strongly in Japan but took a long time to take off in the West. The NES didn’t fully launch in the U.S. and Europe until 1986, amid caution after the 1980s video game crash, while the SNES was beaten to market in the West by the Sega Genesis and suffered as a result. 

    12. Virtual Boy

    Launch and lifetime sales: 770,000

    Release: July-August 1995

    You can’t get a worse launch than being discontinued within a year of going on sale!



    What does this mean for Switch 2’s launch?

    If Nintendo meets or exceeds its target of 15 million Switch 2s sold in its first financial year, it will rank among the top Nintendo console launches ever. A Wii U-style flop looks very unlikely, unless sales drop off sharply after its first few months.

    But the most telling comparisons here are Nintendo’s top two launches: Game Boy Advance and 3DS. Like Switch 2, they were both conservative, easy-to-understand sequels to huge sellers in a market sector Nintendo had total control of. And while both reached respectable lifetime totals, they got nowhere near the lifetime sales of their more innovative predecessors.

    Could this be Switch 2’s fate? Judging by Nintendo’s launch history, it’s possible — maybe even likely.
    Source: WWW.POLYGON.COM