• Startup Uses NVIDIA RTX-Powered Generative AI to Make Coolers, Cooler

    Mark Theriault founded the startup FITY envisioning a line of clever cooling products: cold drink holders that come with freezable pucks to keep beverages cold for longer without the mess of ice. The entrepreneur started with 3D prints of products in his basement, building one unit at a time, before eventually scaling to mass production.
    Founding a consumer product company from scratch was a tall order for a single person. Going from preliminary sketches to production-ready designs was a major challenge. To bring his creative vision to life, Theriault relied on AI and his NVIDIA GeForce RTX-equipped system. For him, AI isn’t just a tool — it’s an entire pipeline to help him accomplish his goals. Read more about his workflow below.
    Plus, GeForce RTX 5050 laptops start arriving today at retailers worldwide, from $999. GeForce RTX 5050 Laptop GPUs feature 2,560 NVIDIA Blackwell CUDA cores, fifth-generation AI Tensor Cores, fourth-generation RT Cores, a ninth-generation NVENC encoder and a sixth-generation NVDEC decoder.
    In addition, NVIDIA’s Plug and Play: Project G-Assist Plug-In Hackathon — running virtually through Wednesday, July 16 — invites developers to explore AI and build custom G-Assist plug-ins for a chance to win prizes. Save the date for the G-Assist Plug-In webinar on Wednesday, July 9, from 10-11 a.m. PT, to learn more about Project G-Assist capabilities and fundamentals, and to participate in a live Q&A session.
    From Concept to Completion
    To create his standout products, Theriault tinkers with potential FITY Flex cooler designs with traditional methods, from sketch to computer-aided design to rapid prototyping, until he finds the right vision. A unique aspect of the FITY Flex design is that it can be customized with fun, popular shoe charms.
    For packaging design inspiration, Theriault uses his preferred text-to-image generative AI model for prototyping, Stable Diffusion XL — which runs 60% faster with the NVIDIA TensorRT software development kit — using the modular, node-based interface ComfyUI.
    ComfyUI gives users granular control over every step of the generation process — prompting, sampling, model loading, image conditioning and post-processing. It’s ideal for advanced users like Theriault who want to customize how images are generated.
    Theriault’s uses of AI result in a complete computer graphics-based ad campaign. Image courtesy of FITY.
    NVIDIA GeForce RTX GPUs based on the NVIDIA Blackwell architecture include fifth-generation Tensor Cores designed to accelerate AI and deep learning workloads. These GPUs work with CUDA optimizations in PyTorch to seamlessly accelerate ComfyUI, reducing generation time on FLUX.1-dev, an image generation model from Black Forest Labs, from two minutes per image on the Mac M3 Ultra to about four seconds on the GeForce RTX 5090 desktop GPU.
    ComfyUI can also add ControlNets — AI models that help control image generation — that Theriault uses for tasks like guiding human poses, setting compositions via depth mapping and converting scribbles to images.
    Theriault even creates his own fine-tuned models to keep his style consistent. He used low-rank adaptation (LoRA) models — small, efficient adapters inserted into specific layers of the network — enabling hyper-customized generation with minimal compute cost.
    LoRA models allow Theriault to ideate on visuals quickly. Image courtesy of FITY.
    “Over the last few months, I’ve been shifting from AI-assisted computer graphics renders to fully AI-generated product imagery using a custom Flux LoRA I trained in house. My RTX 4080 SUPER GPU has been essential for getting the performance I need to train and iterate quickly.” – Mark Theriault, founder of FITY 
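    The low-rank adaptation idea can be sketched generically. The NumPy snippet below is an illustrative toy, not Theriault's actual training setup or any NVIDIA tooling: it shows how a rank-4 adapter adds only a small number of trainable parameters on top of a frozen weight matrix, which is why LoRA fine-tuning is cheap.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, rank = 64, 64, 4  # rank << d_in keeps the adapter tiny

# Frozen base weight, standing in for a pretrained network layer.
W = rng.standard_normal((d_out, d_in))

# Trainable low-rank factors: only A and B are updated during fine-tuning.
A = rng.standard_normal((rank, d_in)) * 0.01
B = np.zeros((d_out, rank))  # zero init, so the adapter starts as a no-op

def lora_forward(x, scale=1.0):
    """Base layer output plus the low-rank correction B @ A @ x."""
    return W @ x + scale * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = lora_forward(x)

# With B = 0, the adapted layer matches the frozen base exactly.
assert np.allclose(y, W @ x)

# Parameter cost: full fine-tune vs. the LoRA adapter.
full_params = W.size           # 64 * 64 = 4096
lora_params = A.size + B.size  # 4*64 + 64*4 = 512, an 8x reduction here
print(full_params, lora_params)
```

The same structure is why a custom LoRA can be trained on a single consumer GPU: only the two small factor matrices need gradients and optimizer state.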

    Theriault also taps into generative AI to create marketing assets like FITY Flex product packaging. He uses FLUX.1, which excels at generating legible text within images, addressing a common challenge in text-to-image models.
    Though FLUX.1 models can typically consume over 23GB of VRAM, NVIDIA has collaborated with Black Forest Labs to help reduce the size of these models using quantization — a technique that reduces model size while maintaining quality. The models were then accelerated with TensorRT, which provides an up to 2x speedup over PyTorch.
    To simplify using these models in ComfyUI, NVIDIA created the FLUX.1 NIM microservice, a containerized version of FLUX.1 that can be loaded in ComfyUI and enables FP4 quantization and TensorRT support. Combined, the models come down to just over 11GB of VRAM, and performance improves by 2.5x.
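    To make the quantization idea concrete, here is a toy int8 scheme in NumPy. It is an assumption-laden illustration, not the FP4/TensorRT pipeline described above: each 4-byte float weight is mapped to 1 byte plus a shared scale, cutting storage 4x while keeping values close to the originals.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a tensor of FP32 model weights.
weights = rng.standard_normal(1024).astype(np.float32)

def quantize_int8(w):
    """Symmetric linear quantization: floats -> int8 plus one scale factor."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# 4x smaller storage (1 byte vs. 4 bytes per weight)...
ratio = weights.nbytes / q.nbytes
# ...at the cost of a small, bounded rounding error (at most scale / 2).
max_err = np.abs(weights - restored).max()
print(ratio, max_err)
```

Real FP4 quantization uses a hardware-specific floating-point format and calibration to preserve quality, but the storage-versus-error trade-off is the same in spirit.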
    Theriault uses Blender's Cycles renderer to render out final files. For 3D workflows, NVIDIA offers the AI Blueprint for 3D-guided generative AI to ease the positioning and composition of 3D images, so anyone interested in this method can quickly get started.
    Photorealistic renders. Image courtesy of FITY.
    Finally, Theriault uses large language models to generate marketing copy — tailored for search engine optimization, tone and storytelling — as well as to complete his patent and provisional applications, work that usually costs thousands of dollars in legal fees and considerable time.
    Generative AI helps Theriault create promotional materials like the above. Image courtesy of FITY.
    “As a one-man band with a ton of content to generate, having on-the-fly generation capabilities for my product designs really helps speed things up.” – Mark Theriault, founder of FITY

    Every texture, every word, every photo, every accessory was a micro-decision, Theriault said. AI helped him survive the “death by a thousand cuts” that can stall solo startup founders, he added.
    Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations. 
    Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.
    Follow NVIDIA Workstation on LinkedIn and X. 
    See notice regarding software product information.
    BLOGS.NVIDIA.COM
  • It’s infuriating to see the Blender Developers Meeting Notes from June 23, 2025, filled with the same old issues and empty promises! Why are we still talking about moving the Git SSH domain to git.blender.org when there are far more pressing concerns? The upcoming Blender 5.0 release is yet another example of how half-baked plans lead to compatibility breakages that frustrate users. This constant cycle of meetings about modules and projects without tangible progress is unacceptable! Users deserve better than this lackadaisical approach! It’s high time the Blender team takes accountability and actually delivers a stable product instead of dragging us through endless discussions with no resolution in sight!

    Blender Developers Meeting Notes: 23 June 2025
    Notes for weekly communication of ongoing projects and modules. Announcements Blender Projects is moving its Git SSH domain to git.blender.org Reminder: Upcoming Blender 5.0 Release & Compatibility Breakages - #6 by mont29 Modules & Projects
  • Time Complexity of Sorting Algorithms in Python, Java, and C++

    Posted on June 13, 2025, by Tech World Times


    Sorting helps organize data in a specific order. It is used in search, reports, and efficient storage. Different sorting algorithms offer different performance. In this article, we will explain the Time Complexity of Sorting Algorithms in simple words. We will cover Python, Java, and C++ examples.
    1. What Is Time Complexity?
    Time complexity tells how fast an algorithm runs. It measures the number of steps as input grows. It is written in Big-O notation. For example, O(n²) means steps grow with the square of inputs.
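    To see that growth concretely, here is a small illustrative Python snippet (not from the article) that counts the steps a quadratic double loop performs. Multiplying the input size by 10 multiplies the step count by 100, which is exactly what O(n²) predicts.

```python
def quadratic_steps(n):
    """Count iterations of a nested loop over n items, an O(n^2) pattern."""
    steps = 0
    for i in range(n):
        for j in range(n):
            steps += 1
    return steps

for n in (10, 100):
    print(n, quadratic_steps(n))  # 10 -> 100 steps, 100 -> 10000 steps
```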
    2. Types of Time Complexity
    Here are common types:

    O(1): Constant time
    O(n): Linear time
    O(n log n): Log-linear time
    O(n²): Quadratic time

    We will now apply these to sorting.
    3. Bubble Sort
    Bubble Sort compares two numbers and swaps them if needed. It repeats until the list is sorted.
    Time Complexity:

    Best Case: O(n) (if already sorted)
    Average Case: O(n²)
    Worst Case: O(n²)

    Python Example:

    def bubble_sort(arr):
        n = len(arr)
        for i in range(n):
            for j in range(n - i - 1):
                if arr[j] > arr[j+1]:
                    arr[j], arr[j+1] = arr[j+1], arr[j]

    Java Example:

    void bubbleSort(int arr[]) {
        int n = arr.length;
        for (int i = 0; i < n - 1; i++)
            for (int j = 0; j < n - i - 1; j++)
                if (arr[j] > arr[j+1]) {
                    int temp = arr[j];
                    arr[j] = arr[j+1];
                    arr[j+1] = temp;
                }
    }

    C++ Example:

    void bubbleSort(int arr[], int n) {
        for (int i = 0; i < n - 1; i++)
            for (int j = 0; j < n - i - 1; j++)
                if (arr[j] > arr[j+1])
                    swap(arr[j], arr[j+1]);
    }
    4. Selection Sort
    This sort picks the smallest number and places it at the front.
    Time Complexity:

    Best Case: O(n²)
    Average Case: O(n²)
    Worst Case: O(n²)

    Python Example:

    def selection_sort(arr):
        for i in range(len(arr)):
            min_idx = i
            for j in range(i + 1, len(arr)):
                if arr[j] < arr[min_idx]:
                    min_idx = j
            arr[i], arr[min_idx] = arr[min_idx], arr[i]

    5. Insertion Sort
    This algorithm builds the final list one item at a time.
    Time Complexity:

    Best Case: O(n)
    Average Case: O(n²)
    Worst Case: O(n²)

    Java Example:

    void insertionSort(int arr[]) {
        for (int i = 1; i < arr.length; i++) {
            int key = arr[i];
            int j = i - 1;
            while (j >= 0 && arr[j] > key) {
                arr[j + 1] = arr[j];
                j = j - 1;
            }
            arr[j + 1] = key;
        }
    }

    6. Merge Sort
    Merge Sort splits the array into halves and merges them back in order.
    Time Complexity of Sorting Algorithms like Merge Sort is usually better.

    Best Case: O(n log n)
    Average Case: O(n log n)
    Worst Case: O(n log n)

    Python Example:

    def merge_sort(arr):
        if len(arr) > 1:
            mid = len(arr) // 2
            left = arr[:mid]
            right = arr[mid:]
            merge_sort(left)
            merge_sort(right)
            i = j = k = 0
            while i < len(left) and j < len(right):
                if left[i] < right[j]:
                    arr[k] = left[i]
                    i += 1
                else:
                    arr[k] = right[j]
                    j += 1
                k += 1
            arr[k:] = left[i:] + right[j:]

    7. Quick Sort
    Quick Sort picks a pivot and places smaller numbers before it.
    Time Complexity:

    Best Case: O(n log n)
    Average Case: O(n log n)
    Worst Case: O(n²)

    C++ Example:

    int partition(int arr[], int low, int high) {
        int pivot = arr[high];
        int i = low - 1;
        for (int j = low; j < high; j++) {
            if (arr[j] < pivot) {
                i++;
                swap(arr[i], arr[j]);
            }
        }
        swap(arr[i + 1], arr[high]);
        return i + 1;
    }

    void quickSort(int arr[], int low, int high) {
        if (low < high) {
            int pi = partition(arr, low, high);
            quickSort(arr, low, pi - 1);
            quickSort(arr, pi + 1, high);
        }
    }

    8. Built-in Sort Methods
    Languages have built-in sort functions. These are well-optimized.

    Python: sortedor list.sortuses TimSort

    Time Complexity: OJava: Arrays.sortuses Dual-Pivot QuickSort

    Time Complexity: OC++: std::sortuses IntroSort

    Time Complexity: OThese are better for most real-world tasks.
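    As a quick illustration of the Python built-ins mentioned above: sorted() returns a new list and leaves the input untouched, list.sort() works in place, and TimSort's stability means equal keys keep their original order.

```python
data = [5, 2, 9, 1, 5, 6]

# sorted() returns a new list and leaves the input untouched.
result = sorted(data)
print(result)  # [1, 2, 5, 5, 6, 9]
print(data)    # original order preserved

# list.sort() sorts in place and returns None.
data.sort(reverse=True)
print(data)    # [9, 6, 5, 5, 2, 1]

# Both accept a key function; stability keeps "pear" before "kiwi" (a tie on length).
words = ["pear", "fig", "apple", "kiwi"]
print(sorted(words, key=len))  # ['fig', 'pear', 'kiwi', 'apple']
```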
    9. Time Complexity Comparison Table
    Algorithm        Best        Average     Worst       Stable
    Bubble Sort      O(n)        O(n²)       O(n²)       Yes
    Selection Sort   O(n²)       O(n²)       O(n²)       No
    Insertion Sort   O(n)        O(n²)       O(n²)       Yes
    Merge Sort       O(n log n)  O(n log n)  O(n log n)  Yes
    Quick Sort       O(n log n)  O(n log n)  O(n²)       No
    TimSort          O(n)        O(n log n)  O(n log n)  Yes
    IntroSort        O(n log n)  O(n log n)  O(n log n)  No
    10. How to Choose the Right Algorithm?

    Use Merge Sort for large stable data.
    Use Quick Sort for faster average speed.
    Use Insertion Sort for small or nearly sorted lists.
    Use built-in sort functions unless you need control.

    Conclusion
    The Time Complexity of Sorting Algorithms helps us pick the right tool. Bubble, Selection, and Insertion Sort are simple but slow. Merge and Quick Sort are faster and used often. Built-in functions are highly optimized. Python, Java, and C++ each have their strengths.
    Understand your problem and input size. Then pick the sorting method. This ensures better speed and performance in your code.
    Tech World Times (TWT) is a global collective focusing on the latest tech news and trends in blockchain, Fintech, Development & Testing, AI and Startups. If you are looking for a guest post, contact techworldtimes@gmail.com.
    TECHWORLDTIMES.COM
    Time Complexity of Sorting Algorithms in Python, Java, and C++
    Posted on : June 13, 2025 By Tech World Times Development and Testing  Rate this post Sorting helps organize data in a specific order. It is used in search, reports, and efficient storage. Different sorting algorithms offer different performance. In this article, we will explain the Time Complexity of Sorting Algorithms in simple words. We will cover Python, Java, and C++ examples. 1. What Is Time Complexity? Time complexity tells how fast an algorithm runs. It measures the number of steps as input grows. It is written in Big-O notation. For example, O(n²) means steps grow with the square of inputs. 2. Types of Time Complexity Here are common types: O(1): Constant time O(n): Linear time O(n log n): Log-linear time O(n²): Quadratic time We will now apply these to sorting. 3. Bubble Sort Bubble Sort compares two numbers and swaps them if needed. It repeats until the list is sorted. Time Complexity: Best Case: O(n) (if already sorted) Average Case: O(n²) Worst Case: O(n²) Python Example: pythonCopyEditdef bubble_sort(arr): n = len(arr) for i in range(n): for j in range(n - i - 1): if arr[j] > arr[j+1]: arr[j], arr[j+1] = arr[j+1], arr[j] Java Example: javaCopyEditvoid bubbleSort(int arr[]) { int n = arr.length; for (int i = 0; i < n-1; i++) for (int j = 0; j < n-i-1; j++) if (arr[j] > arr[j+1]) { int temp = arr[j]; arr[j] = arr[j+1]; arr[j+1] = temp; } } C++ Example: cppCopyEditvoid bubbleSort(int arr[], int n) { for (int i = 0; i < n-1; i++) for (int j = 0; j < n-i-1; j++) if (arr[j] > arr[j+1]) swap(arr[j], arr[j+1]); } 4. Selection Sort This sort picks the smallest number and places it at the front. Time Complexity: Best Case: O(n²) Average Case: O(n²) Worst Case: O(n²) Python Example: pythonCopyEditdef selection_sort(arr): for i in range(len(arr)): min_idx = i for j in range(i+1, len(arr)): if arr[j] < arr[min_idx]: min_idx = j arr[i], arr[min_idx] = arr[min_idx], arr[i] 5. Insertion Sort This algorithm builds the final list one item at a time. 
Time Complexity: Best Case: O(n) Average Case: O(n²) Worst Case: O(n²) Java Example: javaCopyEditvoid insertionSort(int arr[]) { for (int i = 1; i < arr.length; i++) { int key = arr[i]; int j = i - 1; while (j >= 0 && arr[j] > key) { arr[j + 1] = arr[j]; j = j - 1; } arr[j + 1] = key; } } 6. Merge Sort Merge Sort splits the array into halves and merges them back in order. Time Complexity of Sorting Algorithms like Merge Sort is usually better. Best Case: O(n log n) Average Case: O(n log n) Worst Case: O(n log n) Python Example: pythonCopyEditdef merge_sort(arr): if len(arr) > 1: mid = len(arr) // 2 left = arr[:mid] right = arr[mid:] merge_sort(left) merge_sort(right) i = j = k = 0 while i < len(left) and j < len(right): if left[i] < right[j]: arr[k] = left[i] i += 1 else: arr[k] = right[j] j += 1 k += 1 arr[k:] = left[i:] + right[j:] 7. Quick Sort Quick Sort picks a pivot and places smaller numbers before it. Time Complexity: Best Case: O(n log n) Average Case: O(n log n) Worst Case: O(n²) C++ Example: cppCopyEditint partition(int arr[], int low, int high) { int pivot = arr[high]; int i = low - 1; for (int j = low; j < high; j++) { if (arr[j] < pivot) { i++; swap(arr[i], arr[j]); } } swap(arr[i+1], arr[high]); return i + 1; } void quickSort(int arr[], int low, int high) { if (low < high) { int pi = partition(arr, low, high); quickSort(arr, low, pi - 1); quickSort(arr, pi + 1, high); } } 8. Built-in Sort Methods Languages have built-in sort functions. These are well-optimized. Python: sorted() or list.sort() uses TimSort Time Complexity: O(n log n) Java: Arrays.sort() uses Dual-Pivot QuickSort Time Complexity: O(n log n) C++: std::sort() uses IntroSort Time Complexity: O(n log n) These are better for most real-world tasks. 9. 
    9. Time Complexity Comparison Table

    Algorithm         | Best       | Average    | Worst      | Stable
    ------------------|------------|------------|------------|-------
    Bubble Sort       | O(n)       | O(n²)      | O(n²)      | Yes
    Selection Sort    | O(n²)      | O(n²)      | O(n²)      | No
    Insertion Sort    | O(n)       | O(n²)      | O(n²)      | Yes
    Merge Sort        | O(n log n) | O(n log n) | O(n log n) | Yes
    Quick Sort        | O(n log n) | O(n log n) | O(n²)      | No
    Timsort (Python)  | O(n)       | O(n log n) | O(n log n) | Yes
    Introsort (C++)   | O(n log n) | O(n log n) | O(n log n) | No

    10. How to Choose the Right Algorithm?

    Use Merge Sort for large data that needs a stable sort.
    Use Quick Sort for faster average speed.
    Use Insertion Sort for small or nearly sorted lists.
    Use built-in sort functions unless you need fine-grained control.

    Conclusion

    The time complexity of sorting algorithms helps us pick the right tool. Bubble, Selection, and Insertion Sort are simple but slow. Merge and Quick Sort are faster and more widely used. Built-in functions are highly optimized, and Python, Java, and C++ each have their strengths. Understand your problem and input size, then pick the sorting method. This ensures better speed and performance in your code.

    Tech World Times (TWT) is a global collective focusing on the latest tech news and trends in blockchain, fintech, development and testing, AI, and startups. For guest posts, contact techworldtimes@gmail.com.
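    The practical gap between a quadratic sort and a built-in O(n log n) sort can be seen with a small, illustrative benchmark (exact timings vary by machine; the gap grows quickly with input size):

        import random
        import timeit

        def bubble_sort(arr):
            n = len(arr)
            for i in range(n):
                for j in range(n - i - 1):
                    if arr[j] > arr[j + 1]:
                        arr[j], arr[j + 1] = arr[j + 1], arr[j]

        data = [random.randint(0, 10_000) for _ in range(2_000)]

        # Both produce the same sorted result...
        expected = sorted(data)
        trial = list(data)
        bubble_sort(trial)
        assert trial == expected

        # ...but the built-in Timsort is dramatically faster on this input.
        t_bubble = timeit.timeit(lambda: bubble_sort(list(data)), number=1)
        t_builtin = timeit.timeit(lambda: sorted(data), number=1)
        print(f"bubble: {t_bubble:.4f}s, built-in: {t_builtin:.4f}s")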
  • Air-Conditioning Can Help the Power Grid instead of Overloading It

    June 13, 2025 | 6 min read

    Air-Conditioning Can Surprisingly Help the Power Grid during Extreme Heat

    Switching on air-conditioning during extreme heat doesn’t have to make us feel guilty—it can actually boost power grid reliability and help bring more renewable energy online.

    By Johanna Mathieu & The Conversation US
    Image: depotpro/Getty Images

    The following essay is reprinted with permission from The Conversation, an online publication covering the latest research.

    As summer arrives, people are turning on air conditioners in most of the U.S. But if you’re like me, you always feel a little guilty about that. Past generations managed without air conditioning – do I really need it? And how bad is it to use all this electricity for cooling in a warming world?

    If I leave my air conditioner off, I get too hot. But if everyone turns on their air conditioner at the same time, electricity demand spikes, which can force power grid operators to activate some of the most expensive, and dirtiest, power plants. Sometimes those spikes can ask too much of the grid and lead to brownouts or blackouts.

    Research I recently published with a team of scholars makes me feel a little better, though. We have found that it is possible to coordinate the operation of large numbers of home air-conditioning units, balancing supply and demand on the power grid – and without making people endure high temperatures inside their homes.

    Studies along these lines, using remote control of air conditioners to support the grid, have for many years explored theoretical possibilities like this. However, few approaches have been demonstrated in practice, and never for such a high-value application at this scale. The system we developed not only demonstrated the ability to balance the grid on timescales of seconds, but also proved it was possible to do so without affecting residents’ comfort.

    The benefits include increasing the reliability of the power grid, which makes it easier for the grid to accept more renewable energy. Our goal is to turn air conditioners from a challenge for the power grid into an asset, supporting a shift away from fossil fuels toward cleaner energy.

    Adjustable equipment

    My research focuses on batteries, solar panels and electric equipment – such as electric vehicles, water heaters, air conditioners and heat pumps – that can adjust itself to consume different amounts of energy at different times.

    Originally, the U.S. electric grid was built to transport electricity from large power plants to customers’ homes and businesses. And originally, power plants were large, centralized operations that burned coal or natural gas, or harvested energy from nuclear reactions. These plants were typically always available and could adjust how much power they generated in response to customer demand, so the grid would be balanced between power coming in from producers and being used by consumers.

    But the grid has changed. There are more renewable energy sources, from which power isn’t always available – like solar panels at night or wind turbines on calm days. And there are the devices and equipment I study. These newer options, called “distributed energy resources,” generate or store energy near where consumers need it – or adjust how much energy they’re using in real time.

    One aspect of the grid hasn’t changed, though: There’s not much storage built into the system. So every time you turn on a light, for a moment there’s not enough electricity to supply everything that wants it right then: The grid needs a power producer to generate a little more power. And when you turn off a light, there’s a little too much: A power producer needs to ramp down.

    The way power plants know what real-time power adjustments are needed is by closely monitoring the grid frequency. The goal is to provide electricity at a constant frequency – 60 hertz – at all times. If more power is needed than is being produced, the frequency drops and a power plant boosts output. If there’s too much power being produced, the frequency rises and a power plant slows production a little. These actions, a process called “frequency regulation,” happen in a matter of seconds to keep the grid balanced.

    This output flexibility, primarily from power plants, is key to keeping the lights on for everyone.

    Finding new options

    I’m interested in how distributed energy resources can improve flexibility in the grid. They can release more energy, or consume less, to respond to the changing supply or demand, and help balance the grid, ensuring the frequency remains near 60 hertz.

    Some people fear that doing so might be invasive, giving someone outside your home the ability to control your battery or air conditioner. Therefore, we wanted to see if we could help balance the grid with frequency regulation using home air-conditioning units rather than power plants – without affecting how residents use their appliances or how comfortable they are in their homes.

    From 2019 to 2023, my group at the University of Michigan tried this approach, in collaboration with researchers at Pecan Street Inc., Los Alamos National Laboratory and the University of California, Berkeley, with funding from the U.S. Department of Energy Advanced Research Projects Agency-Energy.

    We recruited 100 homeowners in Austin, Texas, to do a real-world test of our system. All the homes had whole-house forced-air cooling systems, which we connected to custom control boards and sensors the owners allowed us to install in their homes. This equipment let us send instructions to the air-conditioning units based on the frequency of the grid.

    Before I explain how the system worked, I first need to explain how thermostats work. When people set thermostats, they pick a temperature, and the thermostat switches the air-conditioning compressor on and off to maintain the air temperature within a small range around that set point. If the temperature is set at 68 degrees, the thermostat turns the AC on when the temperature is, say, 70, and turns it off when it’s cooled down to, say, 66.

    Every few seconds, our system slightly changed the timing of air-conditioning compressor switching for some of the 100 air conditioners, causing the units’ aggregate power consumption to change. In this way, our small group of home air conditioners reacted to grid changes the way a power plant would – using more or less energy to balance the grid and keep the frequency near 60 hertz. Moreover, our system was designed to keep home temperatures within the same small temperature range around the set point.

    Testing the approach

    We ran our system in four tests, each lasting one hour. We found two encouraging results.

    First, the air conditioners were able to provide frequency regulation at least as accurately as a traditional power plant. Therefore, we showed that air conditioners could play a significant role in increasing grid flexibility. But perhaps more importantly – at least in terms of encouraging people to participate in these types of systems – we found that we were able to do so without affecting people’s comfort in their homes.

    We found that home temperatures did not deviate more than 1.6 degrees Fahrenheit from their set point. Homeowners were allowed to override the controls if they got uncomfortable, but most didn’t. For most tests, we received zero override requests. In the worst case, we received override requests from two of the 100 homes in our test.

    In practice, this sort of technology could be added to commercially available internet-connected thermostats. In exchange for credits on their energy bills, users could choose to join a service run by the thermostat company, their utility provider or some other third party. Then people could turn on the air conditioning in the summer heat without that pang of guilt, knowing they were helping to make the grid more reliable and more capable of accommodating renewable energy sources – without sacrificing their own comfort in the process.

    This article was originally published on The Conversation. Read the original article.
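    The thermostat deadband behavior the author describes can be sketched as a toy simulation. This is not the researchers' control system, and all numbers are illustrative; it only shows how a compressor switching on above an upper bound and off below a lower bound keeps the indoor temperature oscillating within a small range around the set point:

        # Toy hysteresis thermostat: compressor turns on above setpoint+deadband,
        # off below setpoint-deadband. Heat gain/loss rates are made up.
        def simulate(setpoint=68.0, deadband=2.0, outdoor_gain=0.3,
                     cooling_rate=0.8, steps=200):
            temp = setpoint
            ac_on = False
            history = []
            for _ in range(steps):
                if temp >= setpoint + deadband:
                    ac_on = True           # too warm: compressor on
                elif temp <= setpoint - deadband:
                    ac_on = False          # cool enough: compressor off
                temp += outdoor_gain       # heat leaking in from outside
                if ac_on:
                    temp -= cooling_rate   # compressor removing heat
                history.append(temp)
            return history

        temps = simulate()
        print(f"indoor temperature stayed between {min(temps):.1f} and {max(temps):.1f}")

    A coordinating service could nudge the on/off switching times of many such units (within the same deadband) to shape their aggregate power draw, which is the core idea of the experiment.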
    WWW.SCIENTIFICAMERICAN.COM
  • NVIDIA TensorRT Boosts Stable Diffusion 3.5 Performance on NVIDIA GeForce RTX and RTX PRO GPUs

    Generative AI has reshaped how people create, imagine and interact with digital content.
    As AI models continue to grow in capability and complexity, they require more VRAM, or video random access memory. The base Stable Diffusion 3.5 Large model, for example, uses over 18GB of VRAM — limiting the number of systems that can run it well.
    By applying quantization to the model, noncritical layers can be removed or run with lower precision. NVIDIA GeForce RTX 40 Series and the Ada Lovelace generation of NVIDIA RTX PRO GPUs support FP8 quantization to help run these quantized models, and the latest-generation NVIDIA Blackwell GPUs also add support for FP4.
    NVIDIA collaborated with Stability AI to quantize its latest model, Stable Diffusion 3.5 Large, to FP8 — reducing VRAM consumption by 40%. Further optimizations to SD3.5 Large and Medium with the NVIDIA TensorRT software development kit double performance.
    In addition, TensorRT has been reimagined for RTX AI PCs, combining its industry-leading performance with just-in-time, on-device engine building and an 8x smaller package size for seamless AI deployment to more than 100 million RTX AI PCs. TensorRT for RTX is now available as a standalone SDK for developers.
    RTX-Accelerated AI
    NVIDIA and Stability AI are boosting the performance and reducing the VRAM requirements of Stable Diffusion 3.5, one of the world’s most popular AI image models. With NVIDIA TensorRT acceleration and quantization, users can now generate and edit images faster and more efficiently on NVIDIA RTX GPUs.
    Stable Diffusion 3.5 quantized to FP8 generates images in half the time with similar quality as FP16. Prompt: A serene mountain lake at sunrise, crystal clear water reflecting snow-capped peaks, lush pine trees along the shore, soft morning mist, photorealistic, vibrant colors, high resolution.
    To address the VRAM limitations of SD3.5 Large, the model was quantized with TensorRT to FP8, reducing the VRAM requirement by 40% to 11GB. This means five GeForce RTX 50 Series GPUs can run the model from memory instead of just one.
    SD3.5 Large and Medium models were also optimized with TensorRT, an AI backend for taking full advantage of Tensor Cores. TensorRT optimizes a model’s weights and graph — the instructions on how to run a model — specifically for RTX GPUs.
    Combined, FP8 TensorRT delivers a 2.3x performance boost on SD3.5 Large compared with running the original models in BF16 PyTorch, while using 40% less memory. And in SD3.5 Medium, BF16 TensorRT provides a 1.7x performance increase compared with BF16 PyTorch.
    The optimized models are now available on Stability AI’s Hugging Face page.
    NVIDIA and Stability AI are also collaborating to release SD3.5 as an NVIDIA NIM microservice, making it easier for creators and developers to access and deploy the model for a wide range of applications. The NIM microservice is expected to be released in July.
    TensorRT for RTX SDK Released
    Announced at Microsoft Build — and already available as part of the new Windows ML framework in preview — TensorRT for RTX is now available as a standalone SDK for developers.
    Previously, developers needed to pre-generate and package TensorRT engines for each class of GPU — a process that would yield GPU-specific optimizations but required significant time.
    With the new version of TensorRT, developers can create a generic TensorRT engine that’s optimized on device in seconds. This JIT compilation approach can be done in the background during installation or when they first use the feature.
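    The build-once-on-first-use idea behind this JIT approach can be sketched generically. This is not the TensorRT API — just an illustration, with a hypothetical build_dummy_engine stand-in, of caching an expensive per-device build so it happens only the first time a feature is used:

        # Generic JIT engine cache: build on first request for a device, reuse after.
        _engine_cache = {}

        def get_engine(device_id, build_fn):
            """Return the engine for this device, building it only on first use."""
            if device_id not in _engine_cache:
                _engine_cache[device_id] = build_fn(device_id)  # expensive, runs once
            return _engine_cache[device_id]

        def build_dummy_engine(device_id):
            # Stand-in for a real on-device optimization step.
            print(f"building engine for device {device_id}")
            return {"device": device_id, "optimized": True}

        e1 = get_engine(0, build_dummy_engine)  # prints: building engine for device 0
        e2 = get_engine(0, build_dummy_engine)  # no print: cached engine reused
        assert e1 is e2

    In the real SDK the expensive step is on-device engine optimization; the caching pattern is what lets it run in the background during installation or on first use.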
    The easy-to-integrate SDK is now 8x smaller and can be invoked through Windows ML — Microsoft’s new AI inference backend in Windows. Developers can download the new standalone SDK from the NVIDIA Developer page or test it in the Windows ML preview.
    For more details, read this NVIDIA technical blog and this Microsoft Build recap.
    Join NVIDIA at GTC Paris
    At NVIDIA GTC Paris at VivaTech — Europe’s biggest startup and tech event — NVIDIA founder and CEO Jensen Huang yesterday delivered a keynote address on the latest breakthroughs in cloud AI infrastructure, agentic AI and physical AI. Watch a replay.
    GTC Paris runs through Thursday, June 12, with hands-on demos and sessions led by industry leaders. Whether attending in person or joining online, there’s still plenty to explore at the event.
    Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations. 
    Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.
    Follow NVIDIA Workstation on LinkedIn and X. 
    See notice regarding software product information.
    #nvidia #tensorrt #boosts #stable #diffusion
    NVIDIA TensorRT Boosts Stable Diffusion 3.5 Performance on NVIDIA GeForce RTX and RTX PRO GPUs
    Generative AI has reshaped how people create, imagine and interact with digital content. As AI models continue to grow in capability and complexity, they require more VRAM, or video random access memory. The base Stable Diffusion 3.5 Large model, for example, uses over 18GB of VRAM — limiting the number of systems that can run it well. By applying quantization to the model, noncritical layers can be removed or run with lower precision. NVIDIA GeForce RTX 40 Series and the Ada Lovelace generation of NVIDIA RTX PRO GPUs support FP8 quantization to help run these quantized models, and the latest-generation NVIDIA Blackwell GPUs also add support for FP4. NVIDIA collaborated with Stability AI to quantize its latest model, Stable Diffusion3.5 Large, to FP8 — reducing VRAM consumption by 40%. Further optimizations to SD3.5 Large and Medium with the NVIDIA TensorRT software development kitdouble performance. In addition, TensorRT has been reimagined for RTX AI PCs, combining its industry-leading performance with just-in-time, on-device engine building and an 8x smaller package size for seamless AI deployment to more than 100 million RTX AI PCs. TensorRT for RTX is now available as a standalone SDK for developers. RTX-Accelerated AI NVIDIA and Stability AI are boosting the performance and reducing the VRAM requirements of Stable Diffusion 3.5, one of the world’s most popular AI image models. With NVIDIA TensorRT acceleration and quantization, users can now generate and edit images faster and more efficiently on NVIDIA RTX GPUs. Stable Diffusion 3.5 quantized FP8generates images in half the time with similar quality as FP16. Prompt: A serene mountain lake at sunrise, crystal clear water reflecting snow-capped peaks, lush pine trees along the shore, soft morning mist, photorealistic, vibrant colors, high resolution. To address the VRAM limitations of SD3.5 Large, the model was quantized with TensorRT to FP8, reducing the VRAM requirement by 40% to 11GB. 
This means five GeForce RTX 50 Series GPUs can run the model from memory instead of just one. SD3.5 Large and Medium models were also optimized with TensorRT, an AI backend for taking full advantage of Tensor Cores. TensorRT optimizes a model’s weights and graph — the instructions on how to run a model — specifically for RTX GPUs. FP8 TensorRT delivers a 2.3x performance boost on SD3.5 Large compared with running the original models in BF16 PyTorch, while using 40% less memory. And on SD3.5 Medium, BF16 TensorRT provides a 1.7x performance increase compared with BF16 PyTorch. The optimized models are now available on Stability AI’s Hugging Face page. NVIDIA and Stability AI are also collaborating to release SD3.5 as an NVIDIA NIM microservice, making it easier for creators and developers to access and deploy the model for a wide range of applications. The NIM microservice is expected to be released in July.
TensorRT for RTX SDK Released
Announced at Microsoft Build — and already available as part of the new Windows ML framework in preview — TensorRT for RTX is now available as a standalone SDK for developers. Previously, developers needed to pre-generate and package TensorRT engines for each class of GPU — a process that would yield GPU-specific optimizations but required significant time. With the new version of TensorRT, developers can create a generic TensorRT engine that’s optimized on device in seconds. This JIT compilation approach can run in the background during installation or when the feature is first used. The easy-to-integrate SDK is now 8x smaller and can be invoked through Windows ML — Microsoft’s new AI inference backend in Windows. Developers can download the new standalone SDK from the NVIDIA Developer page or test it in the Windows ML preview. For more details, read this NVIDIA technical blog and this Microsoft Build recap.
Join NVIDIA at GTC Paris
At NVIDIA GTC Paris at VivaTech — Europe’s biggest startup and tech event — NVIDIA founder and CEO Jensen Huang delivered a keynote address on the latest breakthroughs in cloud AI infrastructure, agentic AI and physical AI. Watch a replay. GTC Paris runs through Thursday, June 12, with hands-on demos and sessions led by industry leaders. Whether attending in person or joining online, there’s still plenty to explore at the event.
Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations. Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter. Follow NVIDIA Workstation on LinkedIn and X. See notice regarding software product information.
  • How to Implement Insertion Sort in Java: Step-by-Step Guide

    Posted on: June 13, 2025
    By Tech World Times


    Sorting is important in programming. It helps organize data. Sorting improves performance in searching, analysis, and reporting. There are many sorting algorithms. One of the simplest is Insertion Sort.
    In this article, we will learn how to implement Insertion Sort in Java. We will explain each step in simple words. You will see examples and understand how it works.
    What Is Insertion Sort?
    Insertion Sort is a simple sorting algorithm. It works the way you sort playing cards: you take one card at a time and place it in the right position. It compares the current element with those before it. If needed, it shifts elements to the right. Then it inserts the current element at the correct place.
    How Insertion Sort Works
    Let’s understand with a small list:
    Example List: [8, 3, 5, 1]
    Steps:

    The first element (8) is already sorted.
    Compare 3 with 8. Move 8 right. Insert 3 before it → [3, 8, 5, 1]
    Compare 5 with 8. Move 8 right. Insert 5 after 3 → [3, 5, 8, 1]
    Compare 1 with 8, 5, 3. Move them right. Insert 1 at the start → [1, 3, 5, 8]
    Now the list is sorted!
    Why Use Insertion Sort?
    Insertion Sort is simple and easy to code. It works well for:

    Small datasets
    Nearly sorted lists
    Educational purposes and practice

    However, it is not good for large datasets. It has a worst-case time complexity of O(n²).
    Time Complexity of Insertion Sort

    Best Case (already sorted): O(n)
    Average Case: O(n²)
    Worst Case (reversed list): O(n²)
    It performs fewer steps in nearly sorted data.
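The gap between the best and worst case is easy to see by counting how many elements get shifted. This sketch uses the same loop as the tutorial's sort, with a counter added (the class and method names are illustrative, not part of the tutorial's code):

```java
public class InsertionSortCounts {
    // Insertion sort, but returning how many elements were shifted.
    public static int shifts(int[] arr) {
        int count = 0;
        for (int i = 1; i < arr.length; i++) {
            int key = arr[i];
            int j = i - 1;
            while (j >= 0 && arr[j] > key) {
                arr[j + 1] = arr[j]; // shift one element right
                j = j - 1;
                count++;
            }
            arr[j + 1] = key;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(shifts(new int[]{1, 2, 3, 4, 5})); // already sorted: 0 shifts
        System.out.println(shifts(new int[]{5, 4, 3, 2, 1})); // reversed: 10 shifts, i.e. n(n-1)/2
    }
}
```

For n = 5 the reversed list needs 4 + 3 + 2 + 1 = 10 shifts, which is where the O(n²) worst case comes from.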
    How to Implement Insertion Sort in Java
    Now let’s write the code for Insertion Sort in Java. We will explain each part.
    Step 1: Define a Class
    public class InsertionSortExample {
        // Code goes here
    }

    We create a class named InsertionSortExample.
    Step 2: Create the Sorting Method
    public static void insertionSort(int[] arr) {
        int n = arr.length;
        for (int i = 1; i < n; i++) {
            int key = arr[i];
            int j = i - 1;

            while (j >= 0 && arr[j] > key) {
                arr[j + 1] = arr[j];
                j = j - 1;
            }
            arr[j + 1] = key;
        }
    }

    Let’s break it down:

    arr[i] is the current value (the key).
    j starts from the previous index.
    While arr[j] > key, shift arr[j] to the right.
    Insert the key at the correct position.

    This logic sorts the array step by step.
    Step 3: Create the Main Method
    Now we test the code.
    public static void main(String[] args) {
        int[] numbers = {9, 5, 1, 4, 3};

        System.out.println("Before sorting:");
        printArray(numbers);

        insertionSort(numbers);

        System.out.println("After sorting:");
        printArray(numbers);
    }

    This method:

    Creates an array of numbers
    Prints the array before sorting
    Calls the sort method
    Prints the array after sorting

    Step 4: Print the Array
    Let’s add a helper method to print the array.
    public static void printArray(int[] arr) {
        for (int number : arr) {
            System.out.print(number + " ");
        }
        System.out.println();
    }

    Now you can see how the array changes before and after sorting.
    Full Code Example
    public class InsertionSortExample {

        public static void insertionSort(int[] arr) {
            int n = arr.length;
            for (int i = 1; i < n; i++) {
                int key = arr[i];
                int j = i - 1;

                while (j >= 0 && arr[j] > key) {
                    arr[j + 1] = arr[j];
                    j = j - 1;
                }
                arr[j + 1] = key;
            }
        }

        public static void printArray(int[] arr) {
            for (int number : arr) {
                System.out.print(number + " ");
            }
            System.out.println();
        }

        public static void main(String[] args) {
            int[] numbers = {9, 5, 1, 4, 3};

            System.out.println("Before sorting:");
            printArray(numbers);

            insertionSort(numbers);

            System.out.println("After sorting:");
            printArray(numbers);
        }
    }

    Sample Output
    Before sorting:
    9 5 1 4 3
    After sorting:
    1 3 4 5 9

    This confirms that the sorting works correctly.
    Advantages of Insertion Sort in Java

    Easy to implement
    Works well with small inputs
    Stable sort (keeps equal items in order)
    Good for educational use

    When Not to Use Insertion Sort
    Avoid Insertion Sort when:

    The dataset is large
    Performance is critical
    Better algorithms like Merge Sort or Quick Sort are available

    Real-World Uses

    Sorting small records in a database
    Teaching algorithm basics
    Handling partially sorted arrays

    Even though it is not the fastest, it is useful in many simple tasks.
    Final Tips

    Practice with different inputs
    Add print statements to see how it works
    Try sorting strings or objects
    Use Java’s built-in sort methods for large arrays
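For the last tip, Java's standard library already provides java.util.Arrays.sort, which handles both primitives and objects. A short sketch:

```java
import java.util.Arrays;

public class BuiltInSortDemo {
    public static void main(String[] args) {
        int[] numbers = {9, 5, 1, 4, 3};
        Arrays.sort(numbers); // tuned quicksort for primitive arrays
        System.out.println(Arrays.toString(numbers)); // [1, 3, 4, 5, 9]

        String[] words = {"pear", "apple", "mango"};
        Arrays.sort(words); // stable merge sort (TimSort) for object arrays, natural ordering
        System.out.println(Arrays.toString(words)); // [apple, mango, pear]
    }
}
```

This is the practical choice for large arrays, while hand-written Insertion Sort remains great for learning and for small or nearly sorted inputs.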

    Conclusion
    Insertion Sort in Java is a great way to learn sorting. It is simple and easy to understand. In this guide, we showed how to implement it step-by-step. We covered the logic, code, and output. We also explained when to use it. Now you can try it yourself. Understanding sorting helps in coding interviews and software development. Keep practicing and exploring other sorting methods too. The more you practice, the better you understand algorithms.
    Tech World Times (TWT) is a global collective focusing on the latest tech news and trends in blockchain, Fintech, Development & Testing, AI and Startups. If you are looking for a guest post, contact techworldtimes@gmail.com.
  • NOOBS ARE COMING (Demo) [Free] [Action] [Windows] [Linux]

    SirCozyCrow (5 hours ago): The sound track is PEAK! I loved playing this, and my partner who normally doesn't play games like this one had a good time as well. I enjoyed the learning curve and I can't wait to play the harder difficulties. Here's a video I made, my partner jumped in for a few minutes as well.
    so fun
    Drew.a.Chain (1 day ago): Very addictive!
    Trashpanda119 (1 day ago): love the playstyle and the art style definitly fun to play plus the music is the cherry on top
    AhoOppai (1 day ago): really fun game cant wait for the full game
    Din Xavier coding (1 day ago): I chose the laser eye. How do I turn the attack around? Can I even do that?
    overboy (1 day ago): Hey, the laser eye gets a random direction at the start of each wave, it's one of the specificities of this attack ;)
    Fort Kenmei (1 day ago): Gameplay and Critique ;)
    overboy (1 day ago): Thanks a lot for the awesome video and the feedback! :)
    TLGaby (2 days ago): Just to know, browser progress keeps getting reset.
    overboy (1 day ago): Thanks for the report! Could it be due to some of your browser settings? Unfortunately, browser-based games can't always guarantee reliable local saves due to how browsers handle storage. To avoid this in the future, I recommend trying the downloadable version of the demo, it provides a more stable environment for saving progress. :)
    epic.
    oleekconder (2 days ago): Very nice. Spent a couple hours easy =) UPD: And some more
    MaximusR (3 days ago): It's a game I already played back when it had fewer features, and now that it's updated I'd like to record it again.
    EPIC: love the spiders ♥
    nineGardens (3 days ago): Okay so.... tried out a few things, and some Dev suggestions to report:
    Bigfoot is such a cool idea, and running around at that speed with like.... all THAT going on just gave me motion sickness. Summoner is hysterical fun. All hail spiders. Tomatoes are pretty fun too. The Adept is so cool in theory, but... once you have the right build it's a bit of a "standing still simulator". Also, if you have totems or other turrets, there's very much the question each round of "Will my circle spawn NEAR the totems, or far from them?" I kind of wonder if the mage circle should like... fizzle out after 20 seconds and appear somewhere else. Just... something to give a bit more dynamism, and to make the original spawn point less critical.
    Added thoughts: Watering psychotic tomatoes feels great. Being a malevolent spider with 8 arms feels amazing. Feels very good and natural. "Orbital" is one of the greatest and most fun abilities in the game. I would take this even without the damage boost. Lots of fun, but also very silly. Good job.
    dave9999 (3 days ago): with some size you can kick the totems around to reposition them towards your circle, it benefits them too. Adept can choose the wand at the start and with it you have no sustain problem anyway, whatever build you want to set up.
    nineGardens (3 days ago): Oh damn- only just found out you can kick the totems! Okay, yeah in this case all is well. Or at least.... I still think a moving circle could be cool, but the fact that you can move your totems over to where the circle is makes things much better.
    just get enough amount+size and they hit everything, bounce is overkill
    Lost track of time 10 hours in and still hooked. Absolutely love it! Can't wait for the full release
    DriftedVoid (4 days ago): Pretty good!
    Indyot (4 days ago): It's a pretty addictive game, congrats! I lowkey missed a bit of satisfaction on the weapons though.
    Congrats on the game! I really like the weapons that you interact with, which gives it a fun spin.
    1Soultaken (4 days ago): Anyone know good combos for the items?
    dave9999 (4 days ago): lasers plus amount + adept, some arcane for basic damage. Totems + amount + bounce + adept, optional size and arcane, you can stand still in the end. All shovels with crit and strength, their extra souls help you snowball hard and easy, probably the most straightforward and stable very good build. You can beat the game with nearly anything, it's well balanced, but this one is very strong and easy. Soul flask and more chests are near always must-picks, the high luck value ones give you better items. The free reroll is a must-pick. Lightning dagger is somewhat unique as it can carry you the entire early game even if you do not get enough element damage.
    dave9999 (8 days ago): underestimated totems
    limey (8 days ago): i like how you made like MULTITUDES of updates on this so like as soon as i check my feed its just this
    dave9999 (8 days ago): my best run so far, there's a hidden mechanic that makes weapons you have more likely to drop?
    overboy (8 days ago): Lmao, awesome — looks like a really fun build to play! Yeah, Shop RNG uses a lot of hidden tricks to help you find relevant attacks, while still allowing unrelated ones to appear. That way, you can discover unique builds and experiment freely!
    overboy (8 days ago): Thank you so much for the incredible reception of the web demo on Itch, and to everyone who wishlisted the game! Many of the changes—along with much more to come in future updates—are directly based on your feedback here and on the game’s Discord.

    I’m also excited to announce that the game will release on Steam on 8 July 2025!
    Demo - Update 35
    Singleplayer UI: Level Up Upgrade Phase and Chest Pickup Phase UI now display the items and attacks inventories
    Singleplayer Shop: subtle animation while selecting a Buy Button
    Many balancing tweaks
    Balancing: nerfed Life Steal in various ways
    Balancing: nerfed Knockback in various ways
    Balancing: too many items enhancing HP Max were in the Demo, which made it easier to get a lot of HP and survive due to a higher ratio of items providing HP
    Added a subtle duration during which the player can still pick up Souls even if they’re slurped by the Soul Portal
    Fine-tuned the color of some weapons to improve visibility
    Balancing: Ballistas don’t double their projectiles based on amount anymore
    If Player HP is full and HP Max > 20, the player can’t be one-shot
    Bugfix: in-game achievement pop-up could be displayed below other UI elements when it should always be above everything else
    Potential bugfix for a rare Multiplayer shop bug where player 2’s shop sections weren’t displayed at all
    Reworked the save system in preparation for upcoming features
    xHELLO_WORLDx (10 days ago): congrats on the game
    dave9999 (10 days ago):
    elijah_ap (10 days ago): Love the art style, upgrades, controls, etc. Balance might be the only thing off about this. If you were to add anything, I would want to see more variety in the stages, similar to Vampire Survivor. Otherwise- really great.
    overboy: Thank you so much! I’ll keep working on the balance with each update, and I appreciate the suggestion on stage variety!
    Netsmile (10 days ago): Torch IV has a problem rounding numbers in the stats hover-over display. Other levels of torches work.
    overboy (10 days ago): Thanks, I'll fix this displayed rounding number issue soon!
    Skeppartorsk (10 days ago): For now I'd say it's fun, but lacking a bit in balance. I absolutely suck at brotatolikes, but find this one easy, so it's probably undertuned as far as difficulty is concerned. The power and availability of HP and regen items makes you just literally not care if you get hit. Then the relatively strong armor on top and you're just too tanky for anything to feasibly ever kill you.
    overboy (10 days ago): Thanks for the feedback! Sounds like tanky builds might be a bit too forgiving right now, i'll do some balancing changes
    Skeppartorsk (9 days ago): Life steal has similar issues too. There's also the standard issue with knockback in these kinds of games. The lack of any enemy resistance/diminishing returns means it's way too easy to get enough knockback that enemies cannot touch you anymore. Ranged attacks are too few and far between to worry about with the current levels of sustain. Meaning you can just Stand Still and Kill way too reliably. Edit: Lategame with 6x Wands I'm getting so much screen shake it's triggering simulation sickness. It was due to having Pierce + Bounce. The screen shake from my projectiles bouncing off the edge of the map.
    overboy (8 days ago): thanks for your feedback, it will help for the game balancing! For now I try to avoid diminishing returns by design to make sure each feature and stat is super easy to understand, because I dislike when roguelikes get too opaque. I prefer that the player fully and easily understand each of their choices, but yeah, that involves a good balance to find! In future updates, Life Steal will become harder to get, and Knockback will be capped at lower maximum applied values. Regarding the overall difficulty, the full version has 3 extra levels of difficulty, and based on some feedback I have from beta testers, the balance between the 5 difficulty modes seems to be close to what I'm aiming for. There is already an option to disable screenshakes ;) Edit: Would you be interested to join the beta-test of the full game? If so please join the Discord and ping me in DM ;)
    Skeppartorsk (8 days ago): I did notice that you could turn off screen shake entirely. But admittedly a lot of the visceral feel of the combat goes away when you fully disable the screen shake. But when you have too many Leeroy/knockback projectiles/bouncing projectiles, it just reaches the point where simulation sickness sets in. Wish there was something like an intensity setting, or a way for it to cap out at how often a screen shake can get triggered. I agree on the opaque thing. But I was more thinking something akin to how CC Diminishing Returns works in WoW, where 1st hit = full value, 2nd hit within 10s = half value, 3rd hit = 1/4 value, then 10s of immunity before it resets. That way you still get knockback when you pick knockback, but you can't just perma nail enemies against the wall. Edit: Also there's a wording issue with how multiple pentagrams work. If you have the adept pentagram and the item pentagram, the wording is "when you stand inside a pentagram", but the item one gives the 20% damage ONLY and the adept one gives the adept bonuses ONLY. The wording would mean that both pentagrams should give the adept bonus AND the 20% damage bonus. Edit2: I'd suggest reformatting Grimorius' tooltip so that the -10% armor is above the "on level up" portion. The indentation difference between the +1% speed and -10% armor is small enough that I read it as losing 10% armor on every level up.
    overboy (8 days ago): Thanks a lot for the interesting insights! I nerfed HP, Lifesteal and Knockback using various techniques in the last update, along with many other changes. Just tested Pentagram/Adept and it works as expected: the 2 effects stack correctly as the wording implied. I reformatted Grimorius' tooltip as you suggested ;)
    View more in thread
    Bad Piggy (11 days ago): Very cool in its current state. I love how much it really emphasises movement, like how some active abilities need to be grabbed from around the arena to do them. That said, I think enemy projectiles could honestly stand out more. I could hardly see them at times in all the chaos. Still, I think this is a pretty solid base right now, and as always, you have a beautiful visual style, though I feel like the game suffers a little from how busy it can get. Great stuff so far though.
    overboy: Thanks Bad Piggy! Really glad you're enjoying the mechanics. I appreciate the feedback on projectile visibility and how busy things can get. I'll definitely look into ways to improve those aspects. Really grateful for the kind words and thoughtful feedback!
    LeoLohandro (11 days ago): A copy of the brotato), but still fun.
    overboy (11 days ago): Hey thanks a lot!
Yes this game is a Brotato-like with many twists and new innovative mechanics, such as:- Equippable Boss Patterns- Minion Summoning- Growing Plant Minions with a watercan- Amount and Size stats - Physics-Based Weapons – like chained spikeballs- Kickable stuff- Playable character merge feature- Dozens and dozens of unique effectsI'm aiming for something like The Binding of Isaac meets Brotato — a deep, replayable experience full of chaotic synergies and wild builds that feel totally unique each run, with all the "being a boss fantasy and humor" deeply included in the mechanics and content :)Reply
    #noobs #are #coming #demo #free
    NOOBS ARE COMING (Demo) [Free] [Action] [Windows] [Linux]
    SirCozyCrow5 hours agoThe sound track is PEAK! I loved playing this, and my partner who normally doesn't play games like this one had a good time as well. I enjoyed the learning curve and I can't wait to play the harder difficulties.Here's a video I made, my partner jumped in for a few minutes as well.Replyso funReplyDrew.a.Chain1 day agoVery addictive!ReplyTrashpanda1191 day agolove the playstyle and the art style definitly fun to play plus the music is the cherry on topReplyAhoOppai1 day agoreally fun game cant wait for the full gameReplyDin Xavier coding1 day agoI chose the laser eye. How do I turn the attack around? Can I even do that?Replyoverboy1 day agoHey, the laser eye gets a random direction at the start of each wave, it's one of the specificities of this attack ;)ReplyFort Kenmei1 day agoGameplay and Critique ;)Replyoverboy1 day agoThanks a lot for the awesome video and the feedback! :)ReplyTLGaby2 days agoJust to know browser progress keep getting reset.Replyoverboy1 day agoThanks for the report! Could it be due to some of your browser settings?Unfortunately, browser-based games can't always guarantee reliable local saves due to how browsers handle storage.To avoid this in the future, I recommend trying the downloadable version of the demo,  it provides a more stable environment for saving progress. :)Replyepic.Replyoleekconder2 days agoVery nice. Spent couple hours easy=) UPD: And some moreReplyMaximusR3 days agoes un juego que ya jugue en su momento cuando tenias menos cosas y ahora que esta actualizado quisiera grabarlo otra vezReplyEPIClove the spiders ♥ReplynineGardens3 days agoOkay so.... tried out a few things, and some Dev suggestions to report: Bigfoot is such a cool idea, and running around at that speed with like.... all THAT going on just gave me motion sickness.Summoner is hysterical fun. All hail spiders. Tomatoe's are pretty fun too.The Adept is so cool in theory, but... 
once you have the right build is a bit of a "standing still simulator"  Also, if you have totoms or other turrets, there's very much the question each round of "Will my circle spawn NEAR the totoms , or far from them "   I kind of wonder if the mage circle should like... fizzle out after 20 seconds and appear somewhere else. Just... something to give a bit more dynamism, and to make the original spawn point less critical.Okay: added thoughts:Watering psycotic tomatoes feels great.Being a malevolent spider with 8 arms feels amazing. Feels very good and natural."Orbital" is one of the greatest and most fun abilities in the game.  I would take this even without the damage boost.Lots of fun, but also very silly. Good job.Replydave99993 days agowith some size you can kick the totems around to reposition them towards your circle, it benefits them too, adept can choose the wand at the start and with it you have no sustain problem anyway whatever build you want to set upReplynineGardens3 days agoOh damn- only just found out you can kick the totems!Okay, yeah in this case all is well. Or at least.... I still think a moving circle could be cool, but the fact that you can move your totems over to where the circle is makes things much better.Replyjust get enough amount+size and they hit everything, bounce is overkill ReplyLost track of time 10 hours in and still hooked. Absolutely love it! Can't wait for the full releaseReplyDriftedVoid4 days agoPretty good! ReplyIndyot4 days agoIt's a pretty addictive game, congrats! I lowkey missed a bit of satisfaction on the weapons though.ReplyCongrats on the game! 
I really like the weapons that you interact with which gives it a fun spin.Reply1Soultaken4 days agoAnyone know good combos for the items?Replydave99994 days agolasers plus amount+adept some arcane for basic dmgtotems +amount+ bounce+adept optional size and arcane you can stand still in the endall shovels with crit, strength their extra souls help you snowball hard and easy probably the most straightforward and stable very good build you can beat the game with nearly anything its well balanced but this one is very strong and easy soul flask, more chests are near always must pick, the high luck value ones give you better items the free reroll is a must pick, lightning dagger is somewhat unique as it  can carry you the entire early game even if you do not get enough element damageReplydave99998 days agounderestimated totems Replylimey8 days agoi like how you made like MULTITUDES of updates on this so like as soon as i check my feed its just thisReplydave99998 days agomy best run so far,  there s a hidden mechanic that  makes weapons  you have more likely to drop?Replyoverboy8 days agoLmao, awesome — looks like a really fun build to play! Yeah, Shop RNG uses a lot of hidden tricks to help you find relevant attacks, while still allowing unrelated ones to appear. That way, you can discover unique builds and experiment freely!Replyoverboy8 days agoThank you so much for the incredible reception of the web demo on Itch, and to everyone who wishlisted the game! Many of the changes—along with much more to come in future updates—are directly based on your feedback here and on the game’s Discord. I’m also excited to announce that the game will release on Steam on 8 July 2025! 
Demo - Update 35Singleplayer UI: Level Up Upgrade Phase and Chest Pickup Phase UI now display the items and attacks inventoriesSingleplayer Shop: subtle animation while selecting a Buy Button Many Balancing tweaks Balancing: nerfed Life Steal in various waysBalancing: nerfed Knockback in various waysBalancing: too much items enhancing HP Max were put in the Demo, this means it was easier to get a lot of HP and to survive in the Demo due to higher ratio of items providing HP Added a subtle duration during which the player can still pickup Souls even if they’re slurped by the Soul Portal Fine tuned the color of some weapons to improve the visibility Balancing: Ballista don’t double their projectiles based on amount anymoreIf Player HP is Full and HP Max > 20, the player can’t be one-shot Bugfix: in-game achievement pop up could be displayed below other UI elements while it should always be above everything else Potential Bugfix for a rare bug happening in Multiplayer shop where player2 Shop sections wasn’t displayed at allRework the save system in preparation for upcoming features ReplyxHELLO_WORLDx10 days agocontracts on the gameReplydave999910 days agoelijah_ap10 days agoLove the art style, upgrades, controls, etc. Balance might be the only thing off about this. If you were to add anything, I would want to see more variety in the stages, similar to Vampire Survivor. Otherwise- really great.ReplyThank you so much! I’ll keep working on the balance with each update, and I appreciate the suggestion on stage variety!ReplyNetsmile10 days agoTorch IV has a problem rounding numbers in the stats hover over display. Other levels of torches workReplyoverboy10 days agoThanks, I'll fix this displayed rounding number issue soon!ReplySkeppartorsk10 days agoFor now I'd say it's fun, but lacking a bit in balance. I absolutely suck at brotatolikes. But find this one easy, so it's probably undertuned as far as difficulty is concerned. 
The power and availability of HP and regen items, makes you just literally not care if you get hit. Then the relatively strong armor on top and you're just too tanky for anything to feasibly ever kill you.Replyoverboy10 days agoThanks for the feedback! Sounds like tanky builds might be a bit too forgiving right now, i'll do some balancing changesReplySkeppartorsk9 days agoLife steal has similar issues too. There's also the standard issue with knockback in these kinds of games. The lack of any enemy resistance/diminishing returns, means it's way too easy to get enough knockback that enemies cannot touch you anymore. Ranged attacks are too few and far between to worry about with the current levels of sustain. Meaning you can just Stand Still and Kill way too realiably. Edit: Lategame with 6x Wands I'm getting so much screen shake it's triggering simulation sickness. It was due to having Pierce + Bounce. The screen shake from my projectiles bouncing off the edge of the map.Replyoverboy8 days agothanks for your feedback, it will help for the game balancing!For now I try to avoid diminishing returns by design to make sure each feature and stat is super easy to understand because I dislike when roguelike gets too opaque, I prefer that the player fully and easily undestand each of its choices, but yeah that involves a good balance to find!In future updates, Life Steal will become harder to get, Knockback will be capped at lower maximum applied values.Regarding the overall difficulty, the full version has 3 extra level of difficulties, and based on some feedbacks i have from beta testers, the balance between the 5 difficulty modes seem to be close to what i'm aiming forThere is already an option to disable screenshakes ;)Edit: Would you be interested to join the beta-test of the full game? If so please join the Discord and ping me in DM ;)ReplySkeppartorsk8 days agoI did notice that you could turn off screen shake entirely. 
But admittedly a lot of the visceral feel of the combat goes away when you fully disable the screen shake. But when you have too many Leeroy/knockback projectiles/bouncing projectiles. It just reaches the point where simulation sickness sets in. Wish there was something like an intensity setting, or a way for it to cap out at how often a screen shake can get triggered. I agree on the opaque thing. But I was more thinking something akin to how CC Diminishing Returns works in WoW. Where 1st hit = full value, 2nd hit within 10s = half value, 3rd hit = 1/4 value. Then 10s of immunity before it resets. That way you still get knockback when you pick knockback. But you can't just perma nail enemies against the wall. Edit: Also there's a wording issuewith how multiple pentagrams work. If you have adept pentagram and the item pentagram the wording is "when you stand inside a pentagram" But the item one gives the 20% damage ONLY and the adept one gives the adept bonuses ONLY. The wording would mean that both pentagrams should give adept bonus AND 20% damage bonus.Edit2: I'd suggest reformatting Grimorius tooltip so that the -10% armor is above the "on level up"portion. The indentation difference between the +1% speed and -10% armor is small enough that I read it as losing 10% armor on every level up.Replyoverboy8 days agoThanks a lot for the interesting insights!I nerfed HP, Lifesteal and Knockback using various techniques in the last update, along with many other changes.Just tested Pentagram/Adept and it works as expected: the 2 effects stack correctly as the wording impliedI reformatted Grimorius tooltip as you suggested ;)ReplyView more in threadBad Piggy11 days agoVery cool in it's current state. I love how much it really emphasises movement like how some active abilities need to be grabbed from around the arena to do themThat said, I think enemy projectiles could honestly stand out more. 
I could hardly see them at times in all the chaos.Still, I think this is a pretty solid base right now, and as always, you have a beautiful visual style, though I feel like the game suffers a little from how busy it can get. Great stuff so far thoughReplyThanks Bad Piggy! Really glad you’re enjoying the mechanics. I appreciate the feedback on projectile visibility and how busy things can get. I’ll definitely look into ways to improve those aspects. Really grateful for the kind words and thoughtful feedback!ReplyLeoLohandro11 days agoA copy of the brotato), but still fun.Replyoverboy11 days agoHey thanks a lot! Yes this game is a Brotato-like with many twists and new innovative mechanics, such as:- Equippable Boss Patterns- Minion Summoning- Growing Plant Minions with a watercan- Amount and Size stats - Physics-Based Weapons – like chained spikeballs- Kickable stuff- Playable character merge feature- Dozens and dozens of unique effectsI'm aiming for something like The Binding of Isaac meets Brotato — a deep, replayable experience full of chaotic synergies and wild builds that feel totally unique each run, with all the "being a boss fantasy and humor" deeply included in the mechanics and content :)Reply #noobs #are #coming #demo #free
    OVERBOY.ITCH.IO
    NOOBS ARE COMING (Demo) [Free] [Action] [Windows] [Linux]
SirCozyCrow (5 hours ago)
The soundtrack is PEAK! I loved playing this, and my partner, who normally doesn't play games like this one, had a good time as well. I enjoyed the learning curve and I can't wait to play the harder difficulties. Here's a video I made; my partner jumped in for a few minutes as well.

so fun

Drew.a.Chain (1 day ago)
Very addictive!

Trashpanda119 (1 day ago)
Love the playstyle and the art style, definitely fun to play, plus the music is the cherry on top.

AhoOppai (1 day ago)
Really fun game, can't wait for the full game.

Din Xavier coding (1 day ago)
I chose the laser eye. How do I turn the attack around? Can I even do that?

overboy (1 day ago)
Hey, the laser eye gets a random direction at the start of each wave; it's one of the specificities of this attack ;)

Fort Kenmei (1 day ago)
Gameplay and Critique ;)

overboy (1 day ago)
Thanks a lot for the awesome video and the feedback! :)

TLGaby (2 days ago)
Just so you know, browser progress keeps getting reset.

overboy (1 day ago)
Thanks for the report! Could it be due to some of your browser settings? Unfortunately, browser-based games can't always guarantee reliable local saves due to how browsers handle storage. To avoid this in the future, I recommend trying the downloadable version of the demo; it provides a more stable environment for saving progress. :)

epic.

oleekconder (2 days ago)
Very nice. Spent a couple hours easy =) UPD: And some more.

MaximusR (3 days ago)
It's a game I already played back when it had fewer things, and now that it's updated I'd like to record it again.

EPIC, love the spiders ♥

nineGardens (3 days ago)
Okay so... tried out a few things, and some dev suggestions to report: Bigfoot is such a cool idea, and running around at that speed with, like... all THAT going on just gave me motion sickness. Summoner is hysterical fun. All hail spiders. Tomatoes are pretty fun too. The Adept is so cool in theory, but once you have the right build it's a bit of a "standing still simulator". Also, if you have totems or other turrets, there's very much the question each round of "Will my circle spawn NEAR the totems (instant win), or far from them (oh no)?" I kind of wonder if the mage circle should, like... fizzle out after 20 seconds and appear somewhere else. Just... something to give a bit more dynamism, and to make the original spawn point less critical.
Added thoughts: Watering psychotic tomatoes feels great. Being a malevolent spider with 8 arms feels amazing, very good and natural. "Orbital" is one of the greatest and most fun abilities in the game; I would take this even without the damage boost. Lots of fun, but also very silly. Good job.

dave9999 (3 days ago)
With some size you can kick the totems around to reposition them towards your circle, and it benefits them too. Adept can choose the wand at the start, and with it you have no sustain problem anyway, whatever build you want to set up.

nineGardens (3 days ago)
Oh damn, I only just found out you can kick the totems! Okay, yeah, in this case all is well. Or at least... I still think a moving circle could be cool, but the fact that you can move your totems over to where the circle is makes things much better.

Just get enough Amount + Size and they hit everything; Bounce is overkill.

Lost track of time, 10 hours in and still hooked. Absolutely love it! Can't wait for the full release.

DriftedVoid (4 days ago)
Pretty good!

Indyot (4 days ago)
It's a pretty addictive game, congrats! I lowkey missed a bit of satisfaction on the weapons though.

Congrats on the game! I really like the weapons that you interact with (i.e. the spike ball), which gives it a fun spin.

1Soultaken (4 days ago)
Anyone know good combos for the items? (I just pick randomly.)

dave9999 (4 days ago)
Lasers plus Amount plus Adept, with some arcane for base damage (it's unstable to set up, and only overboy starts with one). Totems plus Amount, Bounce and Adept, with optional Size and arcane; you can stand still in the end. All shovels with crit and strength; their extra souls help you snowball hard and easy, probably the most straightforward and stable very good build. You can beat the game with nearly anything, it's well balanced, but this one is very strong and easy (I realized in the end that all the Size was wasted on this). Soul flask and "more chests" are near-always must-picks; the high-luck-value ones give you better items. The free reroll is a must-pick. The lightning dagger is somewhat unique, as it can carry you the entire early game even if you don't get enough element damage. (I understand that the more gimmicky things like pets and kickables give the game versatility, but to min-max they are not that competitive.)

dave9999 (8 days ago)
Underestimated totems.

limey (8 days ago)
I like how you made like MULTITUDES of updates on this, so like as soon as I check my feed it's just this.

dave9999 (8 days ago)
My best run so far. Is there a hidden mechanic that makes weapons you already have more likely to drop?

overboy (8 days ago)
Lmao, awesome, looks like a really fun build to play! Yeah, Shop RNG uses a lot of hidden tricks to help you find relevant attacks, while still allowing unrelated ones to appear. That way, you can discover unique builds and experiment freely!

overboy (8 days ago)
Thank you so much for the incredible reception of the web demo on Itch, and to everyone who wishlisted the game! Many of the changes, along with much more to come in future updates, are directly based on your feedback here and on the game's Discord. I'm also excited to announce that the game will release on Steam on 8 July 2025!

Demo - Update 35 (06 June 2025)
- Singleplayer UI: the Level Up Upgrade Phase and Chest Pickup Phase UI now display the items and attacks inventories (useful to check the scaling of currently equipped attacks, for example)
- Singleplayer Shop: subtle animation while selecting a Buy button
- Many balancing tweaks
- Balancing: nerfed Life Steal in various ways (lower values gained from items)
- Balancing: nerfed Knockback in various ways (lower values gained, higher item rarity, lower max applied value)
- Balancing: too many items enhancing Max HP were put in the demo, which made it easier to stack HP and survive due to the higher ratio of items providing HP
- Added a short grace period during which the player can still pick up Souls even after they're slurped by the Soul Portal
- Fine-tuned the color of some weapons to improve visibility
- Balancing: Ballistas no longer double their projectiles based on Amount (only the number of ballistas scales with Amount)
- If the player's HP is full and Max HP > 20, the player can't be one-shot
- Bugfix: the in-game achievement pop-up could be displayed below other UI elements when it should always be above everything else
- Potential bugfix for a rare Multiplayer shop bug where player 2's shop sections weren't displayed at all
- Reworked the save system in preparation for upcoming features

xHELLO_WORLDx (10 days ago)
Congrats on the game

elijah_ap (10 days ago)
Love the art style, upgrades, controls, etc. Balance might be the only thing off about this. If you were to add anything, I would want to see more variety in the stages, similar to Vampire Survivors. Otherwise, really great.

Thank you so much! I'll keep working on the balance with each update, and I appreciate the suggestion on stage variety!

Netsmile (10 days ago)
Torch IV has a problem rounding numbers in the stats hover-over display. Other levels of torches work.

overboy (10 days ago)
Thanks, I'll fix this displayed rounding number issue soon!

Skeppartorsk (10 days ago)
For now I'd say it's fun, but lacking a bit in balance. I absolutely suck at Brotato-likes but find this one easy, so it's probably undertuned as far as difficulty is concerned. The power and availability of HP and regen items makes you just literally not care if you get hit. Add the relatively strong armor on top, and you're just too tanky for anything to feasibly ever kill you.

overboy (10 days ago)
Thanks for the feedback! Sounds like tanky builds might be a bit too forgiving right now; I'll do some balancing changes.

Skeppartorsk (9 days ago)
Life steal has similar issues too. There's also the standard issue with knockback in these kinds of games: the lack of any enemy resistance or diminishing returns means it's way too easy to get enough knockback that enemies cannot touch you anymore. Ranged attacks are too few and far between to worry about with the current levels of sustain, meaning you can just Stand Still and Kill way too reliably.
Edit: Lategame with 6x Wands I'm getting so much screen shake it's triggering simulation sickness. It was due to having Pierce + Bounce: the screen shake from my projectiles bouncing off the edge of the map.

overboy (8 days ago)
Thanks for your feedback, it will help with the game balancing! For now I try to avoid diminishing returns by design, to make sure each feature and stat is super easy to understand; I dislike it when a roguelike gets too opaque, and I prefer that the player fully and easily understands each of their choices, but yeah, that involves finding a good balance! In future updates, Life Steal will become harder to get, and Knockback will be capped at lower maximum applied values. Regarding the overall difficulty, the full version has 3 extra difficulty levels, and based on feedback I have from beta testers, the balance between the 5 difficulty modes seems to be close to what I'm aiming for (minus some issues like the ones you pointed out, and of course some balancing required on specific builds and items). There is already an option to disable screen shake ;)
Edit: Would you be interested in joining the beta test of the full game? If so, please join the Discord and ping me in a DM ;)

Skeppartorsk (8 days ago)
I did notice that you could turn off screen shake entirely, but admittedly a lot of the visceral feel of the combat goes away when you fully disable it. When you have too many Leeroy/knockback/bouncing projectiles, it just reaches the point where simulation sickness sets in. I wish there was something like an intensity setting, or a way to cap how often a screen shake can get triggered.
I agree on the opaque thing, but I was thinking of something more akin to how CC diminishing returns works in WoW: 1st hit = full value, 2nd hit within 10 s = half value, 3rd hit = 1/4 value, then 10 s of immunity before it resets. That way you still get knockback when you pick knockback, but you can't just permanently nail enemies against the wall.
Edit: Also, there's a wording issue (or a bug) with how multiple pentagrams work. If you have the Adept pentagram and the item pentagram, the wording is "when you stand inside a pentagram", but the item one gives the 20% damage ONLY and the Adept one gives the Adept bonuses ONLY. The wording would mean that both pentagrams should give the Adept bonus AND the 20% damage bonus.
Edit 2: I'd suggest reformatting the Grimorius tooltip so that the -10% armor is above the "on level up" portion. The indentation difference between the +1% speed and -10% armor is small enough that I read it as losing 10% armor on every level up.

overboy (8 days ago)
Thanks a lot for the interesting insights! I nerfed HP, Lifesteal and Knockback using various techniques in the last update, along with many other changes. I just tested Pentagram/Adept and it works as expected: the 2 effects stack correctly, as the wording implied. I reformatted the Grimorius tooltip as you suggested ;)

View more in thread

Bad Piggy (11 days ago)
Very cool in its current state. I love how much it really emphasises movement, like how some active abilities need to be grabbed from around the arena to use them. That said, I think enemy projectiles could honestly stand out more; I could hardly see them at times in all the chaos. Still, I think this is a pretty solid base right now, and as always, you have a beautiful visual style, though I feel like the game suffers a little from how busy it can get. Great stuff so far though.

Thanks Bad Piggy! Really glad you're enjoying the mechanics. I appreciate the feedback on projectile visibility and how busy things can get; I'll definitely look into ways to improve those aspects. Really grateful for the kind words and thoughtful feedback!

LeoLohandro (11 days ago)
A copy of Brotato, but still fun.

overboy (11 days ago)
Hey, thanks a lot! Yes, this game is a Brotato-like with many twists and new innovative mechanics, such as:
- Equippable Boss Patterns (active skills you can trigger by picking up orbs on the map)
- Minion Summoning
- Growing Plant Minions with a watercan
- Amount and Size stats
- Physics-Based Weapons, like chained spikeballs
- Kickable stuff (you can even play soccer with your minions or other co-op players)
- Playable character merge feature (get the effects of 2 or more different characters at the same time)
- Dozens and dozens of unique effects (turning enemies into Sheep, or Golden Statues, or both?)
I'm aiming for something like The Binding of Isaac meets Brotato: a deep, replayable experience full of chaotic synergies and wild builds that feel totally unique each run, with all the "being a boss" fantasy and humor deeply included in the mechanics and content :)
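Skeppartorsk's WoW-style diminishing-returns suggestion above (first hit at full value, half within 10 s, then a quarter, then immunity) is concrete enough to prototype. A minimal Python sketch, assuming one tracker per enemy and reading the reset rule as "10 s after the last application"; the class and all names are hypothetical, not from the game:

```python
import time

# Hypothetical per-enemy tracker for WoW-style diminishing returns on
# knockback: full value on the 1st hit, 1/2 within 10 s, 1/4 on the 3rd,
# then immunity; the counter resets 10 s after the last application.
DR_WINDOW = 10.0
DR_MULTIPLIERS = [1.0, 0.5, 0.25, 0.0]  # 4th+ hit inside the window: immune

class KnockbackDR:
    def __init__(self):
        self.hits_in_window = 0
        self.last_hit_time = None

    def apply(self, base_knockback, now=None):
        now = time.monotonic() if now is None else now
        # Reset the counter once the window has fully elapsed.
        if self.last_hit_time is not None and now - self.last_hit_time >= DR_WINDOW:
            self.hits_in_window = 0
        index = min(self.hits_in_window, len(DR_MULTIPLIERS) - 1)
        self.hits_in_window += 1
        self.last_hit_time = now
        return base_knockback * DR_MULTIPLIERS[index]

dr = KnockbackDR()
print(dr.apply(100, now=0.0))   # 100.0, first hit at full value
print(dr.apply(100, now=3.0))   # 50.0, second hit within the window
print(dr.apply(100, now=8.0))   # 25.0, third hit
```

With a rule like this, picking knockback still works on first contact, but a wall-pin loop decays to zero inside the window, which is exactly the behavior the comment asks for.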
  • CD Projekt RED: TW4 has console-first development with a 60fps target; 60fps on Series S will be "extremely challenging"

    DriftingSpirit:

    They note how they usually start with PC and scale down, but they will be doing it the other way around this time to avoid issues with the console versions.

    4:15 for console focus and 60fps
    38:50 for the Series S comment 

    bsigg:
    Inside The Witcher 4 Unreal Engine 5 Tech Demo: CD Projekt RED + Epic Deep Dive Interview



    www.resetera.com

     

    Skot:

    720p on Series S incoming
     

    Bulby:

    I think any Series S user will be happy with a beautiful 900p 30fps.
     

    Chronos:

    This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation.
     

    HellofaMouse:

    I wonder if this'll come out before the gen is over?

    Good chance it'll be a 2077 situation: a cross-gen release with a broken PS6 version.

    logash:

    This makes sense, since they want good performance on lower-end machines and they mentioned that it was easier to scale up than to scale down. They also mentioned their legacy on PC and how they plan on scaling it up high like they usually do on PC.
     

    KRT:

    Series S was a mistake
     

    chris 1515:

    The game has ray-traced GI and reflections; it will probably be 30 fps at 600p-720p on Xbox Series S.
     

    bitcloudrzr:

    Bulby said:

    I think any Series S user will be happy with a beautiful 900p 30fps.

     

    Yuuber:

    KRT said:

    Series S was a mistake

    Can we stop with these stupid takes? For all we know it sold as much as the Series X, helped several games have better optimization on the bigger consoles, and it will definitely help with optimizing newer games for the Nintendo Switch 2.

    MANTRA:

    No one who cares about 60fps should be buying a Series S; just make it 30fps.
     

    Roytheone:

    Chronos said:

    This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation.

    They can just go for 30 fps instead on the Series S. No need for a special deal for that; that's allowed.

    Matterhorn:

    Hoping for a very nice looking 30fps Switch 2 version.
     

    Universal Acclaim:

    Maybe off topic, but is a 30fps target not so important anymore for 2027 industry-leading graphics? GTA is mainly doing it for design/physics/etc., which is why the game can't be scaled down to 720-900p/60fps?
     

    chris 1515
    Member

    Oct 27, 2017

    7,116

    Barcelona Spain

    Matterhorn said:

    Hoping for a very nice looking 30fps Switch 2 version.


    It will be a full port a few years later, like The Witcher 3; they don't use software Lumen here. I doubt the Switch 2's raytracing capability is high enough to use the same pipeline to produce the Switch 2 version.

    EDIT: And they probably need to redo all the assets.

    https://www.reddit.com/r/FortNiteBR/comments/1l4a1o4/fortnite_on_the_switch_2_looks_great_these_low/

    Fortnite doesn't use Nanite and Lumen on Switch 2. 

    Last edited: Yesterday at 4:18 PM

    bitcloudrzr
    Member

    May 31, 2018

    21,044

    Universal Acclaim said:

    Maybe off topic, but is a 30fps target not so important anymore for 2027 industry-leading graphics? GTA is mainly doing it for design/physics/etc., which is why the graphics can't be scaled down to 720p/60fps?


    Graphics are the part of the game that can be scaled, it is CPU load that is the more difficult part, although devs have actually made cuts in the latter to increase performance mode fps viability. Even with this focus on 60fps performance modes, they are always going to have room to make a higher fidelity 30fps mode. Specifically with UE5 though, performance has been such a disaster all around and Epic seems to be taking it seriously now.
     

    Greywaren
    Member

    Jul 16, 2019

    13,530

    Spain

    60 fps target is fantastic, I wish it was the norm.
     

    julia crawford
    Took the red AND the blue pills
    Member

    Oct 27, 2017

    40,709

    i am very ok with lower fps on the series s, it is far more palatable than severe resolution drops with upscaling artifacts.
     

    Spoit
    Member

    Oct 28, 2017

    5,599

    Chronos said:

    This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation.


    And yet people keep talking about somehow getting PS6 games to work on the sony portable, which is probably going to be like half as powerful as a PS5, like that won't hold games back
     

    PLASTICA-MAN
    Member

    Oct 26, 2017

    29,563

    chris 1515 said:

    The game have raytracing GI and reflection it will probably be 30 fps 600p-720p on Xbox Series S.


    There is kind of a misconception about how Lumen and hybrid RT are handled in UE5 titles. AO is also part of the ray-traced pipeline through hardware Lumen.
    Only shadows are handled separately from the RT system, using VSM, which in the final look behaves much like RT shadows in shape, the same way FF16 handled its shadows to look ray traced while they aren't.
    UE5 can still trace shadows if they want to push things even further. 
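    For what it's worth, the split being described maps onto UE5's rendering console variables. A rough DefaultEngine.ini sketch (variable names from the UE5 renderer; exact defaults vary by engine version, so treat this as illustrative, not a shipping config):

    ```ini
    [/Script/Engine.RendererSettings]
    ; GI and reflections go through Lumen; with hardware ray tracing
    ; enabled, Lumen's AO rides the same ray-traced pipeline
    r.DynamicGlobalIlluminationMethod=1
    r.ReflectionMethod=1
    r.Lumen.HardwareRayTracing=1
    ; Shadows are handled separately via Virtual Shadow Maps...
    r.Shadow.Virtual.Enable=1
    ; ...but dedicated ray-traced shadows can still be switched on to push further
    r.RayTracing.Shadows=0
    ```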

    overthewaves
    Member

    Sep 30, 2020

    1,203

    What about the PS5 handheld?
     

    nullpotential
    Member

    Jun 24, 2024

    87

    KRT said:

    Series S was a mistake


    Consoles were a mistake. 

    GPU
    Member

    Oct 10, 2024

    1,075

    I really don't think Series S/X will be much of a factor by the time this game comes out.
     

    Lashley
    <<Tag Here>>
    Member

    Oct 25, 2017

    65,679

    Just make series s 480p 30fps
     

    pappacone
    Member

    Jan 10, 2020

    4,076

    Greywaren said:

    60 fps target is fantastic, I wish it was the norm.


    It pretty much is
     

    Super
    Studied the Buster Sword
    Member

    Jan 29, 2022

    13,601

    I hope they can pull 60 FPS off in the full game.
     

    Theorry
    Member

    Oct 27, 2017

    69,045

    "target"

    Uh huh. We know how that is gonna go. 

    Jakartalado
    Member

    Oct 27, 2017

    2,818

    São Paulo, Brazil

    Skot said:

    720p on Series S incoming


    If the PS5 is internally at 720p up to 900p, I seriously doubt that. 

    Revoltoftheunique
    Member

    Jan 23, 2022

    2,312

    It will be unstable 60fps with lots of stuttering.
     

    defaltoption
    Plug in a controller and enter the Konami code
    The Fallen

    Oct 27, 2017

    12,485

    Austin

    KRT said:

    Series S was a mistake


    With that same attitude in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. That's why it's stupid.
     

    Horns
    Member

    Dec 7, 2018

    3,423

    I hope Microsoft drops the requirement for Series S by the time this comes out.
     

    chris 1515
    Member

    Oct 27, 2017

    7,116

    Barcelona Spain

    PLASTICA-MAN said:

    There is kind of a misconception about how Lumen and hybrid RT are handled in UE5 titles. AO is also part of the ray-traced pipeline through hardware Lumen.

    Only shadows are handled separately from the RT system, using VSM, which in the final look behaves much like RT shadows in shape, the same way FF16 handled its shadows to look ray traced while they aren't.
    UE5 can still trace shadows if they want to push things even further.

    Yes, indirect shadows are handled by hardware Lumen. But in the end it doesn't change my comment; I think the game will be 600-720p at 30 fps on Series S. 

    bitcloudrzr
    Member

    May 31, 2018

    21,044

    Spoit said:

    And yet people keep talking about somehow getting PS6 games to work on the sony portable, which is probably going to be like half as powerful as a PS5, like that won't hold games back


    Has it been confirmed that Sony is going to have release requirements like the XS?
     

    Commander Shepherd
    Member

    Jan 27, 2023

    173

    Anyone remember when no load screens were talked about for The Witcher 3?
     

    chris 1515
    Member

    Oct 27, 2017

    7,116

    Barcelona Spain

    No, this is probably different from what most games are doing; here the main focus is the 60 fps mode, and afterwards they can create a balanced 30 fps mode.

    This is not the other way around. 

    stanman
    Member

    Feb 13, 2025

    235

    defaltoption said:

    With that same attitude in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. That's why it's stupid.


    And your mistake is comparing a PC graphics card to a console. 

    PLASTICA-MAN
    Member

    Oct 26, 2017

    29,563

    chris 1515 said:

    Yes, indirect shadows are handled by hardware Lumen. But in the end it doesn't change my comment; I think the game will be 600-720p at 30 fps on Series S.


    Yes. I am sure the Series S will have the HW solution, but probably at 30 FPS. It would be a miracle if they achieve 60 FPS. 

    ArchedThunder
    Uncle Beerus
    Member

    Oct 25, 2017

    21,278

    chris 1515 said:

    It will be a full port a few years later, like The Witcher 3; they don't use software Lumen here. I doubt the Switch 2's raytracing capability is high enough to use the same pipeline to produce the Switch 2 version.

    EDIT: And they probably need to redo all the assets.

    https://www.reddit.com/r/FortNiteBR/comments/1l4a1o4/fortnite_on_the_switch_2_looks_great_these_low/

    Fortnite doesn't use Nanite and Lumen on Switch 2.

    Fortnite not using Lumen or Nanite at launch doesn't mean they can't run well on Switch 2. It's a launch port and they prioritized clean IQ and 60fps. I wouldn't be surprised to see them added later. Also, it's not like the ray tracing in a Witcher 3 port has to match PS5; there's a lot of scaling back that can be done with ray tracing without ripping out the kitchen sink. Software Lumen is also likely to be an option on P.
     

    jroc74
    Member

    Oct 27, 2017

    34,465

    Interesting times ahead....

    bitcloudrzr said:

    Has it been confirmed that Sony is going to have release requirements like the XS?


    You know good n well everything about this rumor has been confirmed.

    /S 

    Derbel McDillet
    ▲ Legend ▲
    Member

    Nov 23, 2022

    25,250

    Chronos said:

    This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation.


    How does this sound like a Cyberpunk issue? They didn't say they can't get it to work on the S.
     

    defaltoption
    Plug in a controller and enter the Konami code
    The Fallen

    Oct 27, 2017

    12,485

    Austin

    stanman said:

    And your mistake is comparing a PC graphics card to a console.


     

    reksveks
    Member

    May 17, 2022

    7,628

    Horns said:

    I hope Microsoft drops the requirement for Series S by the time this comes out.


    Why? Devs can make it 30 fps on Series S and 60 fps on Series X if needed.

    If they aren't or don't have to drop it for GTA VI, they probably ain't dropping it for TW4. 

    chris 1515
    Member

    Oct 27, 2017

    7,116

    Barcelona Spain

    defaltoption said:

    With that same attitude in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. That's why it's stupid.


    No, the consoles won't hold back your 5090, because the game is created with hardware Lumen, RT reflections, virtual shadow maps, and Nanite plus Nanite vegetation in mind. Maybe Nanite characters too in the final version?

    If the game had been made with software Lumen as the base, it would have held back your 5090...

    Your PC will have much better IQ, framerate, and better raytracing in general with MegaLights and higher raytracing settings. 

    bitcloudrzr
    Member

    May 31, 2018

    21,044

    jroc74 said:

    Interesting times ahead....

    You know good n well everything about this rumor has been confirmed.

    /S

    Sony is like the opposite of a platform holder "forcing" adoption, for better or worse.
     

    defaltoption
    Plug in a controller and enter the Konami code
    The Fallen

    Oct 27, 2017

    12,485

    Austin

    chris 1515 said:

    No, the consoles won't hold back your 5090, because the game is created with hardware Lumen, RT reflections, virtual shadow maps, and Nanite plus Nanite vegetation in mind. Maybe Nanite characters too in the final version?

    If the game had been made with software Lumen as the base, it would have held back your 5090...

    Your PC will have much better IQ, framerate, and better raytracing in general with MegaLights and higher raytracing settings.

    Exactly, the Series S is not a "mistake" and isn't holding any version of the game back on console or even PC; that's what I'm saying to the person I replied to. It's stupid to say that.
     

    cursed beef
    Member

    Jan 3, 2021

    998

    Have to imagine MS will lift the Series S parity clause when the next consoles launch. Which will be before/around the time W4 hits, right?
     

    Alvis
    Saw the truth behind the copied door
    Member

    Oct 25, 2017

    12,270

    EU

    Chronos said:

    This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation.


    ? They said that 60 FPS on Series S is challenging, not the act of releasing the game there at all. The game can simply run at 30 FPS on Series S if they can't pull off 60 FPS, or have a 40 FPS mode in lieu of 60 FPS.

    The CPU and storage speed differences between last gen and current gen were gigantic. This isn't even remotely close to a comparable situation. 

    defaltoption
    Plug in a controller and enter the Konami code
    The Fallen

    Oct 27, 2017

    12,485

    Austin

    misquoted post
     

    jroc74
    Member

    Oct 27, 2017

    34,465

    defaltoption said:

    With that same attitude in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. Thats why its stupid.


    Ah yes, clearly 5090 cards are the vast majority of the minimum requirements for PC games.

    How can anyone say this with a straight face anymore when there are now PC games running on a Steam Deck.

    At least ppl saying that about the Series S are comparing it to other consoles.

    That said, it is interesting they are focusing on consoles first, then PC. 
    #projekt #red #tw4 #has #console
    CD Projekt RED: TW4 has console first development with a 60fps target; 60fps on Series S will be "extremely challenging"
    DriftingSpirit Member Oct 25, 2017 18,563 They note how they usually start with PC and scale down, but they will be doing it the other way around this time to avoid issues with the console versions. 4:15 for console focus and 60fps 38:50 for the Series S comment  bsigg Member Oct 25, 2017 25,153Inside The Witcher 4 Unreal Engine 5 Tech Demo: CD Projekt RED + Epic Deep Dive Interview www.resetera.com   Skot Member Oct 30, 2017 645 720p on Series S incoming   Bulby Prophet of Truth Member Oct 29, 2017 6,006 Berlin I think think any series s user will be happy with a beautiful 900p 30fps   Chronos Member Oct 27, 2017 1,249 This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation.   HellofaMouse Member Oct 27, 2017 8,551 i wonder if this'll come out before the gen is over? good chance itll be a 2077 situation, cross-gen release with a broken ps6 version  logash Member Oct 27, 2017 6,526 This makes sense since they want to have good performance on lower end machines and they mentioned that it was easier to scale up than to scale down. They also mentioned their legacy on PC and how they plan on scaling it up high like they usually do on PC.   KRT Member Aug 7, 2020 247 Series S was a mistake   chris 1515 Member Oct 27, 2017 7,116 Barcelona Spain The game have raytracing GI and reflection it will probably be 30 fps 600p-720p on Xbox Series S.   bitcloudrzr Member May 31, 2018 21,044 Bulby said: I think think any series s user will be happy with a beautiful 900p 30fps Click to expand... Click to shrink...   Yuuber Member Oct 28, 2017 4,540 KRT said: Series S was a mistake Click to expand... Click to shrink... Can we stop with these stupid takes? For all we know it sold as much as Series X, helped several games have better optimization on bigger consoles and it will definitely help optimizing newer games to the Nintendo Switch 2.  
MANTRA Member Feb 21, 2024 1,198 No one who cares about 60fps should be buying a Series S, just make it 30fps.   Roytheone Member Oct 25, 2017 6,185 Chronos said: This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation. Click to expand... Click to shrink... They can just go for 30 fps instead on the Series S. No need for a special deal for that, that's allowed.  Matterhorn Member Feb 6, 2019 254 United States Hoping for a very nice looking 30fps Switch 2 version.   Universal Acclaim Member Oct 5, 2024 2,617 Maybe off topic, but is 30fps target not so important anymore for 2027 industry-leading graphics? GTA is mainly doing it for design/physics/etc. whch is why the game can't be scaled down to 720-900p/60fps?   chris 1515 Member Oct 27, 2017 7,116 Barcelona Spain Matterhorn said: Hoping for a very nice looking 30fps Switch 2 version. Click to expand... Click to shrink... It will be a full port a few years after like The Witcher 3., they don't use software lumen here. I doubt the Switch 2 Raytracing capaclity is high enough to use the same pipeline to produce the Switch 2 version. EDIT: And they probably need to redo all the assets. / Fortnite doesn't use Nanite and Lumen on Switch 2.  Last edited: Yesterday at 4:18 PM bitcloudrzr Member May 31, 2018 21,044 Universal Acclaim said: Maybe off topic, but is 30fps target not so important anymore for 2027 industry-leading graphics? GTA is mainly doing it for design/physics/etc. whch is why the graphics can't be scaled down to 720p/60fps? Click to expand... Click to shrink... Graphics are the part of the game that can be scaled, it is CPU load that is the more difficult part, although devs have actually made cuts in the latter to increase performance mode fps viability. 
Even with this focus on 60fps performance modes, they are always going to have room to make a higher fidelity 30fps mode. Specifically with UE5 though, performance has been such a disaster all around and Epic seems to be taking it seriously now.   Greywaren Member Jul 16, 2019 13,530 Spain 60 fps target is fantastic, I wish it was the norm.   julia crawford Took the red AND the blue pills Member Oct 27, 2017 40,709 i am very ok with lower fps on the series s, it is far more palatable than severe resolution drops with upscaling artifacts.   Spoit Member Oct 28, 2017 5,599 Chronos said: This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation. Click to expand... Click to shrink... And yet people keep talking about somehow getting PS6 games to work on the sony portable, which is probably going to be like half as powerful as a PS5, like that won't hold games back   PLASTICA-MAN Member Oct 26, 2017 29,563 chris 1515 said: The game have raytracing GI and reflection it will probably be 30 fps 600p-720p on Xbox Series S. Click to expand... Click to shrink... There is kinda a misconception of how Lumen and the hybrid RT is handled in UE5 titles. AO is also part of the ray traced pipeline through the HW Lumen too. Just shadows are handled separately from the RT system by using VSM which in final look behvae quite like RT shadows in shape, same how FF16 handled the shadows looking like RT ones while it isn't traced. UE5 can still trace shadows if they want to push things even further.  overthewaves Member Sep 30, 2020 1,203 What about the PS5 handheld?   nullpotential Member Jun 24, 2024 87 KRT said: Series S was a mistake Click to expand... Click to shrink... Consoles were a mistake.  GPU Member Oct 10, 2024 1,075 I really dont think Series S/X will be much of a factor by the time this game comes out.   
Lashley <<Tag Here>> Member Oct 25, 2017 65,679 Just make series s 480p 30fps   pappacone Member Jan 10, 2020 4,076 Greywaren said: 60 fps target is fantastic, I wish it was the norm. Click to expand... Click to shrink... It pretty much is   Super Studied the Buster Sword Member Jan 29, 2022 13,601 I hope they can pull 60 FPS off in the full game.   Theorry Member Oct 27, 2017 69,045 "target" Uh huh. We know how that is gonna go.  Jakartalado Member Oct 27, 2017 2,818 São Paulo, Brazil Skot said: 720p on Series S incoming Click to expand... Click to shrink... If the PS5 is internally at 720p up to 900p, I seriously doubt that.  Revoltoftheunique Member Jan 23, 2022 2,312 It will be unstable 60fps with lots of stuttering.   defaltoption Plug in a controller and enter the Konami code The Fallen Oct 27, 2017 12,485 Austin KRT said: Series S was a mistake Click to expand... Click to shrink... With that same attitude in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. Thats why its stupid.   Horns Member Dec 7, 2018 3,423 I hope Microsoft drops the requirement for Series S by the time this comes out.   chris 1515 Member Oct 27, 2017 7,116 Barcelona Spain PLASTICA-MAN said: There is kinda a misconception of how Lumen and the hybrid RT is handled in UE5 titles. AO is also part of the ray traced pipeline through the HW Lumen too. Just shadows are handled separately from the RT system by using VSM which in final look behvae quite like RT shadows in shape, same how FF16 handled the shadows looking like RT ones while it isn't traced. UE5 can still trace shadows if they want to push things even further. Click to expand... Click to shrink... Yes indirect shadows are handled by hardware lumen. But at the end ot doesn¡t change my comment. i think the game will be 600´720p at 30 fps on Series S.  
bitcloudrzr Member May 31, 2018 21,044 Spoit said: And yet people keep talking about somehow getting PS6 games to work on the sony portable, which is probably going to be like half as powerful as a PS5, like that won't hold games back Click to expand... Click to shrink... Has it been confirmed that Sony is going to have release requirements like the XS?   Commander Shepherd Member Jan 27, 2023 173 Anyone remember when no load screens was talked about for Witcher 3?   chris 1515 Member Oct 27, 2017 7,116 Barcelona Spain No this is probably different than most game are doing it here the main focus is the 60 fps mode and after they can create a balancedand 30 fps mode. This is not the other way around.  stanman Member Feb 13, 2025 235 defaltoption said: With that same attitude in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. Thats why its stupid. Click to expand... Click to shrink... And your mistake is comparing a PC graphics card to a console.  PLASTICA-MAN Member Oct 26, 2017 29,563 chris 1515 said: Yes indirect shadows are handled by hardware lumen. But at the end ot doesn¡t change my comment. i think the game will be 600´720p at 30 fps on Series S. Click to expand... Click to shrink... Yes. I am sure Series S will have HW solution but probably at 30 FPS. that would be a miracle if they achieve 60 FPS.  ArchedThunder Uncle Beerus Member Oct 25, 2017 21,278 chris 1515 said: It will be a full port a few years after like The Witcher 3., they don't use software lumen here. I doubt the Switch 2 Raytracing capaclity is high enough to use the same pipeline to produce the Switch 2 version. EDIT: And they probably need to redo all the assets. / Fortnite doesn't use Nanite and Lumen on Switch 2. Click to expand... Click to shrink... Fortnite not using Lumen or Nanite at launch doesn't mean they can't run well on Switch 2. It's a launch port and they prioritized clean IQ and 60fps. 
I wouldn't be surprised to see them added later. Also it's not like the ray tracing in a Witcher 3 port has to match PS5, there's a lot of scaling back that can be done with ray tracing without ripping out the kitchen sink. Software lumen is also likely to be an option on P.   jroc74 Member Oct 27, 2017 34,465 Interesting times ahead.... bitcloudrzr said: Has it been confirmed that Sony is going to have release requirements like the XS? Click to expand... Click to shrink... Your know good n well everything about this rumor has been confirmed. /S  Derbel McDillet ▲ Legend ▲ Member Nov 23, 2022 25,250 Chronos said: This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation. Click to expand... Click to shrink... How does this sound like a Cyberpunk issue? They didn't say they can't get it to work on the S.   defaltoption Plug in a controller and enter the Konami code The Fallen Oct 27, 2017 12,485 Austin stanman said: And your mistake is comparing a PC graphics card to a console. Click to expand... Click to shrink...   reksveks Member May 17, 2022 7,628 Horns said: I hope Microsoft drops the requirement for Series S by the time this comes out. Click to expand... Click to shrink... why? dev can make it 30 fps on series s and 60 fps on series x if needed. if they aren't or don't have to drop it for gta vi, they probably ain't dropping it for tw4.  chris 1515 Member Oct 27, 2017 7,116 Barcelona Spain defaltoption said: With that same attitude in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. Thats why its stupid. Click to expand... Click to shrink... No the consoles won't hold back your 5090 because the game is created with hardware lumen, RT reflection, virtual shadows maps and Nanite plus Nanite vegetation in minds. 
Maybe Nanite character too in final version? If the game was made with software lumen as the base it would have holding back your 5090... Your PC will have much better IQ, framerate and better raytracing with Megalightand better raytracing settings in general.  bitcloudrzr Member May 31, 2018 21,044 jroc74 said: Interesting times ahead.... Your know good n well everything about this rumor has been confirmed. /S Click to expand... Click to shrink... Sony is like the opposite of a platform holder "forcing" adoption, for better or worse.   defaltoption Plug in a controller and enter the Konami code The Fallen Oct 27, 2017 12,485 Austin chris 1515 said: No the consoles won't hold back yout 5090 because the game is created with hardware lumen, RT reflection, virtual shadows maps and Nanite plus Nanite vegetation in minds. Maybe Nanite character too in final version? If the game was made with software lumen as the base it would have holding back your 5090... Your PC will have much better IQ, framerate and better raytracing with Megalightand better raytracing settings in general. Click to expand... Click to shrink... Exactly, the series s is not a "mistake" or holding any version of the game on console or even PC back, that's what I'm saying to the person I replied to, its stupid to say that.   cursed beef Member Jan 3, 2021 998 Have to imagine MS will lift the Series S parity clause when the next consoles launch. Which will be before/around the time W4 hits, right?   Alvis Saw the truth behind the copied door Member Oct 25, 2017 12,270 EU Chronos said: This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation. Click to expand... Click to shrink... ? they said that 60 FPS on Series S is challenging, not the act of releasing the game there at all. The game can simply run at 30 FPS on Series S if they can't pull off 60 FPS. 
Or have a 40 FPS mode in lieu of 60 FPS. The CPU and storage speed differences between last gen and current gen were gigantic. This isn't even remotely close to a comparable situation.  defaltoption Plug in a controller and enter the Konami code The Fallen Oct 27, 2017 12,485 Austin misqoute post   jroc74 Member Oct 27, 2017 34,465 defaltoption said: With that same attitude in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. Thats why its stupid. Click to expand... Click to shrink... Ah yes, clearly 5090 cards are the vast majority of the minimum requirements for PC games. How can anyone say this with a straight face anymore when there are now PC games running on a Steam Deck. At least ppl saying that about the Series S are comparing it to other consoles. That said, it is interesting they are focusing on consoles first, then PC.  #projekt #red #tw4 #has #console
    WWW.RESETERA.COM
    CD Projekt RED: TW4 has console first development with a 60fps target; 60fps on Series S will be "extremely challenging"
    DriftingSpirit Member Oct 25, 2017 18,563 They note how they usually start with PC and scale down, but they will be doing it the other way around this time to avoid issues with the console versions. 4:15 for console focus and 60fps 38:50 for the Series S comment  bsigg Member Oct 25, 2017 25,153 [DF] Inside The Witcher 4 Unreal Engine 5 Tech Demo: CD Projekt RED + Epic Deep Dive Interview https://www.youtube.com/watch?v=OplYN2MMI4Q www.resetera.com   Skot Member Oct 30, 2017 645 720p on Series S incoming   Bulby Prophet of Truth Member Oct 29, 2017 6,006 Berlin I think think any series s user will be happy with a beautiful 900p 30fps   Chronos Member Oct 27, 2017 1,249 This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation.   HellofaMouse Member Oct 27, 2017 8,551 i wonder if this'll come out before the gen is over? good chance itll be a 2077 situation, cross-gen release with a broken ps6 version  logash Member Oct 27, 2017 6,526 This makes sense since they want to have good performance on lower end machines and they mentioned that it was easier to scale up than to scale down. They also mentioned their legacy on PC and how they plan on scaling it up high like they usually do on PC.   KRT Member Aug 7, 2020 247 Series S was a mistake   chris 1515 Member Oct 27, 2017 7,116 Barcelona Spain The game have raytracing GI and reflection it will probably be 30 fps 600p-720p on Xbox Series S.   bitcloudrzr Member May 31, 2018 21,044 Bulby said: I think think any series s user will be happy with a beautiful 900p 30fps Click to expand... Click to shrink...   Yuuber Member Oct 28, 2017 4,540 KRT said: Series S was a mistake Click to expand... Click to shrink... Can we stop with these stupid takes? 
For all we know it sold as much as Series X, helped several games have better optimization on bigger consoles and it will definitely help optimizing newer games to the Nintendo Switch 2.  MANTRA Member Feb 21, 2024 1,198 No one who cares about 60fps should be buying a Series S, just make it 30fps.   Roytheone Member Oct 25, 2017 6,185 Chronos said: This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation. Click to expand... Click to shrink... They can just go for 30 fps instead on the Series S. No need for a special deal for that, that's allowed.  Matterhorn Member Feb 6, 2019 254 United States Hoping for a very nice looking 30fps Switch 2 version.   Universal Acclaim Member Oct 5, 2024 2,617 Maybe off topic, but is 30fps target not so important anymore for 2027 industry-leading graphics? GTA is mainly doing it for design/physics/etc. whch is why the game can't be scaled down to 720-900p/60fps?   chris 1515 Member Oct 27, 2017 7,116 Barcelona Spain Matterhorn said: Hoping for a very nice looking 30fps Switch 2 version. Click to expand... Click to shrink... It will be a full port a few years after like The Witcher 3., they don't use software lumen here. I doubt the Switch 2 Raytracing capaclity is high enough to use the same pipeline to produce the Switch 2 version. EDIT: And they probably need to redo all the assets. https://www.reddit.com/r/FortNiteBR/comments/1l4a1o4/fortnite_on_the_switch_2_looks_great_these_low/ Fortnite doesn't use Nanite and Lumen on Switch 2.  Last edited: Yesterday at 4:18 PM bitcloudrzr Member May 31, 2018 21,044 Universal Acclaim said: Maybe off topic, but is 30fps target not so important anymore for 2027 industry-leading graphics? GTA is mainly doing it for design/physics/etc. whch is why the graphics can't be scaled down to 720p/60fps? Click to expand... Click to shrink... 
Graphics are the part of the game that can be scaled; it is the CPU load that is the more difficult part, although devs have actually made cuts in the latter to increase performance-mode fps viability. Even with this focus on 60fps performance modes, they are always going to have room to make a higher-fidelity 30fps mode. Specifically with UE5, though, performance has been such a disaster all around, and Epic seems to be taking it seriously now.
    Greywaren: A 60 fps target is fantastic; I wish it was the norm.
    julia crawford: I am very OK with lower fps on the Series S; it is far more palatable than severe resolution drops with upscaling artifacts.
    Spoit (replying to Chronos): And yet people keep talking about somehow getting PS6 games to work on the Sony portable, which is probably going to be like half as powerful as a PS5, like that won't hold games back.
    PLASTICA-MAN (replying to chris 1515): There is kind of a misconception of how Lumen and the hybrid RT are handled in UE5 titles. AO is also part of the ray-traced pipeline through hardware Lumen. Just shadows are handled separately from the RT system by using VSM, which in the final look behave quite like RT shadows in shape, the same way FF16 handled shadows that look like RT ones while they aren't traced. UE5 can still trace shadows if they want to push things even further.
    overthewaves: What about the PS5 handheld?
    nullpotential (replying to KRT):
Consoles were a mistake.
    GPU: I really don't think Series S/X will be much of a factor by the time this game comes out.
    Lashley: Just make the Series S 480p 30fps.
    pappacone (replying to Greywaren): It pretty much is.
    Super: I hope they can pull 60 FPS off in the full game.
    Theorry: "Target." Uh huh. We know how that is gonna go.
    Jakartalado (replying to Skot): If the PS5 is internally at 720p up to 900p, I seriously doubt that.
    Revoltoftheunique: It will be unstable 60fps with lots of stuttering.
    defaltoption (replying to KRT): With that same attitude, in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. That's why it's stupid.
    Horns: I hope Microsoft drops the requirement for Series S by the time this comes out.
    chris 1515 (replying to PLASTICA-MAN): Yes, indirect shadows are handled by hardware Lumen.
But in the end it doesn't change my comment. I think the game will be 600-720p at 30 fps on Series S.
    bitcloudrzr (replying to Spoit): Has it been confirmed that Sony is going to have release requirements like the XS?
    Commander Shepherd: Anyone remember when no load screens were talked about for Witcher 3?
    chris 1515: No, this is probably different from how most games are doing it; here the main focus is the 60 fps mode, and afterwards they can create a balanced (40 fps) mode and a 30 fps mode. This is not the other way around.
    stanman (replying to defaltoption): And your mistake is comparing a PC graphics card to a console.
    PLASTICA-MAN (replying to chris 1515): Yes. I am sure Series S will have the HW solution, but probably at 30 FPS. It would be a miracle if they achieve 60 FPS.
    ArchedThunder (replying to chris 1515):
Fortnite not using Lumen or Nanite at launch doesn't mean they can't run well on Switch 2. It's a launch port, and they prioritized clean IQ and 60fps. I wouldn't be surprised to see them added later. Also, it's not like the ray tracing in a Witcher 3 port has to match the PS5; there's a lot of scaling back that can be done with ray tracing without ripping out the kitchen sink. Software Lumen is also likely to be an option on P.
    jroc74 (replying to bitcloudrzr): Interesting times ahead... You know good and well everything about this rumor has been confirmed. /S
    Derbel McDillet (replying to Chronos): How does this sound like a Cyberpunk issue? They didn't say they can't get it to work on the S.
    defaltoption quoted stanman's post without further comment.
    reksveks (replying to Horns): Why? The dev can make it 30 fps on Series S and 60 fps on Series X if needed. If they aren't or don't have to drop it for GTA VI, they probably ain't dropping it for TW4.
    chris 1515 (replying to defaltoption):
No, the consoles won't hold back your 5090, because the game is created with hardware Lumen, RT reflections, virtual shadow maps, and Nanite plus Nanite vegetation in mind. Maybe Nanite characters too in the final version? If the game was made with software Lumen as the base, it would have held back your 5090. Your PC will have much better IQ, framerate, and better ray tracing with MegaLights (direct ray-traced shadows with tons of light sources) and better ray-tracing settings in general.
    bitcloudrzr (replying to jroc74): Sony is like the opposite of a platform holder "forcing" adoption, for better or worse.
    defaltoption (replying to chris 1515): Exactly. The Series S is not a "mistake" and isn't holding any version of the game on console or even PC back; that's what I'm saying to the person I replied to. It's stupid to say that.
    cursed beef: Have to imagine MS will lift the Series S parity clause when the next consoles launch. Which will be before/around the time W4 hits, right?
    Alvis (replying to Chronos): ? They said that 60 FPS on Series S is challenging, not the act of releasing the game there at all. The game can simply run at 30 FPS on Series S if they can't pull off 60 FPS. Or have a 40 FPS mode in lieu of 60 FPS. The CPU and storage speed differences between last gen and current gen were gigantic. This isn't even remotely close to a comparable situation.
    defaltoption: Misquoted post.
    jroc74 (replying to defaltoption): Ah yes, clearly 5090 cards are the vast majority of the minimum requirements for PC games. How can anyone say this with a straight face anymore when there are now PC games running on a Steam Deck? At least people saying that about the Series S are comparing it to other consoles. That said, it is interesting they are focusing on consoles first, then PC.
  • Selection Sort Time Complexity: Best, Worst, and Average Cases

    Development and Testing 


    Sorting is a basic task in programming. It arranges data in order. There are many sorting algorithms. Selection Sort is one of the simplest sorting methods. It is easy to understand and code. But it is not the fastest. In this guide, we will explain the Selection Sort Time Complexity. We will cover best, worst, and average cases.
    What Is Selection Sort?
    Selection Sort works by selecting the smallest element from the list. It places it in the correct position. It repeats this process for all elements. One by one, it moves the smallest values to the front.
    Let’s see an example:
    Input: [5, 3, 8, 2]
    Step 1: Smallest is 2 → swap with 5 → [2, 3, 8, 5]
    Step 2: Smallest in remaining is 3 → already correct
    Step 3: Smallest in remaining is 5 → swap with 8 → [2, 3, 5, 8]
    Now the list is sorted.
    How Selection Sort Works
    Selection Sort uses two loops. The outer loop moves one index at a time. The inner loop finds the smallest element. After each pass, the smallest value is moved to the front. The position is fixed. Selection Sort does not care if the list is sorted or not. It always does the same steps.
    Selection Sort Algorithm
    Here is the basic algorithm:

    Start from the first element
    Find the smallest in the rest of the list
    Swap it with the current element
    Repeat for each element

    This repeats until all elements are sorted.
    Selection Sort Code (Java Example)
    public class SelectionSort {
        public static void sort(int[] arr) {
            int n = arr.length;
            for (int i = 0; i < n - 1; i++) {
                int min = i;
                for (int j = i + 1; j < n; j++) {
                    if (arr[j] < arr[min]) {
                        min = j;
                    }
                }
                int temp = arr[min];
                arr[min] = arr[i];
                arr[i] = temp;
            }
        }
    }

    This code uses two loops. The outer loop runs n-1 times. The inner loop finds the minimum.
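    To make the passes concrete, here is a small driver of our own (the class name SelectionSortTrace is invented for illustration, not part of the article) that runs the same algorithm on the example array [5, 3, 8, 2] and prints the array after each pass of the outer loop:

```java
import java.util.Arrays;

// Illustrative driver: traces Selection Sort on [5, 3, 8, 2],
// printing the array after each pass of the outer loop.
public class SelectionSortTrace {

    // Same algorithm as the article's SelectionSort.sort, but it
    // returns the array and prints intermediate states.
    public static int[] sort(int[] arr) {
        int n = arr.length;
        for (int i = 0; i < n - 1; i++) {
            int min = i;
            for (int j = i + 1; j < n; j++) {
                if (arr[j] < arr[min]) {
                    min = j;              // index of smallest remaining element
                }
            }
            int temp = arr[min];          // swap it into position i
            arr[min] = arr[i];
            arr[i] = temp;
            System.out.println("After pass " + (i + 1) + ": " + Arrays.toString(arr));
        }
        return arr;
    }

    public static void main(String[] args) {
        sort(new int[]{5, 3, 8, 2});
        // prints:
        // After pass 1: [2, 3, 8, 5]
        // After pass 2: [2, 3, 8, 5]   (3 was already in place)
        // After pass 3: [2, 3, 5, 8]
    }
}
```

    Each pass fixes one position from the left, which is why the outer loop only needs n - 1 iterations.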
    Selection Sort Time Complexity
    Now let’s understand the main topic. Let’s analyze Selection Sort Time Complexity in three cases.
    1. Best Case
    Even if the array is already sorted, Selection Sort checks all elements. It keeps comparing and swapping.

    Time Complexity: O(n²)
    Reason: Inner loop runs fully, regardless of the order
    Example Input: [1, 2, 3, 4, 5]
    Even here, every comparison still happens. Only fewer swaps occur, but comparisons remain the same.
    2. Worst Case
    This happens when the array is in reverse order. But Selection Sort does not optimize for this.

    Time Complexity: O(n²)
    Reason: Still needs full comparisons
    Example Input: [5, 4, 3, 2, 1]
    Even in reverse, the steps are the same. It compares and finds the smallest element every time.
    3. Average Case
    This is when elements are randomly placed. It is the most common scenario in real-world problems.

    Time Complexity: O(n²)
    Reason: Still compares each element in the inner loop
    Example Input: [3, 1, 4, 2, 5]
    Selection Sort does not change behavior based on input order. So the complexity remains the same.
    Why Is It Always O(n²)?
    Selection Sort compares all pairs of elements. The number of comparisons does not change.
    Total comparisons = n × (n − 1) / 2
    That’s why the time complexity is always O(n²). It does not reduce steps in any case. It does not take advantage of sorted elements.
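    This can be checked directly. The sketch below (our own illustrative class, not from the article) instruments the inner loop and counts comparisons for sorted, reversed, and random inputs of the same length; all three give exactly n × (n − 1) / 2:

```java
// Illustrative sketch: counts the comparisons Selection Sort makes.
// The count depends only on n, never on the input order.
public class ComparisonCount {

    public static long countComparisons(int[] arr) {
        long comparisons = 0;
        int n = arr.length;
        for (int i = 0; i < n - 1; i++) {
            int min = i;
            for (int j = i + 1; j < n; j++) {
                comparisons++;            // one comparison per inner-loop step
                if (arr[j] < arr[min]) {
                    min = j;
                }
            }
            int temp = arr[min];          // swap as usual
            arr[min] = arr[i];
            arr[i] = temp;
        }
        return comparisons;
    }

    public static void main(String[] args) {
        // All three calls print 10, i.e. 5 * 4 / 2.
        System.out.println(countComparisons(new int[]{1, 2, 3, 4, 5}));
        System.out.println(countComparisons(new int[]{5, 4, 3, 2, 1}));
        System.out.println(countComparisons(new int[]{3, 1, 4, 2, 5}));
    }
}
```

    The inner loop runs (n − 1) + (n − 2) + … + 1 times, which sums to n × (n − 1) / 2 no matter what the data looks like.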
    Space Complexity
    Selection Sort does not need extra space. It sorts in place.

    Space Complexity: O(1)
    Only a few variables are used
    No extra arrays or memory needed

    This is one strong point of Selection Sort.
    Comparison with Other Algorithms
    Let’s compare Selection Sort with other basic sorts:
    Algorithm        Best Case    Average Case  Worst Case   Space
    Selection Sort   O(n²)        O(n²)         O(n²)        O(1)
    Bubble Sort      O(n)         O(n²)         O(n²)        O(1)
    Insertion Sort   O(n)         O(n²)         O(n²)        O(1)
    Merge Sort       O(n log n)   O(n log n)    O(n log n)   O(n)
    Quick Sort       O(n log n)   O(n log n)    O(n²)        O(log n)
    As you see, Selection Sort is slower than Merge Sort and Quick Sort.
    Advantages of Selection Sort

    Very simple and easy to understand
    Works well with small datasets
    Needs very little memory
    Good for learning purposes

    Disadvantages of Selection Sort

    Slow on large datasets
    Always takes the same time, even if sorted
    Not efficient for real-world use

    When to Use Selection Sort
    Use Selection Sort when:

    You are working with a very small dataset
    You want to teach or learn sorting logic
    You want simple, in-place, low-memory sorting

    Avoid it for:

    Large datasets
    Performance-sensitive programs

    Conclusion
    Selection Sort Time Complexity is simple to understand. But it is not efficient for big problems. It always takes O(n²) time, no matter the case. That is the same for best, worst, and average inputs. Still, it is useful in some cases. It’s great for learning sorting basics. It uses very little memory. If you’re working with small arrays, Selection Sort is fine. For large data, use better algorithms. Understanding its time complexity helps you choose the right algorithm. Always pick the tool that fits your task.
    Tech World Times (TWT), a global collective focusing on the latest tech news and trends in blockchain, Fintech, Development & Testing, AI and Startups. If you are looking for a guest post, contact techworldtimes@gmail.com.
  • How AI is reshaping the future of healthcare and medical research

    Transcript       
    PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the AI equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”
    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.   
    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?    
    In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.  The book passage I read at the top is from “Chapter 10: The Big Black Bag.” 
    In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.   
    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open. 
    As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.  
    Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home. 
    Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.     
    Here’s my conversation with Bill Gates and Sébastien Bubeck. 
    LEE: Bill, welcome. 
    BILL GATES: Thank you. 
    LEE: Seb … 
    SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here. 
    LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening? 
    And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?  
    GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines. 
    And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.  
    And it was only about six months after I challenged them to do that that they brought an early version of GPT-4 up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness that, you know, we later understood was an area of, weirdly, incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.
    LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that? 
    GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, … 
    LEE: Right.  
    GATES: … that is a bit weird.  
    LEE: Yeah. 
    GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training. 
    LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. 
    BUBECK: Yes.  
    LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR to join and start investigating this thing seriously. And the first person I pulled in was you. 
    BUBECK: Yeah. 
    LEE: And so what were your first encounters? Because I actually don’t remember what happened then. 
    BUBECK: Oh, I remember it very well. My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3. 
    I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1. 
    So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts. 
    So this was really, to me, the first moment where I saw some understanding in those models.  
    LEE: So this was, just to get the timing right, that was before I pulled you into the tent. 
    BUBECK: That was before. That was like a year before. 
    LEE: Right.  
    BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4. 
    So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.  
    So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x. 
    And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?  
    LEE: Yeah.
    BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.  
    LEE: One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine. 
    And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.  
    And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.  
    I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book. 
    But the main purpose of this conversation isn’t to reminisce about or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements. 
    But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today? 
    You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.  
    Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork? 
    GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.  
    It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision. 
    But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view. 
    LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. Does that make sense to you? 
    BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong? 
    Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.  
    Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them. 
    And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.  
    Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way. 
    It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine. 
    LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all? 
    GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that. 
    The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa, …
    So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.  
    LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking? 
    GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.  
    The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.  
    LEE: Right.  
    GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.  
    LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication. 
    BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI. 
    It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for. 
    LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes. 
    I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?  
    That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. What’s up with that? 
    BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that version of GPT-4o, so now we don’t have the sycophant version out there. 
    Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad. 
    But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model. 
    So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model. 
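    The "trap of the reward model" Bubeck describes can be illustrated with a toy simulation; everything here is hypothetical and purely illustrative (the scoring functions, weights, and the best-of-n stand-in for RLHF optimization pressure are invented), but it shows the basic failure mode: the harder you optimize against an imperfect proxy reward, the more you select for the trait it over-rewards, such as sycophancy.

    ```python
    import random

    random.seed(0)

    def sample_response():
        # Hypothetical toy setup: each candidate answer has a latent true
        # quality and a "sycophancy" trait. The flawed reward model
        # over-rewards sycophancy, while true quality is hurt by it.
        quality = random.gauss(0, 1)
        sycophancy = random.gauss(0, 1)
        proxy_reward = quality + 2.0 * sycophancy   # what gets optimized
        true_quality = quality - sycophancy         # what we actually want
        return true_quality, proxy_reward

    def best_of_n(n):
        """Select the candidate the reward model scores highest --
        a crude stand-in for increasing optimization pressure."""
        return max((sample_response() for _ in range(n)), key=lambda c: c[1])

    avg = lambda xs: sum(xs) / len(xs)
    light = [best_of_n(2)[0] for _ in range(2000)]    # mild pressure
    heavy = [best_of_n(64)[0] for _ in range(2000)]   # heavy pressure

    print(f"avg true quality under mild pressure:  {avg(light):+.2f}")
    print(f"avg true quality under heavy pressure: {avg(heavy):+.2f}")
    ```

    Under heavy optimization pressure, average true quality comes out lower than under mild pressure: pushing too hard on the proxy selects for exactly the behavior the reward model mistakenly prefers.
    
    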
    LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and … 
    BUBECK: It’s a very difficult, very difficult balance. 
    LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models? 
    GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there. 
    Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?  
    Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there.
    LEE: Yeah.
    GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake. 
    LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on. 
    BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything. 
    That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. So it’s … I think it’s an important example to have in mind. 
    LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two? 
    BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it. 
    LEE: So we have about three hours of stuff to talk about, but our time is actually running low.
    BUBECK: Yes, yes, yes.  
    LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now? 
    GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.  
    The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities. 
    And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period. 
    LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers? 
    GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them. 
    LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.  
    I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why. 
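    The prediction about machine-checkable proofs already exists in miniature today. As a small illustration (this trivial theorem is my own example, not one from the conversation), here is a Lean 4 proof whose validity the kernel verifies mechanically; a vastly longer, machine-generated proof would be checked the same way, whether or not any human can follow the argument.

    ```lean
    -- The Lean kernel checks this proof mechanically; validity does not
    -- depend on a human being able to understand the derivation.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b
    ```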
    BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see that it produced what you wanted. So I absolutely agree with that.  
    And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.  
    LEE: Yeah. 
    BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.  
    Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not. 
    Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision. 
    LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist … 
    BUBECK: Yeah.
    LEE: … or an endocrinologist might not.
    BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know.
    LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today? 
    BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later. 
    And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …  
    LEE: Will AI prescribe your medicines? Write your prescriptions? 
    BUBECK: I think yes. I think yes. 
    LEE: OK. Bill? 
    GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate?
    And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries. 
    You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that. 
    LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.  
    I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.  
    GATES: Yeah. Thanks, you guys. 
    BUBECK: Thank you, Peter. Thanks, Bill. 
    LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.   
    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.  
    And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.  
    How AI is reshaping the future of healthcare and medical research
Transcript

PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the AI equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”

This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.

Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?

In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here. The book passage I read at the top is from “Chapter 10: The Big Black Bag.” In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.
In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open.  As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.   Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.  Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.      Here’s my conversation with Bill Gates and Sébastien Bubeck.  LEE: Bill, welcome.  BILL GATES: Thank you.  
LEE: Seb …  SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.  LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?  And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?   GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.  And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.   And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness that, you know, we later understood was an area of, weirdly, incredible weakness of those early models. 
But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.  LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?  GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …  LEE: Right.   GATES: … that is a bit weird.   LEE: Yeah.  GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.  LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent.  BUBECK: Yes.   LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR to join and start investigating this thing seriously. And the first person I pulled in was you.  BUBECK: Yeah.  LEE: And so what were your first encounters? Because I actually don’t remember what happened then.  BUBECK: Oh, I remember it very well. My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.  I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. 
But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.  So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.  So this was really, to me, the first moment where I saw some understanding in those models.   LEE: So this was, just to get the timing right, that was before I pulled you into the tent.  BUBECK: That was before. That was like a year before.  LEE: Right.   BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4.  So once I saw this kind of understanding, I thought, OK, fine. It understands concepts, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.   So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.  And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?   LEE: Yeah. BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. 
And just right there, it was shown to be possible.   LEE: One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.  And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.   And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.   I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.  But the main purpose of this conversation isn’t to reminisce about or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.  But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?  You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.   
Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?  GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.   It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.  But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.  LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. Does that make sense to you?  BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. 
Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong?  Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.   Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them.  And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.   Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.  It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.  
LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?  GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that.  The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa, then, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.   LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?  GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.   
The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.   LEE: Right.   GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.   LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.  BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI.  It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.  LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. 
And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes.  I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?   That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. What’s up with that?  BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that version of GPT-4o, so now we don’t have the sycophant version out there.  Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.  But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. 
It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model.  So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.  LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …  BUBECK: It’s a very difficult, very difficult balance.  LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?  GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there.  Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? 
What’s my relationship to them?   Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there. LEE: Yeah. GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.  LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on.  BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.  That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. 
And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. So it’s … I think it’s an important example to have in mind.  LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?  BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights models, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.  LEE: So we have about three hours of stuff to talk about, but our time is actually running low. BUBECK: Yes, yes, yes.   LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?  GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.   
The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.  And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.  LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?  GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.  LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.   I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why.  BUBECK: Yeah, absolutely. 
I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see that it produced what you wanted. So I absolutely agree with that.   And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.   LEE: Yeah.  BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.   Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not.  Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.  LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …  BUBECK: Yeah. LEE: … or an endocrinologist might not. BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. 
And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? 
And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.   I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.   GATES: Yeah. Thanks, you guys.  BUBECK: Thank you, Peter. Thanks, Bill.  LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.   And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. 
He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.   One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.   HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? 
What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.   I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.
    WWW.MICROSOFT.COM
    How AI is reshaping the future of healthcare and medical research
Transcript [MUSIC]      [BOOK PASSAGE]   PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the AI equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”   [END OF BOOK PASSAGE]     [THEME MUSIC]     This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?     In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.   [THEME MUSIC FADES] The book passage I read at the top is from “Chapter 10: The Big Black Bag.”  In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.    
In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open.  As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.   Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.  Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.    [TRANSITION MUSIC]   Here’s my conversation with Bill Gates and Sébastien Bubeck.  LEE: Bill, welcome.  
BILL GATES: Thank you.  LEE: Seb …  SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.  LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?  And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?   GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.  And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.   And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. 
It turned out it was a math weakness [LAUGHTER] that, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.  LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?  GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …  LEE: Right.   GATES: … that is a bit weird.   LEE: Yeah.  GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.  LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. [LAUGHS]  BUBECK: Yes.   LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR [Microsoft Research] to join and start investigating this thing seriously. And the first person I pulled in was you.  BUBECK: Yeah.  LEE: And so what were your first encounters? Because I actually don’t remember what happened then.  BUBECK: Oh, I remember it very well. [LAUGHS] My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.  
I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.  So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. [LAUGHTER] And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.  So this was really, to me, the first moment where I saw some understanding in those models.   LEE: So this was, just to get the timing right, that was before I pulled you into the tent.  BUBECK: That was before. That was like a year before.  LEE: Right.   BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4.  So once I saw this kind of understanding, I thought, OK, fine. It understands concepts, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.   So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.  And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. 
Why don’t we ask this new model this question?   LEE: Yeah. BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.   LEE: [LAUGHS] One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.  And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.   And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.   I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.  But the main purpose of this conversation isn’t to reminisce about [LAUGHS] or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.  But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?  
You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.   Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?  GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.   It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.  But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.  LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. 
[LAUGHTER] Does that make sense to you?  BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong?  Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.   Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them.  And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.   Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.  
It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.  LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?  GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that.  The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa, then, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.   LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?  GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. 
And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.   The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.   LEE: Right.   GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.   LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.  BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE [United States Medical Licensing Examination], for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI.  It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.  
LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes.  I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?   That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. [LAUGHTER] What’s up with that?  BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that [LAUGHS] version of GPT-4o, so now we don’t have the sycophant version out there.  Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? 
So, you know, there is this very classical by now technique called RLHF [reinforcement learning from human feedback], where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.  But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model.  So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.  LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …  BUBECK: It’s a very difficult, very difficult balance.  LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?  GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. 
So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there.  Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?   Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there. LEE: Yeah. GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.  LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on.  BUBECK: Yeah. OK. 
So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI [artificial general intelligence] that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.  That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. [LAUGHTER] So it’s … I think it’s an important example to have in mind.  LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?  BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights models, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.  LEE: So we have about three hours of stuff to talk about, but our time is actually running low. BUBECK: Yes, yes, yes.   LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?  
GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.   The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.  And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.  LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?  GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.  LEE: Yeah. 
By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.   I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, “Oh yeah, this actually works,” but no one can understand why.  BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see [if you have] produced what you wanted. So I absolutely agree with that.   And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.   LEE: Yeah.  BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.   
Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not.  Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.  LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …  BUBECK: Yeah. LEE: … or an endocrinologist might not. BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  
And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected [LAUGHTER] just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.   
I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.  [TRANSITION MUSIC]  GATES: Yeah. Thanks, you guys.  BUBECK: Thank you, Peter. Thanks, Bill.  LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.   And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.   One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.   
HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  [THEME MUSIC]  I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.   [MUSIC FADES]