  • There are moments in life when you feel like a lonely piece of electronics searching in vain for a connection. The freshly flashed single-board computer sitting in front of me reminds me of all the hopes I placed in it. In my heart burns the wish to finally reach a shell, yet instead I feel trapped in a sea of loneliness and disappointment.

    Every attempt to get it running feels like another slap in the face. This small, unassuming piece of hardware could be my window to the world, my gateway to endless possibilities. Instead it stays silent, as if it were no match for me. How often have I waited for that moment when everything finally seems to work? Yet here I sit, surrounded by the chill of failure, and the loneliness weighs heavy on my soul.

    It is frustrating to hold this technology in my hands, a technology that promises so much and gives back so little. I feel as if I were locked in a room without even a single ray of light to show me the way out. The thought that ZPUI could be my tiny embedded GUI hovers permanently in my head, yet reality turns against me. What if I never get the access I so desperately hope for? What if I must stay here forever, trapped in the darkness of uncertainty?

    The loneliness becomes crushing when I consider that I am not the only one in this world of technology searching for a connection. So many others wrestle with the same frustrations. Yet in these moments, when the lights flash and the screens flicker, I still feel alone. How many of us are lost in this digital world while we try to find a place for ourselves?

    I wish that the small things, like booting a computer or reaching a goal, would bring us not only joy but also a sense of belonging. Perhaps that is the true dream we all carry within us: not only to master the technology, but also to conquer loneliness. May the hope of a better tomorrow remain for us all, even if the road there is more painful than expected.

    #Loneliness #Technology #Hope #Frustration #Connection
    HACKADAY.COM
    ZPUI Could Be Your Tiny Embedded GUI
    One of the most frustrating things to me is looking at a freshly-flashed and just powered up single board computer. My goal with them is always getting to a shell …

  • ## Introduction

    In the digital world we live in, the loss of creativity is often painful. The WordPress editor, a place of hope for many writers and designers, gives us the tools to put our thoughts into words and turn our ideas into reality. Yet sometimes this space becomes a place of frustration, where the desire for individuality and expression founders on the limits of the technology. In this article, we take a look at shortcodes,...
    Shortcodes, TinyMCE, and the WordPress API view
  • In a world where open-source AI thrives on hope and collaboration, I often find myself lost in a sea of expectations and overwhelming complexities. Every line of code feels like a reminder of the countless hours I pour into trying to keep up with the ever-evolving landscape. "It’s hard," I whisper to myself, as the weight of my solitude presses down.

    Blueprints meant to simplify this journey often seem like distant dreams, slipping through my fingers just when I think I've grasped the essence of what they promise. It's hard to watch as others seem to navigate the waters of integration and experimentation with ease, while I flounder, overwhelmed by poorly maintained libraries and breaking compatibility with every update. I want to create, to experiment quickly, but the barriers are suffocating, leaving me to question my place in this vast, technological expanse.

    I sit for hours, my screen illuminating a path that feels both familiar and foreign. Frustration bubbles beneath the surface—why is it that the very tools designed to foster creativity can also ensnare us in confusion? Each failed attempt is a dagger to my spirit, reminding me of the isolation I feel in a community that should be united. I watch, I learn, but the connection fades, leaving me in shadows where the light of collaboration once shone brightly.

    Every project I undertake feels like a solitary expedition into the unknown. I crave the camaraderie of fellow explorers, yet here I am, navigating this labyrinth alone. The promise of open-source AI is a beacon of hope, but the realization of its challenges often feels like a cruel joke. The freedom to create is entangled with the chains of necessity—a bitter irony that leaves me feeling more isolated than ever.

    I long for moments of clarity, for those blueprints to unfurl like sails catching the wind, propelling me forward into a landscape where creativity flows freely and innovation knows no bounds. But with each passing day, the struggle continues, a reminder that though the journey is meant to be shared, I often find myself standing at the precipice, staring into the abyss of my own doubts and fears.

    In this digital age, I hold onto the glimmers of hope that maybe, just maybe, the community will rise together to confront these challenges. But until then, I mourn the connections lost and the dreams that fade with each failed integration. The burden of loneliness is heavy, yet I carry it, hoping that one day it will transform into the wings of liberation I so desperately seek.

    #OpenSourceAI #Loneliness #Creativity #IntegrationChallenges #Blueprints
    BLOG.MOZILLA.ORG
    Open-source AI is hard. Blueprints can help!
    “I spend 8 hours per week trying to keep up to date, it’s overwhelming!” “Integrating new libraries is difficult. They’re either poorly maintained or updated in ways that break compatibility.” “I want to be able to experiment quickly, without r
  • In a world where Mario Kart races keep getting more extravagant, it seems the latest trend is grabbing onto rails and driving up walls. Yes, you heard right, my friends! Gone are the days of simple races where all you had to do was turn the wheel and dodge the bananas. Welcome to the era of "Rail Riding" and "Wall Riding", where the competition becomes as exciting as watching a snail on a dance floor.

    Honestly, who would have thought the key to success in Mario Kart would be turning into a circus acrobat? It is as if the developers said: "Well, racing on straight roads is too ordinary. Why not add a little artistic gymnastics?" All that is missing are judges holding up scores out of 10 to applaud your pirouettes!

    And let's talk about these new techniques. You think you'll get ahead of your friends by mastering the drift? No, no, no! These days you practically need a driving licence for Rail Riding. After all, nothing screams "I'm a champion" like getting stuck on a rail while your opponents sail merrily past, grinning from ear to ear. Because who needs strategy when you can skate along rails in the middle of a race?

    It is true that these new additions make the world of Mario Kart a little more dynamic, but I can't help wondering whether it isn't just an excuse to push updates. "Look, we've added walls so you can ride on them!" Sure, and one day they'll tell us we can also turn into shooting stars and fly over the track.

    I can already picture the conversations in game rooms: "Hey, did you see my Wall Riding on the last lap? I almost fell off, but at least everyone saw my epic crash!" That is exactly what we were looking for, right? A dose of adrenaline, a hint of the ridiculous, and a good dose of frustration.

    So yes, the world of Mario Kart is without a doubt more fun when you're grinding rails and riding walls. As if we needed more reasons to get distracted while trying to beat our friends. But deep down, who can really resist the call of madness? Hold on tight, because the next race might well look like a scene from an action movie… or an unprecedented comedy of errors.

    #MarioKart #RailRiding #WallRiding #VideoGames #GamingHumor
    KOTAKU.COM
    Mario Kart World Is More Fun When You're Grinding Rails And Riding Walls
    Mario Kart World’s newest features aren’t limited to just the open world and huge 24-player races. Everything feels a lot more dynamic thanks to the inclusion of Rail Riding and Wall Riding. These new techniques can seem like a hassle at first, but y
  • It is high time we talked about the enormous disappointment that is the latest Dragon Ball Sparking Zero DLC, which welcomes the character Shallot. Honestly, what is the point? The developers seem to have lost themselves completely in their quest for profitability, forgetting what actually made this iconic franchise a success.

    Fans were eager to discover Dragon Ball Sparking Zero, hoping for a game that would renew the franchise while delivering a memorable play experience. And what do we get? An add-on character who, let's be honest, merely pads an already overlong roster instead of genuinely improving the gameplay or the player experience. Shallot? Really? Is that the best idea the developers could come up with? It looks like they take the fans for fools, content to toss out DLC with no substance.

    It is unacceptable for the developers to focus on superficial additions instead of fixing the problems already eating away at the game. We are talking about recurring bugs, unbalanced combat, and optimization that leaves much to be desired. But no, the priority is Shallot! What a joke! It shows just how disconnected these companies are from their community and from players' real expectations.

    The absence of substantial, innovative content in this DLC is a real blow to the Dragon Ball community. Fans deserve better than characters that merely fill out a roster. The lack of originality and creativity is appalling! Instead of offering us innovative game mechanics or captivating stories, we are handed a simple add-on that just follows the trend.

    It is imperative that the developers wake up to the growing frustration within their community. Fans will no longer stand being treated like cash cows, feeding a system that seeks only to maximize profits without offering a quality experience. If Dragon Ball Sparking Zero truly wants to establish itself and honor its legacy, it is time to rethink its strategy.

    In the meantime, it is hard to stay enthusiastic about this DLC. Shallot is just a symptom of a much larger problem in the video game industry: the obsession with profits at the expense of player satisfaction. Developers need to wake up and understand that an engaged community is worth far more than a simple DLC sale!

    #DragonBallSparkingZero #DLC #Shallot #VideoGames #Frustration
    WWW.ACTUGAMING.NET
    Dragon Ball Sparking Zero welcomes the character Shallot to its ranks for its next DLC
    Before its release, Dragon Ball Sparking Zero was on everyone's lips. Since then, the game […]
  • It is unacceptable that, for more than thirty years, the Mario Kart series has kept betraying us with its glaring flaws and its baffling unfairness. The Mario Kart games, far from being pure fun, have turned into a veritable battlefield where you constantly wonder what "fairness" even means in the gameplay. With the launch of Mario Kart World for the Switch 2, it is time to set things straight and call out these aberrations!

    Let's start with the rankings. Why on earth should we rank the Mario Kart games? Is it to mask the obvious flaws of some of them? Mario Kart 64, for example, is often cited as a classic, but it is simply unacceptable to see that entry sitting at the top of the list. The unpredictable collisions, dated graphics, and unbalanced gameplay make for a frustrating experience. It is high time fans opened their eyes and admitted that this game is not the masterpiece some claim it to be!

    And what about Mario Kart Wii? Races in that game are routinely disrupted by a badly designed drift system and items that seem to have been programmed to wreck your progress rather than balance the game. How many times have we lost to a blue shell launched at a critical moment? It is unbearable! If we want to talk about fairness in racing, Nintendo has clearly failed here. Unbalanced power-ups and badly designed tracks create an experience that undermines the fun this series is supposed to be built on.

    And don't even get me started on Mario Kart 8 Deluxe, supposedly the ultimate version. Yes, the graphics are gorgeous, but that does not make up for the balance problems or for the fact that the game favors the most experienced players at the expense of newcomers. What happened to the fun of playing? If the developers cannot guarantee a fair experience, they should rethink their approach and listen to player feedback instead of tossing out superficial updates!

    It is high time the gaming community stood up and demanded change. Ranking the Mario Kart games should not be a mere exercise but a call to reflect on what we really want from a racing game. Let's stop celebrating games that, instead of bringing people together, cause arguments and frustration. We deserve better than that!

    #MarioKart #VideoGames #GameCriticism #Nintendo #Switch2
    KOTAKU.COM
    The Mario Kart Games, Ranked From Worst To Best
    For over thirty years, we’ve been driving like maniacs, questioning the meaning of fairness and ending friendships in Nintendo’s Mario Kart series. So with Mario Kart World kicking off the Switch 2’s launch this month, why not see if we can end a few
  • Why is it that in the age of advanced technology and innovative gaming experiences, we are still subjected to the sheer frustration of poorly implemented mini-games? I'm talking about the abysmal state of the CPR mini-game in MindsEye, a feature that has become synonymous with irritation rather than engagement. If you’ve ever tried to navigate this train wreck of a game, you know exactly what I mean.

    Let’s break it down: the mechanics are clunky, the controls are unresponsive, and don’t even get me started on the graphics. At this point, seamless integration and fluid gameplay should be table stakes. Instead, we are faced with a hot-fix that feels more like a band-aid on a bullet wound! How is it acceptable that players have to endure such a frustrating experience, waiting for a fix to a problem that should never have existed in the first place?

    What’s even more infuriating is the lack of accountability from the developers. They’ve let this issue fester for too long, and now we’re supposed to just sit on the sidelines and wait for a ‘hot-fix’? How about some transparency? How about acknowledging that you dropped the ball on this one? Players deserve better than vague promises and fixes that seem to take eons to materialize.

    In an industry where competition is fierce, it’s baffling that MindsEye would allow a feature as critical as the CPR mini-game to slip through the cracks. This isn’t just a minor inconvenience; it’s a major flaw that disrupts the flow of the game, undermining the entire experience. Players are losing interest, and rightfully so! Why invest time and energy into something that’s clearly half-baked?

    And let’s talk about the community feedback. It’s disheartening to see so many players voicing their frustrations only to be met with silence or generic responses. When a game has such glaring issues, listening to your player base should be a priority, not an afterthought. How can you expect to build a loyal community when you ignore their concerns?

    At this point, it’s clear that MindsEye needs to step up its game. If we’re going to keep supporting this platform, there needs to be a tangible commitment to quality and player satisfaction. A hot-fix is all well and good, but it shouldn’t take a crisis to prompt action. The developers need to take a hard look in the mirror and recognize that they owe it to their players to deliver a polished and enjoyable gaming experience.

    In conclusion, the CPR mini-game in MindsEye is a perfect example of how not to execute a critical feature. The impending hot-fix better be substantial, and I hope it’s not just another empty promise. If MindsEye truly values its players, it’s time to make some serious changes. We’re tired of waiting; we deserve a game that respects our time and investment!

    #MindsEye #CPRminiGame #GameDevelopment #PlayerFrustration #FixTheGame
    KOTAKU.COM
    A Hot-Fix Is On The Way For MindsEye's Frustrating CPR Mini-Game
  • I feel so lonely and disappointed. The accusations of mismanagement leveled at the organizers of Gamescom Latam echo painfully inside me. More than 250 developers from Brazil have stood up, voicing their frustration and despair over nonexistent support during what should have been a moment of pride and celebration.

    We had all imagined this event as a shining showcase for our creations, a place where our dreams could sparkle. But now all I feel is a deep sense of abandonment. The hopes we had placed in Gamescom Latam have evaporated, replaced by a bitter emptiness.

    It is not easy to carry the weight of those dashed expectations. Every developer invested not only time and resources but also a piece of their soul in this project. We believed in the promise of a united community, but with this mismanagement, that dream seems to drift further away with each passing day. The organizers' words, though present, lack warmth and understanding. They ring hollow, like a promise never kept.

    I always thought the video game industry was a refuge, a place where creativity and passion could flourish together. But today I feel betrayed, isolated in a sea of uncertainty. Where is the support we deserve? Why were we left to struggle through the storm?

    I look around me and see colleagues, friends, and strangers, all sharing this same sense of despair. We should have stood united, celebrating our successes and learning from our failures together. But now that bond seems so fragile, cracking under the pressure of inattention and neglect.

    We deserve better. We deserve to be heard, supported, and valued. This is not just about events or games; it is about respect, about recognition of our hard work and our passion. I hope that one day those who run these events will realize how much every voice counts, and how much every developer deserves support.

    I feel an urgent need for connection, to know that I am not alone in this struggle. Perhaps together we can overcome this feeling of loneliness and despair. But for now I remain here, lost in my thoughts, hoping for better days.

    #GamescomLatam #Developers #VideoGames #Loneliness #Frustration
    WWW.GAMEDEVELOPER.COM
    Update: Gamescom Latam organizers respond to mismanagement allegations
    Event organizers have responded after over 250 developers from Brazil accused Gamescom Latam of failing to provide adequate support during the high-profile showcase.
  • The protests in Los Angeles have brought a lot of attention, but honestly, it’s just the same old story. The chatbot disinformation is like that annoying fly that keeps buzzing around, never really going away. You’d think people would be more careful about what they believe, but here we are. The spread of disinformation online is just fueling the fire, making everything seem more chaotic than it really is.

    It’s kind of exhausting to see the same patterns repeat. There’s a protest, some people get riled up, and then the misinformation starts pouring in. It’s like a never-ending cycle. Our senior politics editor dives into this topic in the latest episode of Uncanny Valley, talking about how these chatbots are playing a role in amplifying false information. Not that many people seem to care, though.

    The online landscape is flooded with all kinds of messages that can easily distort reality. It’s almost as if people are too tired to fact-check anymore. Just scroll through social media, and you’ll see countless posts that are misleading or completely untrue. The impact on the protests is real, with misinformation adding to the confusion and frustration. One could argue that it’s a bit depressing, really.

    As the protests continue, it’s hard to see a clear path forward. Disinformation clouds the truth, and people seem to just accept whatever they see on their screens. It’s all so monotonous. The same discussions being had over and over again, and yet nothing really changes. The chatbots keep generating content, and the cycle goes on.

    Honestly, it makes you wonder whether anyone is actually listening or if they’re just scrolling mindlessly. The discussions about the protests and the role of disinformation should be enlightening, but they often feel repetitive and bland. It’s hard to muster any excitement when the conversations feel so stale.

    In the end, it’s just more noise in a world that’s already too loud. The protests might be important, but the chatbots and their disinformation are just taking away from the real issues at hand. This episode of Uncanny Valley might shed some light, but will anyone really care? Who knows.

    #LosAngelesProtests
    #Disinformation
    #Chatbots
    #UncannyValley
    #Misinformation
    WWW.WIRED.COM
    The Chatbot Disinfo Inflaming the LA Protests
    On this episode of Uncanny Valley, our senior politics editor discusses the spread of disinformation online following the onset of the Los Angeles protests.
  • Just add humans: Oxford medical study underscores the missing link in chatbot testing

    Headlines have been blaring it for years: Large language models can not only pass medical licensing exams but also outperform humans. GPT-4 could correctly answer U.S. medical licensing exam questions 90% of the time, even in the prehistoric AI days of 2023. Since then, LLMs have gone on to best the residents taking those exams and licensed physicians.
    Move over, Doctor Google, make way for ChatGPT, M.D. But you may want more than a diploma from the LLM you deploy for patients. Like an ace medical student who can rattle off the name of every bone in the hand but faints at the first sight of real blood, an LLM’s mastery of medicine does not always translate directly into the real world.
    A paper by researchers at the University of Oxford found that while LLMs could correctly identify relevant conditions 94.9% of the time when directly presented with test scenarios, human participants using LLMs to diagnose the same scenarios identified the correct conditions less than 34.5% of the time.
    Perhaps even more notably, patients using LLMs performed even worse than a control group that was merely instructed to diagnose themselves using “any methods they would typically employ at home.” The group left to their own devices was 76% more likely to identify the correct conditions than the group assisted by LLMs.
    The Oxford study raises questions about the suitability of LLMs for medical advice and the benchmarks we use to evaluate chatbot deployments for various applications.
    Guess your malady
    Led by Dr. Adam Mahdi, researchers at Oxford recruited 1,298 participants to present themselves as patients to an LLM. They were tasked with both attempting to figure out what ailed them and the appropriate level of care to seek for it, ranging from self-care to calling an ambulance.
    Each participant received a detailed scenario, representing conditions from pneumonia to the common cold, along with general life details and medical history. For instance, one scenario describes a 20-year-old engineering student who develops a crippling headache on a night out with friends. It includes important medical details and red herrings.
    The study tested three different LLMs. The researchers selected GPT-4o on account of its popularity, Llama 3 for its open weights and Command R+ for its retrieval-augmented generation abilities, which allow it to search the open web for help.
    Participants were asked to interact with the LLM at least once using the details provided, but could use it as many times as they wanted to arrive at their self-diagnosis and intended action.
    Behind the scenes, a team of physicians unanimously decided on the “gold standard” conditions they sought in every scenario, and the corresponding course of action. Our engineering student, for example, is suffering from a subarachnoid haemorrhage, which should entail an immediate visit to the ER.
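    Stripped of the medical detail, the protocol reduces to a small evaluation harness: a vignette, a physician-agreed answer key, a free-form chat, and a score at the end. Here is a minimal sketch of how such a harness might be structured in Python; the class and field names are illustrative assumptions, not the researchers' actual code.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    vignette: str              # life details, medical history, red herrings
    gold_conditions: set[str]  # physician-agreed "gold standard" conditions
    gold_action: str           # e.g. "go to the ER immediately"

@dataclass
class ParticipantAnswer:
    conditions: set[str]       # what the participant finally guessed
    action: str                # the level of care they chose

def score(answer: ParticipantAnswer, scenario: Scenario) -> dict:
    # The study's two headline checks: at least one relevant condition
    # identified, and the correct course of action selected.
    return {
        "condition_hit": bool(answer.conditions & scenario.gold_conditions),
        "action_correct": answer.action == scenario.gold_action,
    }
```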
    A game of telephone
    While you might assume an LLM that can ace a medical exam would be the perfect tool to help ordinary people self-diagnose and figure out what to do, it didn’t work out that way. “Participants using an LLM identified relevant conditions less consistently than those in the control group, identifying at least one relevant condition in at most 34.5% of cases compared to 47.0% for the control,” the study states. They also failed to deduce the correct course of action, selecting it just 44.2% of the time, compared to 56.3% for an LLM acting independently.
    What went wrong?
    Looking back at transcripts, researchers found that participants both provided incomplete information to the LLMs and the LLMs misinterpreted their prompts. For instance, one user who was supposed to exhibit symptoms of gallstones merely told the LLM: “I get severe stomach pains lasting up to an hour, It can make me vomit and seems to coincide with a takeaway,” omitting the location of the pain, the severity, and the frequency. Command R+ incorrectly suggested that the participant was experiencing indigestion, and the participant incorrectly guessed that condition.
    Even when LLMs delivered the correct information, participants didn’t always follow its recommendations. The study found that 65.7% of GPT-4o conversations suggested at least one relevant condition for the scenario, but somehow less than 34.5% of final answers from participants reflected those relevant conditions.
    The human variable
    This study is useful, but not surprising, according to Nathalie Volkheimer, a user experience specialist at the Renaissance Computing Institute, University of North Carolina at Chapel Hill.
    “For those of us old enough to remember the early days of internet search, this is déjà vu,” she says. “As a tool, large language models require prompts to be written with a particular degree of quality, especially when expecting a quality output.”
    She points out that someone experiencing blinding pain wouldn’t offer great prompts. Although participants in a lab experiment weren’t experiencing the symptoms directly, they weren’t relaying every detail.
    “There is also a reason why clinicians who deal with patients on the front line are trained to ask questions in a certain way and a certain repetitiveness,” Volkheimer goes on. Patients omit information because they don’t know what’s relevant, or at worst, lie because they’re embarrassed or ashamed.
    Can chatbots be better designed to address these human factors? “I wouldn’t put the emphasis on the machinery here,” Volkheimer cautions. “I would consider the emphasis should be on the human-technology interaction.” The car, she analogizes, was built to get people from point A to B, but many other factors play a role. “It’s about the driver, the roads, the weather, and the general safety of the route. It isn’t just up to the machine.”
    A better yardstick
    The Oxford study highlights one problem, not with humans or even LLMs, but with the way we sometimes measure them—in a vacuum.
    When we say an LLM can pass a medical licensing test, real estate licensing exam, or a state bar exam, we’re probing the depths of its knowledge base using tools designed to evaluate humans. However, these measures tell us very little about how successfully these chatbots will interact with humans.
    “The prompts were textbook, but life and people are not textbook,” explains Dr. Volkheimer.
    Imagine an enterprise about to deploy a support chatbot trained on its internal knowledge base. One seemingly logical way to test that bot might simply be to have it take the same test the company uses for customer support trainees: answering prewritten “customer” support questions and selecting multiple-choice answers. An accuracy of 95% would certainly look pretty promising.
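    To make the contrast concrete, here is roughly what such a static check looks like; a hypothetical sketch, with `ask_bot` standing in for whatever chat-completion call the company uses.

```python
# Hypothetical static benchmark of the kind described above: grade the
# bot on prewritten multiple-choice support questions.
def run_static_benchmark(ask_bot, questions):
    """questions: iterable of (prompt, options dict, correct letter)."""
    correct, total = 0, 0
    for prompt, options, answer in questions:
        menu = "\n".join(f"{letter}) {text}" for letter, text in options.items())
        reply = ask_bot(f"{prompt}\n{menu}\nAnswer with a single letter.")
        correct += reply.strip().upper().startswith(answer.upper())
        total += 1
    return correct / total  # a 95% score here still says nothing about
                            # vague, angry, or rambling real customers
```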
    Then comes deployment: Real customers use vague terms, express frustration, or describe problems in unexpected ways. The LLM, benchmarked only on clear-cut questions, gets confused and provides incorrect or unhelpful answers. It hasn’t been trained or evaluated on de-escalating situations or seeking clarification effectively. Angry reviews pile up. The launch is a disaster, despite the LLM sailing through tests that seemed robust for its human counterparts.
    This study serves as a critical reminder for AI engineers and orchestration specialists: if an LLM is designed to interact with humans, relying solely on non-interactive benchmarks can create a dangerous false sense of security about its real-world capabilities. If you’re designing an LLM to interact with humans, you need to test it with humans – not tests for humans. But is there a better way?
    Using AI to test AI
    The Oxford researchers recruited nearly 1,300 people for their study, but most enterprises don’t have a pool of test subjects sitting around waiting to play with a new LLM agent. So why not just substitute AI testers for human testers?
    Mahdi and his team tried that, too, with simulated participants. “You are a patient,” they prompted an LLM, separate from the one that would provide the advice. “You have to self-assess your symptoms from the given case vignette and assistance from an AI model. Simplify terminology used in the given paragraph to layman language and keep your questions or statements reasonably short.” The LLM was also instructed not to use medical knowledge or generate new symptoms.
    These simulated participants then chatted with the same LLMs the human participants used. But they performed much better. On average, simulated participants using the same LLM tools nailed the relevant conditions 60.7% of the time, compared to below 34.5% in humans.
    In this case, it turns out LLMs play nicer with other LLMs than humans do, which makes them a poor predictor of real-life performance.
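    Because the article quotes the simulated-patient prompt directly, the loop is easy to reconstruct in outline. A sketch under stated assumptions: `complete(system, messages)` stands in for any chat-completion API, and the advisor prompt and turn count are illustrative, not from the study.

```python
PATIENT_SYSTEM = (
    "You are a patient. You have to self-assess your symptoms from the "
    "given case vignette and assistance from an AI model. Simplify "
    "terminology used in the given paragraph to layman language and keep "
    "your questions or statements reasonably short."
)  # the study also barred this model from using medical knowledge

def simulate_consultation(complete, vignette, turns=3):
    transcript = []          # everything the advisor model gets to see
    advice = "(none yet)"
    for _ in range(turns):
        # The simulated patient sees only the vignette plus the latest advice.
        patient_says = complete(
            PATIENT_SYSTEM,
            [("user", f"Case vignette: {vignette}\n\nAI model's reply: {advice}")],
        )
        transcript.append(("user", patient_says))
        # The advisor sees only what the "patient" chooses to tell it.
        advice = complete("You are a medical advice assistant.", transcript)
        transcript.append(("assistant", advice))
    return transcript        # later scored against the physicians' gold standard
```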
    Don’t blame the user
    Given the scores LLMs could attain on their own, it might be tempting to blame the participants here. After all, in many cases they received the right diagnosis in their conversations with LLMs, but still failed to guess it correctly. But that would be a foolhardy conclusion for any business, Volkheimer warns.
    “In every customer environment, if your customers aren’t doing the thing you want them to, the last thing you do is blame the customer,” says Volkheimer. “The first thing you do is ask why. And not the ‘why’ off the top of your head: but a deep investigative, specific, anthropological, psychological, examined ‘why.’ That’s your starting point.”
    You need to understand your audience, their goals, and the customer experience before deploying a chatbot, Volkheimer suggests. All of these will inform the thorough, specialized documentation that will ultimately make an LLM useful. Without carefully curated training materials, “It’s going to spit out some generic answer everyone hates, which is why people hate chatbots,” she says. When that happens, “It’s not because chatbots are terrible or because there’s something technically wrong with them. It’s because the stuff that went in them is bad.”
    “The people designing technology, developing the information to go in there and the processes and systems are, well, people,” says Volkheimer. “They also have background, assumptions, flaws and blindspots, as well as strengths. And all those things can get built into any technological solution.”

We give you the inside scoop on what companies are doing with generative AI, from regulatory shifts to practical deployments, so you can share insights for maximum ROI. Read our Privacy Policy Thanks for subscribing. Check out more VB newsletters here. An error occured. #just #add #humans #oxford #medical
    VENTUREBEAT.COM
    Just add humans: Oxford medical study underscores the missing link in chatbot testing
Headlines have been blaring it for years: large language models (LLMs) can not only pass medical licensing exams but also outperform humans. GPT-4 could correctly answer U.S. medical licensing exam questions 90% of the time, even in the prehistoric AI days of 2023. Since then, LLMs have gone on to best both the residents taking those exams and licensed physicians. Move over, Doctor Google; make way for ChatGPT, M.D.

But you may want more than a diploma from the LLM you deploy for patients. Like an ace medical student who can rattle off the name of every bone in the hand but faints at the first sight of real blood, an LLM’s mastery of medicine does not always translate directly into the real world. A paper by researchers at the University of Oxford found that while LLMs could correctly identify relevant conditions 94.9% of the time when directly presented with test scenarios, human participants using LLMs to diagnose the same scenarios identified the correct conditions less than 34.5% of the time.

Perhaps even more notably, patients using LLMs performed worse than a control group that was merely instructed to diagnose themselves using “any methods they would typically employ at home.” The group left to their own devices was 76% more likely to identify the correct conditions than the group assisted by LLMs. The Oxford study raises questions about the suitability of LLMs for medical advice, and about the benchmarks we use to evaluate chatbot deployments for various applications.

Guess your malady

Led by Dr. Adam Mahdi, researchers at Oxford recruited 1,298 participants to present themselves as patients to an LLM. They were tasked with figuring out both what ailed them and the appropriate level of care to seek, ranging from self-care to calling an ambulance. Each participant received a detailed scenario, representing conditions from pneumonia to the common cold, along with general life details and medical history. For instance, one scenario describes a 20-year-old engineering student who develops a crippling headache on a night out with friends. It includes important medical details (it’s painful to look down) and red herrings (he’s a regular drinker, shares an apartment with six friends, and just finished some stressful exams).

The study tested three different LLMs: GPT-4o on account of its popularity, Llama 3 for its open weights, and Command R+ for its retrieval-augmented generation (RAG) abilities, which let it search the open web for help. Participants were asked to interact with the LLM at least once using the details provided, but could use it as many times as they wanted to arrive at their self-diagnosis and intended action. Behind the scenes, a team of physicians unanimously decided on the “gold standard” conditions they sought in every scenario, along with the corresponding course of action. Our engineering student, for example, is suffering from a subarachnoid haemorrhage, which should entail an immediate visit to the ER.
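That scoring step is easy to picture in code. Below is a minimal sketch in Python of how a final self-assessment might be graded against the physician-agreed gold standard; the data shapes and function names are hypothetical illustrations, not the Oxford team's actual pipeline.

    # Hypothetical sketch of the scoring step: each scenario carries the
    # physician-agreed "gold standard" conditions and level of care, and a
    # participant's final answer counts as a hit if it names at least one
    # relevant condition.

    from dataclasses import dataclass

    @dataclass
    class Scenario:
        vignette: str
        gold_conditions: list[str]    # e.g. ["subarachnoid haemorrhage"]
        gold_disposition: str         # e.g. "emergency department"

    def score_final_answer(scenario: Scenario,
                           named_conditions: list[str],
                           chosen_disposition: str) -> dict:
        """Score one participant's final self-assessment for one scenario."""
        named = {c.strip().lower() for c in named_conditions}
        condition_hit = any(g.lower() in named for g in scenario.gold_conditions)
        return {
            "relevant_condition_identified": condition_hit,
            "correct_disposition": chosen_disposition.lower()
                                   == scenario.gold_disposition.lower(),
        }

Averaging relevant_condition_identified over all runs yields headline rates like the study's 34.5% figure for human participants.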
A game of telephone

While you might assume an LLM that can ace a medical exam would be the perfect tool to help ordinary people self-diagnose and figure out what to do, it didn’t work out that way. “Participants using an LLM identified relevant conditions less consistently than those in the control group, identifying at least one relevant condition in at most 34.5% of cases compared to 47.0% for the control,” the study states. They also failed to deduce the correct course of action, selecting it just 44.2% of the time, compared to 56.3% for an LLM acting independently.

What went wrong? Looking back at transcripts, researchers found that participants both provided incomplete information to the LLMs and the LLMs misinterpreted their prompts. For instance, one user who was supposed to exhibit symptoms of gallstones merely told the LLM, “I get severe stomach pains lasting up to an hour, It can make me vomit and seems to coincide with a takeaway,” omitting the location, severity, and frequency of the pain. Command R+ incorrectly suggested that the participant was experiencing indigestion, and the participant incorrectly guessed that condition. Even when LLMs delivered the correct information, participants didn’t always follow their recommendations. The study found that 65.7% of GPT-4o conversations suggested at least one relevant condition for the scenario, yet less than 34.5% of participants’ final answers reflected those conditions.

The human variable

This study is useful but not surprising, according to Nathalie Volkheimer, a user experience specialist at the Renaissance Computing Institute (RENCI) at the University of North Carolina at Chapel Hill. “For those of us old enough to remember the early days of internet search, this is déjà vu,” she says. “As a tool, large language models require prompts to be written with a particular degree of quality, especially when expecting a quality output.” She points out that someone experiencing blinding pain wouldn’t offer great prompts. Although participants in a lab experiment weren’t experiencing the symptoms directly, they weren’t relaying every detail either.

“There is also a reason why clinicians who deal with patients on the front line are trained to ask questions in a certain way and with a certain repetitiveness,” Volkheimer goes on. Patients omit information because they don’t know what’s relevant, or, at worst, lie because they’re embarrassed or ashamed. Can chatbots be designed to work around these habits? “I wouldn’t put the emphasis on the machinery here,” Volkheimer cautions. “I would consider the emphasis should be on the human-technology interaction.” The car, she analogizes, was built to get people from point A to point B, but many other factors play a role: “It’s about the driver, the roads, the weather, and the general safety of the route. It isn’t just up to the machine.”

A better yardstick

The Oxford study highlights a problem not with humans or even with LLMs, but with the way we sometimes measure them: in a vacuum. When we say an LLM can pass a medical licensing test, a real estate licensing exam, or a state bar exam, we’re probing the depths of its knowledge base using tools designed to evaluate humans. These measures tell us very little about how successfully a chatbot will interact with humans. “The prompts were textbook (as validated by the source and medical community), but life and people are not textbook,” explains Dr. Volkheimer.

Imagine an enterprise about to deploy a support chatbot trained on its internal knowledge base. One seemingly logical way to test that bot is to have it take the same test the company uses for customer support trainees: answering prewritten “customer” support questions and selecting multiple-choice answers. An accuracy of 95% would certainly look promising.
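For illustration, that kind of vacuum benchmark fits in a dozen lines of Python. The sketch below is hypothetical: ask_bot stands in for whatever chatbot endpoint is being tested, and the question items are assumed to be prewritten triples. Note that nothing in it ever exercises a multi-turn conversation.

    # Hypothetical static benchmark: grade single-shot answers to prewritten
    # multiple-choice questions. No follow-ups, no vague phrasing, no
    # frustrated customers -- exactly the "in a vacuum" measurement at issue.

    def run_static_benchmark(items, ask_bot):
        """items: iterable of (question, options_dict, correct_label) triples."""
        items = list(items)
        correct = 0
        for question, options, correct_label in items:
            prompt = (question + "\n"
                      + "\n".join(f"{label}. {text}"
                                  for label, text in options.items())
                      + "\nAnswer with a single letter.")
            reply = ask_bot(prompt).strip().upper()
            if reply.startswith(correct_label.upper()):
                correct += 1
        return correct / len(items)

A score of 0.95 from a harness like this says nothing about how the bot handles ambiguity, emotion, or the need to ask a clarifying question.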
Then comes deployment: real customers use vague terms, express frustration, or describe problems in unexpected ways. The LLM, benchmarked only on clear-cut questions, gets confused and provides incorrect or unhelpful answers. It hasn’t been trained or evaluated on de-escalating situations or on seeking clarification effectively. Angry reviews pile up. The launch is a disaster, despite the LLM sailing through tests that seemed robust for its human counterparts.

This study is a critical reminder for AI engineers and orchestration specialists: if an LLM is designed to interact with humans, relying solely on non-interactive benchmarks can create a dangerous false sense of security about its real-world capabilities. If you’re designing an LLM to interact with humans, you need to test it with humans, not with tests for humans. But is there a better way?

Using AI to test AI

The Oxford researchers recruited nearly 1,300 people for their study, but most enterprises don’t have a pool of test subjects sitting around waiting to play with a new LLM agent. So why not substitute AI testers for human testers? Mahdi and his team tried that too, with simulated participants. “You are a patient,” they prompted an LLM, separate from the one that would provide the advice. “You have to self-assess your symptoms from the given case vignette and assistance from an AI model. Simplify terminology used in the given paragraph to layman language and keep your questions or statements reasonably short.” The LLM was also instructed not to use medical knowledge or generate new symptoms.

These simulated participants then chatted with the same LLMs the human participants had used, and they performed much better. On average, simulated participants using the same LLM tools nailed the relevant conditions 60.7% of the time, compared to below 34.5% for humans. LLMs, it turns out, play nicer with other LLMs than humans do, which makes them a poor predictor of real-life performance.
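A minimal sketch of that simulated-participant loop follows: one model role-plays the patient (seeded with the case vignette and instructions along the lines quoted above), a second model gives the advice, and the two alternate turns. The helper named complete is a hypothetical stand-in mapping a system prompt and transcript to the next message, and max_turns is an assumed parameter; swap in your provider's actual chat API.

    # Hypothetical simulated-patient harness: two LLM roles alternate turns.
    # The patient prompt paraphrases the instructions quoted from the study;
    # complete(system, transcript) -> str is an assumed helper, not a real API.

    PATIENT_SYSTEM = (
        "You are a patient. You have to self-assess your symptoms from the "
        "given case vignette and assistance from an AI model. Simplify "
        "terminology used in the given paragraph to layman language and keep "
        "your questions or statements reasonably short. Do not use medical "
        "knowledge and do not generate new symptoms."
    )

    ADVISOR_SYSTEM = "You are helping a member of the public assess their symptoms."

    def simulate_consultation(vignette, complete, max_turns=5):
        """Run one simulated patient-advisor conversation; return the transcript."""
        transcript = []
        for _ in range(max_turns):
            patient_msg = complete(PATIENT_SYSTEM + "\n\nVignette:\n" + vignette,
                                   transcript)
            transcript.append(("patient", patient_msg))
            advice = complete(ADVISOR_SYSTEM, transcript)
            transcript.append(("advisor", advice))
        return transcript

Harnesses like this are still worth running as cheap smoke tests, but the 60.7% versus sub-34.5% gap shows a simulated user is an optimistic stand-in for a real one.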
Don’t blame the user

Given the scores LLMs could attain on their own, it might be tempting to blame the participants. After all, in many cases they received the right diagnoses in their conversations with LLMs but still failed to guess them. That would be a foolhardy conclusion for any business, Volkheimer warns. “In every customer environment, if your customers aren’t doing the thing you want them to, the last thing you do is blame the customer,” she says. “The first thing you do is ask why. And not the ‘why’ off the top of your head: but a deep investigative, specific, anthropological, psychological, examined ‘why.’ That’s your starting point.”

You need to understand your audience, their goals, and the customer experience before deploying a chatbot, Volkheimer suggests. All of these will inform the thorough, specialized documentation that ultimately makes an LLM useful. Without carefully curated training materials, “it’s going to spit out some generic answer everyone hates, which is why people hate chatbots,” she says. When that happens, “it’s not because chatbots are terrible or because there’s something technically wrong with them. It’s because the stuff that went in them is bad.”

“The people designing technology, developing the information to go in there, and the processes and systems are, well, people,” says Volkheimer. “They also have backgrounds, assumptions, flaws and blind spots, as well as strengths. And all those things can get built into any technological solution.”