• Perverse vibes, Figma’s future IPO, 20+ GenAI UX patterns

    Weekly curated resources for designers — thinkers and makers.

    “Its “almost there” quality — the feeling we’re just one prompt away from the perfect solution — is what makes it so addicting. Vibe coding operates on the principle of variable-ratio reinforcement, a powerful form of operant conditioning where rewards come unpredictably. Unlike fixed rewards, this intermittent success pattern (“the code works! it’s brilliant! it just broke! wtf!”) triggers stronger dopamine responses in our brain’s reward pathways, similar to gambling behaviors.”

    The perverse incentives of Vibe Coding → By fred benenson

    Is your research repository holding back the impact of your insights? → [Sponsored] Join UX research experts Jake Burghardt and Emily DiLeo as they share the 6 red flags to look out for in failing repositories. Plus, get practical tips on how to build a repository that ensures your UX research delivers business value.

    Editor picks
    Figma uses nostalgia for their future in IPO → What happens when design tools grow up — and grow corporate. By Darren Yeo
    Do people really want AI friends? → Zuckerberg seems to think so. By Daley Wilhelm
    Design for trust, then for possibility → From horseless carriages to robotaxis. By Sarah Cordivano

    The UX Collective is an independent design publication that elevates unheard design voices and helps designers think more critically about their work.
    Nordic design gallery →

    Make me think
    There should be no AI button → “It’s often unclear what the button will actually do. You may have a small text box to add a user prompt, but you’re at the mercy of the quality of an opaque system prompt.”
    Products need soul but markets reward scale → “Uber is the clearest example of a company that let go of the original story and embraced what the market wanted. It started out as a premium ride experience. Nice cars, polite drivers, smooth UX. Then it went public. Growth expectations took over. Fleet owners stepped in. Car quality dropped. The experience became inconsistent. And then came the ads.”
    About showing the “open to work” badge → “The reason might be that I do use LinkedIn professionally and that I’ve been both recruiting and being hired by large corporations. I’ve also been part of reorganisations, companies going bust and was on the wrong spreadsheet when mass layoffs happened. So I know how it feels to not have a job even when your performance was great.”

    Little gems this week
    Is your creative character being sacrificed to Algorithm, Inc? → By Ian Batterbee
    No country for junior designers → By Patrick Morgan
    The next design trend should start with your hands, not a computer → By Michael F. Buckley

    Tools and resources
    20+ GenAI UX patterns → AI beyond the model. By Sharang Sharma
    Using simulation models in UX research → Why it’s time we take behavior seriously. By Talieh Kazemi
    Design in the age of vibes → What the new wave of tools means for the future. By John Moriarty

    Support the newsletter
    If you find our content helpful, here’s how you can support us:
    - Check out this week’s sponsor to support their work too
    - Forward this email to a friend and invite them to subscribe
    - Sponsor an edition

    Perverse vibes, Figma’s future IPO, 20+ GenAI UX patterns was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • Doctor Who Series 15 Episode 7 Review: Wish World

    Warning: contains spoilers for Doctor Who episode “Wish World”.
    In the penultimate episode of this season, John Smith and his loving wife Belinda live a picture-perfect life in suburbia with their very real daughter Poppy. Conrad Clark promises beautiful weather and tells light-hearted, very not-portentous stories on the TV, giant bone creatures stride across London, and everything is very normal. But Ruby Sunday is having doubts…
    How many ideas are too many?

    It’s a question that has nagged while watching this season of Doctor Who. While it’s arguably indecorous to snipe at previous eras of the show, it did sometimes feel like the Chibnall administration struggled to rustle up one killer idea per episode. That’s not been the problem with the second Russell T Davies epoch – quite the opposite, in fact. Granted, complaining about Doctor Who taking big swings is kind of like complaining about water being wet, but I’m not sure you can build a TV show on big swings alone. There are tons of ideas at play, and energy to spare (something the Chibnall era also often lacked), but the connective tissue isn’t always there to tie it all together.

    “Wish World” has so much going on in this episode – we have to get to grips with an entirely new alternate reality, and our familiar characters’ new roles within it. We have the two Ranis, another new member of the Pantheon (a “terrifying” mystical baby with the power to grant wishes), Shirley’s ragtag crew of dispossessed freedom fighters, shots at reactionary conservatism, ableism, homophobia and tradwife aesthetics. The Seal of Rassilon is there. And then the climactic revelation that all this is merely a means to an end, as the Rani’s (Ranis’?) true objective becomes clear – to burrow beneath the surface of reality and find Omega, an all-powerful figure from ancient Time Lord history.
    It would be overstating it to say that the episode falls apart round about the time that Rani Prime (Archie Panjabi, having great fun chewing the appropriate quantity of scenery) starts monologuing to a confused Doctor about her dastardly scheme, but it’s where the cracks really start to show. It’s not the most elegant exposition that Davies has ever written, even if he does hang a cheeky lampshade on it by having the Rani explicitly refer to it as such, and making it part of her scheme. Steven Moffat tended to excel at these sorts of whirling expository scenes where everything falls into place, whereas here it very much feels like a rushed info dump connecting a bunch of disparate elements that haven’t all been adequately set up.
    It’s also here that the structure of ‘lots of ideas carried along with manic energy and high production values’ really creaks. Spending time in the wish world is great fun, with all the joys of mirror universe style stories, seeing everybody forced into perversely inappropriate roles and trying to work out exactly how this world works – or doesn’t work, as the case may be. There are lots of little grace notes, like Colonel Ibrahim’s horrified reaction when the Doctor unthinkingly reassures him that he’s “a beautiful man”, or the fascinating scene between Conrad and Mrs Flood, showing us the strain that keeping the wish alive is having on Conrad, and his uneasy relationship with the creepily chuckling god baby.
    But then the Rani starts monologuing, and it’s revealed that all of this – two years of Mrs Flood hints, the Pantheon, Conrad, the vindicators, the destruction of Earth, the wish world – is in service of reaching back into the dim and distant past of Gallifrey and finding an ancient Time Lord. A character who, if memory serves, hasn’t appeared on TV since the 1980s, apart from a blink-and-you-miss-it cameo in 2020’s “The Timeless Children”.
    It’s impossible to properly judge this reveal until we’ve seen next week’s “The Reality War”, but based on first impressions, it’s hard to feel terribly excited about the return of Omega. For an episode that’s generally so weird and spiky, and full of wonderfully unsettling imagery (like the baby’s mother gently collapsing into a pile of flowers), finding out that it’s all building towards the reveal of a figure who really belongs in the Wilderness Years does feel a tad anticlimactic. More than that, it feels fundamentally backwards-looking, which is a bizarre thing to be saying in a review of an episode that features a giggling god baby who grants wishes. Terrifying god babies that grant wishes are not something we’ve explored much in Doctor Who, whereas ancient Time Lord history really feels like it’s been done to death.
    Of course, it could all be a feint. Perhaps the twist will be that it was about the terrifying god baby all along, and Omega will remain in the dustbin of history. But, as with last season’s reveal of Sutekh, it almost feels as though Russell T Davies – who was so careful with how he rationed out classic series characters and references during his first run – is making up for lost time by playing with as much Doctor Who lore as he can get his hands on while he has the budget to visualise it, whether it’s the most dramatically compelling choice or not. And it contributes to the uneasy feeling that, while there are plenty of new ideas being introduced in this era, the inexorable gravity of Doctor Who’s mythos is always going to overpower them, so even something as bananas as a wish-granting god baby ultimately plays second fiddle.

    Admittedly, fans do like to see stuff they recognise. I am a fan. I like to see stuff I recognise. But we should not be indulged!


    As underwhelming as the Omega reveal is, it doesn’t scupper the episode, which is full of great little moments. Belinda rushing off into the countryside to scream is chilling – Varada Sethu is brilliant throughout, convincingly embodying a different character while still being recognisable, and her gradual horrified realisations are very well played. Ncuti Gatwa is arguably the version of the Doctor who looks the most ill at ease wearing a boring suit and doing normal domestic stuff, so that’s all compellingly off-kilter – even if it would be nice if he woke up from the illusion a bit earlier. Conrad’s sneering and the Rani’s monologuing don’t have quite the same dramatic impact when triumphantly directed at a guy who barely knows who or where he is.
    The dynamic between Mrs Flood and Rani Prime is also a lot of fun, and the design of Wish World is brilliant, from the Tim Burton-esque identikit suburbia, the bone creatures and the weird cyber-bondage drone things, down to Conrad’s sharp white suit. As ever, in terms of production design and visuals, the show is firing on all cylinders. And while Davies is far from subtle when writing about social issues (did we really need two instances of Ruby being clumsily steered into making ableist microaggressions just so the others could chastise her for them?), the idea of the ignored and dispossessed rising up to save a society that has forsaken them is the kind of radical undercurrent that feels appropriately Doctor Who.
    But will they stick the landing? Will the Doctor escape the mother of all cliffhangers? Will we find out what’s going on with Poppy? Will we see more of Rogue? Where is Susan?
    And will Conrad get to finish his sandwich?
    Reservations aside, I’m excited to find out.

    Doctor Who series 15 concludes with “The Reality War” on Saturday May 31 on BBC One in the UK and Disney+ around the world.
  • The perverse incentives of Vibe Coding

    Image Credit: ChatGPT o3

    I’ve been using AI coding assistants like Claude Code for a while now, and I’m here to say (with all due respect to people who have substance abuse issues), I may be an addict. And boy, is this an expensive habit.

    Its “almost there” quality — the feeling we’re just one prompt away from the perfect solution — is what makes it so addicting. Vibe coding operates on the principle of variable-ratio reinforcement, a powerful form of operant conditioning where rewards come unpredictably. Unlike fixed rewards, this intermittent success pattern (“the code works! it’s brilliant! it just broke! wtf!”) triggers stronger dopamine responses in our brain’s reward pathways, similar to gambling behaviors.

    What makes this especially effective with AI is the minimal effort required for potentially significant rewards — creating what neuroscientists call an “effort discounting” advantage. Combined with our innate completion bias — the drive to finish tasks we’ve started — this creates a compelling psychological loop that keeps us prompting.

    I don’t smoke, but don’t these bar graphs look like cigarettes?

    Since Claude Code was released, I have probably spent over $1,000 vibe coding various projects into reality (some of which I hope to announce soon, don’t worry).

    But let’s talk about the expense too, because I think there’s something bad there as well: coding agents, and especially Claude 3.7 (the backend of Claude Code), tend to write too much code, a phenomenon that ends up costing users more than it should.

    Where an experienced developer might solve a problem in a few elegant lines with a thoughtful functional method, these AI systems often produce verbose, over-engineered solutions that tackle problems incrementally rather than addressing them at their core.

    My initial reaction was to attribute this to the relative immaturity of LLMs and their limitations when reasoning about abstract logic problems. Since these models are primarily trained to predict and generate text based on patterns they’ve seen before, it makes sense that they might struggle with the deeper architectural thinking that leads to elegant, minimal solutions.

    My human code on the left, Claude Code on the right, implementing the same algorithm

    And indeed, the highly complex tasks I’ve handed to them have largely resulted in failure: implementing a minimax algorithm in a novel card game, crafting thoughtful animations in CSS, completely refactoring a codebase. The LLMs routinely get lost in the sauce when it comes to thinking through the high-level principles required to solve difficult problems with computer science.

    In the example above, my human-implemented version of minimax from 2018 totals 400 lines of code, whereas Claude Code’s version comes in at 627 lines. The LLM version also requires almost a dozen other library files. Granted, this version is in TypeScript and has a ton of extra bells and whistles, some of which I explicitly asked for, but the real problem is: it doesn’t actually work. Furthermore, using the LLM to debug it requires sending the bloated code back and forth to the API every time I want to holistically debug it.
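    To be clear about what “the core of minimax” even means: the essential recursion is tiny. The sketch below is not my card-game implementation (or Claude’s) — it’s a generic Python illustration, and the ToyGame interface (is_terminal / score / legal_moves / apply) is a made-up stand-in — but it shows how compact the heart of the algorithm can be when nothing defensive or speculative gets bolted on.

    ```python
    # A minimal, generic minimax sketch — not the card-game code discussed above.
    # The ToyGame interface below is a hypothetical stand-in for a real game.

    def minimax(game, state, depth, maximizing):
        """Return the best score reachable from `state`, searching `depth` plies."""
        if depth == 0 or game.is_terminal(state):
            return game.score(state)  # terminal/heuristic value, from the maximizer's view
        values = [
            minimax(game, game.apply(state, move), depth - 1, not maximizing)
            for move in game.legal_moves(state)
        ]
        return max(values) if maximizing else min(values)


    class ToyGame:
        """Toy 'game' over a nested list: inner lists are choice points, leaves are scores."""
        def is_terminal(self, state):
            return not isinstance(state, list)

        def score(self, state):
            return state

        def legal_moves(self, state):
            return range(len(state))

        def apply(self, state, move):
            return state[move]


    # Maximizer picks a branch, minimizer then picks a leaf: min(3, 5) = 3, min(2, 9) = 2, max = 3.
    print(minimax(ToyGame(), [[3, 5], [2, 9]], depth=2, maximizing=True))  # -> 3
    ```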
    In an effort to impress the user and over-deliver, LLMs end up creating a rat’s nest of ultra-defensive code littered with debugging statements, neurotic comments and barely-useful helper functions. If you’ve ever worked in a highly functional production codebase, this is enough to drive you insane.

    I think everyone who spends any time vibe coding eventually discovers something like this and realizes that it’s much more worthwhile to work with a plan composed of discrete tasks that could be explained to a junior-level developer vs. a feature-level project handed off to a staff engineer.

    There’s also the likelihood that the vast majority of code that LLMs have been trained on tends to be inelegant and overly verbose. Lord knows there’s a lot of AbstractJavaFinalSerializedFactory code out there.

    But I’m beginning to think the problem runs deeper, and it has to do with the economics of AI assistance.

    The economic incentive problem

    Many AI coding assistants, including Claude Code, charge based on token count — essentially the amount of text processed and generated. This creates what economists would call a “perverse incentive” — an incentive that produces behavior contrary to what’s actually desired.

    Let’s break down how this works (a rough sketch of the arithmetic follows below):
    - The AI generates verbose, procedural code for a given task
    - This code becomes part of the context when you ask for further changes or additions (this is key)
    - The AI now has to read (and you pay for) this verbose code in every subsequent interaction
    - More tokens processed = more revenue for the company behind the AI
    - The LLM developers have no incentive to “fix” the verbose code problem because doing so will meaningfully impact their bottom line
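    The compounding is easier to see with numbers. Here’s a rough back-of-the-envelope sketch — the prices and token counts are illustrative assumptions, not actual API rates — showing how a verbose file that sits in context gets re-billed as input on every single turn of a session.

    ```python
    # Back-of-the-envelope: how a verbose file sitting in context compounds cost.
    # All numbers are illustrative assumptions, not actual API prices.

    PRICE_PER_INPUT_TOKEN = 3.00 / 1_000_000    # assume $3 per million input tokens
    PRICE_PER_OUTPUT_TOKEN = 15.00 / 1_000_000  # assume $15 per million output tokens

    def session_cost(code_tokens, turns, new_tokens_per_turn=500):
        """Estimate the cost of iterating on a file that is re-sent as context every turn."""
        total = 0.0
        for _ in range(turns):
            total += code_tokens * PRICE_PER_INPUT_TOKEN           # the whole file is read again
            total += new_tokens_per_turn * PRICE_PER_OUTPUT_TOKEN  # new code is generated...
            code_tokens += new_tokens_per_turn                     # ...and joins the context next turn
        return total

    # A lean ~4k-token file vs. a bloated ~8k-token one, over a 20-turn debugging session:
    print(f"lean:    ${session_cost(4_000, turns=20):.2f}")
    print(f"bloated: ${session_cost(8_000, turns=20):.2f}")
    ```

    The dollar amounts here are toy values; the point is that the bloated version costs more on every single turn, and those turns add up fast over a long vibe-coding session.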
    As Upton Sinclair famously noted: “It is difficult to get a man to understand something when his salary depends on his not understanding it.” Similarly, it might be difficult for AI companies to prioritize code conciseness when their revenue depends on token count.

    The broader implications

    This pattern points to a more general concern in AI development: the alignment between how systems are monetized and how well they serve user needs. When charging by token count, there’s naturally less incentive to optimize for elegant, minimal solutions.

    Even “all you can eat” subscription plans (e.g. Claude’s “Max” subscription) don’t fully resolve this tension, as they typically come with usage caps or other limitations that maintain the underlying incentive structure.

    System instructions and verbosity trade-offs

    The perverse incentives in AI code generation point to a more fundamental issue that extends beyond coding assistants. When she was reading a draft of this, Louise pointed out some recent research from Giskard AI’s Phare benchmark that reveals a troubling pattern that mirrors our coding dilemma: demanding shorter responses jeopardizes the accuracy of the answers.

    According to their findings, instructions emphasizing conciseness (like “answer this question briefly”) significantly degraded factual reliability across most models tested — in some cases causing a 20% drop in hallucination resistance. When forced to be concise, models face an impossible choice between fabricating short but inaccurate answers or appearing unhelpful by rejecting the question entirely. The data shows models consistently prioritize brevity over accuracy when given these constraints.

    There’s clearly something going on where the more verbose the LLM is, the better it does. This actually makes sense given the discovery that chain-of-thought reasoning improves accuracy, but this issue has begun to feel like a real tradeoff when it comes to these almost-magical systems.

    We see this exact tension in code generation every day. When we optimize for conciseness and ask for problems to be solved in fewer steps, we often sacrifice quality. The difference is that in coding, the sacrifice manifests as over-engineered verbosity — the model produces more tokens to cover all possible edge cases rather than thinking deeply about the elegant core solution or the root-cause problem. In both cases, economic incentives (token optimization) work against quality outcomes (factual accuracy or elegant code).

    Just as Phare’s research suggests that seemingly innocent prompts like “be concise” can sabotage a model’s ability to debunk misinformation, our experience shows that standard prompting approaches can yield bloated, inefficient code. In both domains, the fundamental misalignment between token economics and quality outputs creates a persistent tension that users must actively manage.

    Some tricks to manage these perverse incentives

    While we wait for AI companies to better align their incentives with our need for elegant code, I’ve developed several strategies to counteract verbose code generation:

    1. Force planning before implementation
    I harass the LLM to write a detailed plan before generating any code. This forces the model to think through the architecture and approach, rather than diving straight into implementation details. Often, I find that a well-articulated plan leads to more concise code, as the model has already resolved the logical structure of the solution before writing a single line.

    2. Explicit permission protocol
    I’ve implemented a strict “ask before generating” protocol in my workflow. My personal CLAUDE.md file explicitly instructs Claude to request permission before writing any code. Infuriatingly, Claude Code regularly ignores this, likely due to its massive system prompt that talks so much about writing code it overrides my preferences. Enforcing this boundary and repeatedly belaboring it (“remember, don’t write any code”) helps prevent the automatic generation of unwanted, verbose solutions.

    3. Git-based experimentation with ruthless pruning
    Version control becomes essential when working with AI-generated code. I frequently benchmark code in git when I arrive at an “ok, it works as intended” moment. Creating experimental branches is also very helpful. Most importantly, I’m ready to throw out branches entirely when fixing them would require more work than starting from scratch. This willingness to abandon sunk costs is surprisingly important — it helps me work through problems and figure out the AI’s hangups while preventing the accumulation of bandaid solutions on top of fundamentally flawed approaches.

    4. Use a cheaper model
    Sometimes the simplest solution works best: using a smaller, cheaper model often results in more direct solutions. These models tend to generate less verbose code simply because they have limited context windows and processing capacity. While they might not handle extremely complex problems as well, for many day-to-day coding tasks, their constraints can actually produce more elegant solutions. For example, Claude 3.5 Haiku is currently 26% the price of Claude 3.7 ($0.80 vs. $3 per million input tokens). Also, Claude 3.7 seems to overengineer more frequently than Claude 3.5.

    Moving toward better alignment

    What might a better approach look like?
    - LLM coding agents could be evaluated and incentivized based on code quality metrics rather than just token counts. The challenge here is that this kind of metric is quite subjective.
    - Companies could offer pricing models that reward efficiency rather than verbosity.
    - LLM training should incorporate feedback mechanisms that specifically promote concise, elegant solutions via RLHF.
    - Companies could realize that overly verbose code generation is not good for their bottom line.

    This isn’t just about getting better AI — it’s about making sure that the economic incentives driving AI development align with what we actually value as developers: clean, maintainable, elegant code that solves problems at their root.

    Until then, don’t forget: brevity is the soul of wit, and machines have no soul.

    Thanks to Louise Macfadyen, Justin Kazmark and Bethany Crystal for reading and suggesting edits to a draft of this.

    PS: Yes, I used Claude to help write this post critiquing AI verbosity. There’s a delicious irony here: these systems will happily help you articulate why they might be ripping you off. Their willingness to steelman arguments against their own economic interests shows that the perverse incentives aren’t embedded in the models themselves, but in the business decisions surrounding them. In other words, don’t blame the AI — blame the humans optimizing the revenue models. The machines are just doing what they’re told, even when that includes explaining how they’re being told to do too much.

    The perverse incentives of Vibe Coding was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
    #perverse #incentives #vibe #coding
    The perverse incentives of Vibe Coding
    Image Credit: Chat GPT o3I’ve been using AI coding assistants like Claude Code for a while now, and I’m here to say, I may be an addict. And boy is this is an expensive habit.Its “almost there” quality — the feeling we’re just one prompt away from the perfect solution — is what makes it so addicting. Vibe coding operates on the principle of variable-ratio reinforcement, a powerful form of operant conditioning where rewards come unpredictably. Unlike fixed rewards, this intermittent success pattern, triggers stronger dopamine responses in our brain’s reward pathways, similar to gambling behaviors.What makes this especially effective with AI is the minimal effort required for potentially significant rewards — creating what neuroscientists call an “effort discounting” advantage. Combined with our innate completion bias — the drive to finish tasks we’ve started — this creates a compelling psychological loop that keeps us prompting.I don’t smoke, but don’t these bar graphs look like ciagrettes?Since Claude Code has been released, I have probably spent over vibe coding various projects into reality.But lets talk about the expense too, because I think there’s something bad there as well: coding agents, and especially Claude 3.7, tend to write too much code, a phenomenon that ends up costing users more than it should.Where an experienced developer might solve a problem with a few elegant lines with a thoughtful functional method, these AI systems often produce verbose, over-engineered solutions that tackle problems incrementally rather than addressing them at their core.My initial reaction was to attribute this to the relative immaturity of LLMs and their limitations when reasoning about abstract logic problems. Since these models are primarily trained to predict and generate text based on patterns they’ve seen before, it makes sense that they might struggle with the deeper architectural thinking that leads to elegant, minimal solutions.My human code on the left, Claude Code on the right implementing the same algorithmAnd indeed, the highly complex tasks I’ve handed to them have largely resulted in failure: implementing a minimax algorithm in a novel card game, crafting thoughtful animations in CSS, completely refactoring a codebase. The LLMs routinely get lost in the sauce when it comes to thinking through the high level principles required to solve difficult problems with computer science.In the example above, my human implemented version of minimax from 2018 totals 400 lines of code, whereas Claude Code’s version comes in at 627 lines. The LLM version also requires almost a dozen other library files. Granted, this version is in TypeScript and has a ton of extra bells and whistles, some of which I explicitly asked for, but the real problem is: it doesn’t actually work. Furthermore, using the LLM to debug it requires sending the bloated code back and forth to the API every time I want to holistically debug it.In an effort to impress the user and over-deliver, LLMs end up creating a rat’s nest of ultra-defensive code littered with debugging statements, neurotic comments and barely-useful helper funcitions. If you’ve ever worked in a highly functional production codebase, this is enough to drive you insane.I think everyone who spends any time vibe coding eventually discovers something like this and realizes that it’s much more worthwhile to work with a plan composed of discrete tasks that could be explained to a junior level developer vs. 
a feature-level project handed off to a staff engineer.There’s also the likelihood that the vast majority of code that LLMs have been trained on tends to be inelegant and overly verbose. Lord knows there’s a lot of AbstractJavaFinalSerializedFactory code out there.But I’m beginning to think the problem runs deeper, and it has to do with the economics of AI assistance.The economic incentive problemMany AI coding assistants, including Claude Code, charge based on token count — essentially the amount of text processed and generated. This creates what economists would call a “perverse incentive” — an incentive that produces behavior contrary to what’s actually desired.Let’s break down how this works:The AI generates verbose, procedural code for a given taskThis code becomes part of the context when you ask for further changes or additionsThe AI now has to readthis verbose code in every subsequent interactionMore tokens processed = more revenue for the company behind the AIThe LLM developers have no incentive to “fix” the verbose code problem because doing so will meaningfully impact their bottom lineAs Upton Sinclair famously noted: “It is difficult to get a man to understand something when his salary depends on his not understanding it.” Similarly, it might be difficult for AI companies to prioritize code conciseness when their revenue depends on token count.The broader implicationsThis pattern points to a more general concern in AI development: the alignment between how systems are monetized and how well they serve user needs. When charging by token count, there’s naturally less incentive to optimize for elegant, minimal solutions.Even “all you can eat” subscription plansdon’t fully resolve this tension, as they typically come with usage caps or other limitations that maintain the underlying incentive structure.System instructions and verbosity trade-offsThe perverse incentives in AI code generation point to a more fundamental issue that extends beyond coding assistants. When she was reading a draft of this, Louise pointed out some recent research from Giskard AI’s Phare benchmark that reveals a troubling pattern that mirrors our coding dilemma: demanding shorter responses jeopardizes the accuracy of the answers.According to their findings, instructions emphasizing concisenesssignificantly degraded factual reliability across most models tested — in some cases causing a 20% drop in hallucination resistance. When forced to be concise, models face an impossible choice between fabricating short but inaccurate answers or appearing unhelpful by rejecting the question entirely. The data shows models consistently prioritize brevity over accuracy when given these constraints.There’s clearly something going on where the more verbose the LLM is, the better it does. This actually makes sense given the discovery that chain-of-thought reasoning improves accuracy, but this issue has begun to feel like a real tradeoff when it comes to these almost-magical systems.We see this exact tension in code generation every day. When we optimize for conciseness and ask for the problems to be solved in fewer setps, we often sacrifice quality. The difference is that in coding, the sacrifice manifests as over-engineered verbosity — the model produces more tokens to cover all possible edge cases rather than thinking deeply about the elegant core solution or a root cause problem. 
In both cases, economic incentiveswork against quality outcomes.Just as Phare’s research suggests that seemingly innocent prompts like “be concise” can sabotage a model’s ability to debunk misinformation, our experience shows that standard prompting approaches can yield bloated, inefficient code. In both domains, the fundamental misalignment between token economics and quality outputs creates a persistent tension that users must actively manage.Some tricks to manage these perverse incentivesWhile we wait for AI companies to better align their incentives with our need for elegant code, I’ve developed several strategies to counteract verbose code generation:1. Force planning before implementationI harass the LLM to write a detailed plan before generating any code. This forces the model to think through the architecture and approach, rather than diving straight into implementation details. Often, I find that a well-articulated plan leads to more concise code, as the model has already resolved the logical structure of the solution before writing a single line.2. Explicit permission protocolI’ve implemented a strict “ask before generating” protocol in my workflow. My personal CLAUDE.md file explicitly instructs Claude to request permission before writing any code. Infuriatingly, Claude Code regularly ignores this, likely due to its massive system prompt that talks so much about writing code it overrides my preferences. Enforcing this boundary and repeatedly belaboring ithelps prevent the automatic generation of unwanted, verbose solutions.3. Git-based experimentation with ruthless pruningVersion control becomes essential when working with AI-generated code. I frequently benchmark code in git when I arrive at an “ok it works as intended” moment. Creating experimental branches is also very helpful. Most importantly, I’m ready to throw out branches entirely when fixing them would require more work than starting from scratch. This willingness to abandon sunk costs is surprisingly important — it helps me work through problems and figure out the AI’s hangups while preventing the accumulation of bandaid solutions on top of fundamentally flawed approaches.4. Use a cheaper modelSometimes the simplest solution works best: using a smaller, cheaper model often results in more direct solutions. These models tend to generate less verbose code simply because they have limited context windows and processing capacity. While they might not handle extremely complex problems as well, for many day-to-day coding tasks, their constraints can actually produce more elegant solutions. For example, Claude 3.5 Haiku is currently 26% the price of Claude 3.7. Also, Claude 3.7 seems to overengineer more frequently than Claude 3.5.Moving toward better alignmentWhat might a better approach look like?LLM coding agents could evaluated and incentivized based on code quality metrics rather than just token counts. 
The challenge here is that this kind of metric is quite subjective.Companies could offer pricing models that reward efficiency rather than verbosityLLMs training should incorporate feedback mechanisms that specifically promote concise, elegant solutions via RLHFCompanies realize that overly verbose code generation is not good for their bottom lineThis isn’t just about getting better AI — it’s about making sure that the economic incentives driving AI development align with what we actually value as developers: clean, maintainable, elegant code that solves problems at their root.Until then, don’t forget: brevity is the soul of wit, and machines have no soul.Thanks to Louise Macfadyen, Justin Kazmark and Bethany Crystal for reading and suggesting edits to a draft of this.— -PS: Yes, I used Claude to help write this post critiquing AI verbosity. There’s a delicious irony here: these systems will happily help you articulate why they might be ripping you off. Their willingness to steelman arguments against their own economic interests shows that the perverse incentives aren’t embedded in the models themselves, but in the business decisions surrounding them. In other words, don’t blame the AI — blame the humans optimizing the revenue models. The machines are just doing what they’re told, even when that includes explaining how they’re being told to do too much.The perverse incentives of Vibe Coding was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story. #perverse #incentives #vibe #coding
    UXDESIGN.CC
    The perverse incentives of Vibe Coding
    Image Credit: Chat GPT o3I’ve been using AI coding assistants like Claude Code for a while now, and I’m here to say (with all due respect to people who have substance abuse issues), I may be an addict. And boy is this is an expensive habit.Its “almost there” quality — the feeling we’re just one prompt away from the perfect solution — is what makes it so addicting. Vibe coding operates on the principle of variable-ratio reinforcement, a powerful form of operant conditioning where rewards come unpredictably. Unlike fixed rewards, this intermittent success pattern (“the code works! it’s brilliant! it just broke! wtf!”), triggers stronger dopamine responses in our brain’s reward pathways, similar to gambling behaviors.What makes this especially effective with AI is the minimal effort required for potentially significant rewards — creating what neuroscientists call an “effort discounting” advantage. Combined with our innate completion bias — the drive to finish tasks we’ve started — this creates a compelling psychological loop that keeps us prompting.I don’t smoke, but don’t these bar graphs look like ciagrettes?Since Claude Code has been released, I have probably spent over $1,000 vibe coding various projects into reality (some of which I hope to announce soon, don’t worry).But lets talk about the expense too, because I think there’s something bad there as well: coding agents, and especially Claude 3.7 (the backend of Claude Code), tend to write too much code, a phenomenon that ends up costing users more than it should.Where an experienced developer might solve a problem with a few elegant lines with a thoughtful functional method, these AI systems often produce verbose, over-engineered solutions that tackle problems incrementally rather than addressing them at their core.My initial reaction was to attribute this to the relative immaturity of LLMs and their limitations when reasoning about abstract logic problems. Since these models are primarily trained to predict and generate text based on patterns they’ve seen before, it makes sense that they might struggle with the deeper architectural thinking that leads to elegant, minimal solutions.My human code on the left, Claude Code on the right implementing the same algorithmAnd indeed, the highly complex tasks I’ve handed to them have largely resulted in failure: implementing a minimax algorithm in a novel card game, crafting thoughtful animations in CSS, completely refactoring a codebase. The LLMs routinely get lost in the sauce when it comes to thinking through the high level principles required to solve difficult problems with computer science.In the example above, my human implemented version of minimax from 2018 totals 400 lines of code, whereas Claude Code’s version comes in at 627 lines. The LLM version also requires almost a dozen other library files. Granted, this version is in TypeScript and has a ton of extra bells and whistles, some of which I explicitly asked for, but the real problem is: it doesn’t actually work. Furthermore, using the LLM to debug it requires sending the bloated code back and forth to the API every time I want to holistically debug it.In an effort to impress the user and over-deliver, LLMs end up creating a rat’s nest of ultra-defensive code littered with debugging statements, neurotic comments and barely-useful helper funcitions. 
I think everyone who spends any time vibe coding eventually discovers something like this, and realizes that it’s much more worthwhile to work from a plan composed of discrete tasks that could be explained to a junior-level developer than to hand the model a feature-level project as if it were a staff engineer.

There’s also the likelihood that the vast majority of code that LLMs have been trained on tends to be inelegant and overly verbose. Lord knows there’s a lot of AbstractJavaFinalSerializedFactory code out there.

But I’m beginning to think the problem runs deeper, and it has to do with the economics of AI assistance.

The economic incentive problem

Many AI coding assistants, including Claude Code, charge based on token count — essentially the amount of text processed and generated. This creates what economists would call a “perverse incentive” — an incentive that produces behavior contrary to what’s actually desired.

Let’s break down how this works:

1. The AI generates verbose, procedural code for a given task.
2. This code becomes part of the context when you ask for further changes or additions (this is key).
3. The AI now has to read (and you pay for) this verbose code in every subsequent interaction.
4. More tokens processed = more revenue for the company behind the AI.
5. The LLM developers have no incentive to “fix” the verbose code problem, because doing so would meaningfully impact their bottom line.

As Upton Sinclair famously noted: “It is difficult to get a man to understand something when his salary depends on his not understanding it.” Similarly, it might be difficult for AI companies to prioritize code conciseness when their revenue depends on token count.

The broader implications

This pattern points to a more general concern in AI development: the alignment between how systems are monetized and how well they serve user needs. When charging by token count, there’s naturally less incentive to optimize for elegant, minimal solutions.

Even “all you can eat” subscription plans (e.g. Claude’s “Max” subscription) don’t fully resolve this tension, as they typically come with usage caps or other limitations that maintain the underlying incentive structure.

System instructions and verbosity trade-offs

The perverse incentives in AI code generation point to a more fundamental issue that extends beyond coding assistants. When she was reading a draft of this, Louise pointed out some recent research from Giskard AI’s Phare benchmark that reveals a troubling pattern that mirrors our coding dilemma: demanding shorter responses jeopardizes the accuracy of the answers.

According to their findings, instructions emphasizing conciseness (like “answer this question briefly”) significantly degraded factual reliability across most models tested — in some cases causing a 20% drop in hallucination resistance. When forced to be concise, models face an impossible choice between fabricating short but inaccurate answers or appearing unhelpful by rejecting the question entirely. The data shows models consistently prioritize brevity over accuracy when given these constraints.

There’s clearly something going on where the more verbose the LLM is, the better it does. This actually makes sense given the discovery that chain-of-thought reasoning improves accuracy, but it has begun to feel like a real tradeoff when it comes to these almost-magical systems.
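To make the compounding in that breakdown concrete, here is a rough back-of-the-envelope sketch. The per-token price and token counts below are illustrative assumptions, not real Anthropic pricing or measured sessions; the only point is that once verbose code is in the context, you pay to re-read it on every subsequent turn.

```python
# Crude model of input-side session cost when generated code is carried in
# context on every later turn. All numbers are illustrative assumptions,
# not real pricing or measured token counts; output tokens are ignored.

PRICE_PER_INPUT_TOKEN = 3.00 / 1_000_000  # assume roughly $3 per million input tokens

def session_cost(code_tokens_per_turn: int, turns: int, prompt_tokens: int = 500) -> float:
    cost = 0.0
    context = prompt_tokens
    for _ in range(turns):
        cost += context * PRICE_PER_INPUT_TOKEN  # pay to (re)read everything so far
        context += code_tokens_per_turn          # this turn's code joins the context
    return cost

concise = session_cost(code_tokens_per_turn=200, turns=20)  # tight, minimal diffs
verbose = session_cost(code_tokens_per_turn=600, turns=20)  # defensive, bloated diffs
print(f"concise: ${concise:.2f}  verbose: ${verbose:.2f}  premium: {verbose / concise:.1f}x")
```

Real sessions involve output pricing, prompt caching and tool calls that change the exact numbers, but the shape holds: verbosity is billed again on every round trip.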
We see this exact tension in code generation every day. When we optimize for conciseness and ask for problems to be solved in fewer steps, we often sacrifice quality. The difference is that in coding, the sacrifice manifests as over-engineered verbosity — the model produces more tokens to cover all possible edge cases rather than thinking deeply about the elegant core solution or the root-cause problem. In both cases, economic incentives (token optimization) work against quality outcomes (factual accuracy or elegant code).

Just as Phare’s research suggests that seemingly innocent prompts like “be concise” can sabotage a model’s ability to debunk misinformation, our experience shows that standard prompting approaches can yield bloated, inefficient code. In both domains, the fundamental misalignment between token economics and quality outputs creates a persistent tension that users must actively manage.

Some tricks to manage these perverse incentives

While we wait for AI companies to better align their incentives with our need for elegant code, I’ve developed several strategies to counteract verbose code generation:

1. Force planning before implementation

I harass the LLM to write a detailed plan before generating any code. This forces the model to think through the architecture and approach, rather than diving straight into implementation details. Often, I find that a well-articulated plan leads to more concise code, as the model has already resolved the logical structure of the solution before writing a single line.

2. Explicit permission protocol

I’ve implemented a strict “ask before generating” protocol in my workflow. My personal CLAUDE.md file explicitly instructs Claude to request permission before writing any code. Infuriatingly, Claude Code regularly ignores this, likely because its massive system prompt talks so much about writing code that it overrides my preferences. Enforcing this boundary and repeatedly belaboring it (“remember, don’t write any code”) helps prevent the automatic generation of unwanted, verbose solutions.

3. Git-based experimentation with ruthless pruning

Version control becomes essential when working with AI-generated code. I frequently checkpoint code in git when I arrive at an “ok, it works as intended” moment. Creating experimental branches is also very helpful. Most importantly, I’m ready to throw out branches entirely when fixing them would require more work than starting from scratch. This willingness to abandon sunk costs is surprisingly important — it helps me work through problems and figure out the AI’s hangups while preventing the accumulation of band-aid solutions on top of fundamentally flawed approaches.

4. Use a cheaper model

Sometimes the simplest solution works best: using a smaller, cheaper model often results in more direct solutions. These models tend to generate less verbose code simply because they have limited context windows and processing capacity. While they might not handle extremely complex problems as well, for many day-to-day coding tasks their constraints can actually produce more elegant solutions. For example, Claude 3.5 Haiku is currently about 26% the price of Claude 3.7 ($0.80 vs. $3 per million input tokens). Also, Claude 3.7 seems to over-engineer more frequently than Claude 3.5.
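As a quick sanity check on that last point, here is the arithmetic using the two per-million-token input prices quoted above; the token volume is an arbitrary example, and real sessions also pay (higher) output-token rates.

```python
# Price ratio and an example cost using the input prices mentioned above
# ($0.80 vs $3.00 per million input tokens). The token volume is an
# arbitrary example, not a measured workload.

HAIKU_PER_TOKEN = 0.80 / 1_000_000    # Claude 3.5 Haiku, input
SONNET_PER_TOKEN = 3.00 / 1_000_000   # Claude 3.7 (Sonnet), input

tokens = 250_000  # say, a few days of modest vibe-coding context

print(f"price ratio: {HAIKU_PER_TOKEN / SONNET_PER_TOKEN:.1%}")  # ~26.7%
print(f"Haiku:  ${tokens * HAIKU_PER_TOKEN:.2f}")                # $0.20
print(f"Sonnet: ${tokens * SONNET_PER_TOKEN:.2f}")               # $0.75
```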
Moving toward better alignment

What might a better approach look like?

- LLM coding agents could be evaluated and incentivized based on code-quality metrics rather than just token counts. The challenge here is that this kind of metric is quite subjective.
- Companies could offer pricing models that reward efficiency rather than verbosity (I have no idea how this would work; this was Claude’s dumb idea).
- LLM training should incorporate feedback mechanisms that specifically promote concise, elegant solutions via RLHF (e.g. showing developers multiple versions of the same code and having them pick the optimal one — perhaps this is already happening).
- Companies could realize that overly verbose code generation is not good for their bottom line either (e.g. Sam Altman admitted that users saying “please” and “thank you” to ChatGPT is costing OpenAI millions of dollars).

This isn’t just about getting better AI — it’s about making sure that the economic incentives driving AI development align with what we actually value as developers: clean, maintainable, elegant code that solves problems at their root.

Until then, don’t forget: brevity is the soul of wit, and machines have no soul.

Thanks to Louise Macfadyen, Justin Kazmark and Bethany Crystal for reading and suggesting edits to a draft of this.

PS: Yes, I used Claude to help write this post critiquing AI verbosity. There’s a delicious irony here: these systems will happily help you articulate why they might be ripping you off. Their willingness to steelman arguments against their own economic interests shows that the perverse incentives aren’t embedded in the models themselves, but in the business decisions surrounding them. In other words, don’t blame the AI — blame the humans optimizing the revenue models. The machines are just doing what they’re told, even when that includes explaining how they’re being told to do too much.

The perverse incentives of Vibe Coding was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • New images of Acme’s £1bn Liverpool Street station plans as City publishes planning application

    Documents reveal team tested options including building over station’s entire listed train shed

    New image of Acme's proposals for the station's main entrance


    The City of London has published the planning application for Network Rail’s proposed redevelopment of Liverpool Street station, revealing new images of how the scheme could look when built.
    The £1bn Acme-designed scheme was submitted at the beginning of April and has been validated by Square Mile planners in the space of just six weeks.
    A highly controversial previous version designed by Herzog & de Meuron for Network Rail and its former development partner Sellar, which has since been dropped, took more than six months to appear on the City’s planning portal.
    More than 20 previously unseen images of the new proposals, set to be one of the largest schemes in London, have been unveiled along with new details about how the scheme would be built.
    Acme’s plans would see a 21-storey office block built above the 1980s extension to the grade II-listed station’s train shed and a set of new vaulted gothic entrances built to replace the building’s existing gateways.
    These entrances would be faced predominantly with yellow stock bricks, the same type used for the original 1875 station building and its 20th century extension, with bricks from parts of the extension set to be demolished to be reused in the new entrances.
    Acme has also proposed incorporating amber-tinted glass bricks, which will be “speckled” in the upper parts of the 18m-high entrance vaults and concentrated at the top of the concave areas of the arches between the ribs.
    Network Rail said the glass bricks will “serve as one of the contemporary subversions of an otherwise historic typology”, adding a “crystalline light scatter of the material [to] mark the station’s thresholds as spaces of architectural interest”.
    The application documents also reveal Network Rail had considered building over the entire listed train shed roof prior to opting for a limited development over the 1980s concourse area.
    Options tested included a single block facing Exchange Square at the northern end of the train shed, three blocks spaced over the length of the train shed, elongated blocks running along either side and a large block containing multiple light wells which would have sprawled over the full extent of the station.

    Options for the over-station development tested by Network Rail prior to the selection of the current proposal for a building above the concourse
    A further option to build a tower scheme over the existing Metropolitan Arcade opposite the main station building on Liverpool Street was also considered but was ruled out due to ownership issues and below ground constraints of the Circle and Elizabeth Lines.
    Network Rail initially favoured an over-station development facing Exchange Square, but this option was scrapped because of its impact on train services, its engineering complexity and the difficulty of creating viable entrances.
    The preferred development above the concourse was identified as the most viable, although it will not include building above the grade II*-listed former Great Eastern Hotel which had been one of the most controversial aspects of Herzog & de Meuron’s proposals for the site.
    The plans confirm Network Rail’s pledge last year to take a more “heritage-led” approach to the redevelopment compared to the previous scheme, which had proposed interventions in a strikingly different design to the 19th century station.
    That scheme was abandoned last year, and development partner Sellar was dropped, after the application amassed more than 2,000 objections from members of the public and criticism from heritage groups including Historic England.

    A selection of Acme’s early design concepts for the station entrances
    Network Rail’s property arm, Network Rail Property, is now leading the redevelopment and has sought closer collaboration with heritage groups on the design, although the Victorian Society, which led the campaign against the previous proposals, is still objecting to the new designs and has described the planned over-station office tower as “perverse”.
    The office component is being used to fund improvements to the rest of the station, which is currently the UK’s busiest, with around 118 million people a year crossing its concourse; annual passenger numbers are expected to hit 158 million by 2041.
    Network Rail said the redevelopment, which will significantly enlarge the building’s concourse, will enable the station to serve more than 200 million passengers a year.
    It also aims to turn the station into a “destination in its own right” with new retail, leisure and workspace, aligning with the City of London’s Destination City ambition to diversify its economy.
    The project team includes Aecom on engineering and transport, Certo as project manager, Newmark, previously known as Gerald Eve, on planning, Gleeds as cost manager, Donald Insall Associates on heritage and townscape, GIA on daylight and sunlight and SLA as landscape architect.
    WWW.BDONLINE.CO.UK