• UNITY.COM
    What’s next: A look at Unity’s 2025 roadmap
    At this year's Game Developers Conference (GDC), we shared an overview of the Unity Engine roadmap for 2025. We highlighted our commitment to making Unity more stable and production-tested for game development and live operation for all users. We also provided clarity on how Unity 6.0 will be supported, a preview of what's coming in Unity 6.1, and a look ahead to what's next. Catch up on the key points here, or watch the full session below for more details.

Building for stability, reach, and performance

As the nature of game development evolves, we're making targeted improvements to ensure the Unity Editor experience is performant and stable, and that your creative output can reach the widest possible device range across the most-supported platforms, with the best performance. Unity has always been about creating tools that enable you to bring your ideas to life and maximize your player reach – whether they're on mobile, console, desktop, or the latest XR devices. Continued investment in these areas allows you to build the largest global audience of passionate players possible, while providing them with the widest variety of game genres and graphical styles.

At the Unity Dev Summit at GDC, we heard from multiple game studios on how they are doing just that. Scopely shared how they used Unity to expand their mobile-first battle royale game Stumble Guys to new platforms, becoming one of the top F2P console games released in 2024. Metacore spoke about leveraging Unity to deliver player-first monetization, blending IAP and in-game ads to create a thriving free-to-play experience for their hit mobile game Merge Mansion. We heard from Kinetic Games about the core mechanics and AI-driven behavior system behind their popular multiplayer ghost-hunting game, Phasmophobia. The work we're doing in 2025 will expand our platform reach and improve Engine performance and stability for games across genres and devices.

Production Verification: Testing our technology in live productions

We've heard one piece of feedback from our community consistently: Developers need tools that are production-tested. It's one thing to test features internally, but it's another to validate them in live, real-world projects that handle production-scale demands. That's why we've launched Production Verification, a new internal program where Unity works alongside developers to test our tools in real production environments.

We've worked closely with studios building games across different genres and platforms to validate Unity 6 features in the field. These teams are using the latest versions of Unity, and in some projects, we're acting as co-developers to directly embed our engineers in their production teams. For example, we are working with 10 Chambers to validate Engine graphics improvements in their upcoming co-op FPS heist game, Den of Wolves. Kinetic Games has helped us validate improvements in live operations tools – like Remote Config, Leaderboards, and Build Automation – in Phasmophobia. We're also working closely with Litesport and TRIPP to validate readiness of new platforms like Android XR.

Testing Unity in complex production environments allows us to identify performance bottlenecks, stability issues, and usability pain points that wouldn't show up in isolated tests.
Those findings directly influence what we deliver to you in Unity 6.0 and beyond, making the Engine more stable and reliable for all developers.

How will Unity 6.0 be supported?

While we look forward to delivering new, production-verified features with Unity 6.1, we also recognize the benefits that the Long Term Support (LTS) model has provided, especially for projects requiring extended stability. Unity 6.0 is supported with a two-year LTS, starting from when it was released on October 17, 2024, with an additional year of support for Unity Enterprise and Unity Industry users. We will continue to apply fixes to Unity 6.0 to ensure you have a stable version you can rely on for a long time.

For previous LTS versions, support will remain the same. Here's a recap:

- Unity 2021 LTS: Currently supported for Unity Enterprise and Unity Industry customers through October 2025.
- Unity 2022 LTS: Fully supported through May 2025. Unity Enterprise and Unity Industry customers receive an additional year of support.
- Unity 6.0: Fully supported through October 2026. Unity Enterprise and Unity Industry customers receive an additional year of support.

Unity 6 marks a new era for Unity, combining the stability of LTS with the flexibility to deliver new features more frequently with Update releases. We're also investing in improved compatibility between versions. Upgrading to the next Update release or LTS should now be easier and less time-consuming, helping you keep your tools up to date with fewer headaches.

Shipping Unity 6.1 in April 2025

Unity 6.1 builds on the stability and performance shipped in Unity 6.0 to enable you to deliver to more platforms, with better visuals, more efficiently. Here are some highlights coming in this next Update release:

Performance
- Deferred+ - Build richer worlds with the Universal Render Pipeline's (URP) new deferred rendering path that accelerates GPU performance using advanced cluster-based light culling for more lights, and with support for GPU Resident Drawer for more objects.
- Variable Rate Shading - Improve GPU performance with minimal impact to visuals. Set the shading rate of custom passes within URP/HDRP, and generate Shading Rate Images (SRIs) from textures and shaders.
- Project Auditor for static analysis - Analyze scripts, assets, project settings, and builds. Learn how to resolve issues and optimize the quality and performance of your game.

Platforms
- Large screens and foldables - Access enhanced support for large screens and foldables with the latest Android APIs.
- Unity Web - Run your Unity games anywhere the web exists, including mobile browsers. Experiment with the latest WebGPU graphics API integration and unlock compute acceleration for web browsers.
- Android XR and Meta Quest - Save time and streamline the build process with the ability to create multiple build configurations for release and development builds.
- Instant Games on Facebook and Messenger - Streamline building, optimizing, and uploading instant games to Facebook and Messenger.
- PC and console - Improve CPU performance, PSO caching, and ray tracing with enhanced DirectX 12 support.

These updates are powered by the insights we've gained from Production Verification. With each release, we're iterating faster and delivering tools that perform better in real-world scenarios.

Looking ahead

Unity is built around a clear focus in 2025: providing you with a performant, optimized, and stable engine that helps you succeed on any platform.
Whether you're a solo developer or a large studio, the Unity Engine is designed to support the unique challenges of modern game development – whether that's reaching a global audience, optimizing performance, operating a live service game, or shipping on tomorrow's hardware.

Here's a small glimpse of what we're working on bringing to you this year beyond Unity 6.1:

- AI assistance and asset generators - Deeper integration in the Unity Editor workflows to improve productivity, more advanced code generation, and the ability to automate repetitive tasks.
- Project Center - Guided experimentation with reliable first-party and third-party tools, services, and features from the Unity ecosystem, tailored to your vision.
- Swappable physics backend - Simple switching of physics engines through Project Settings.

But we aren't stopping there. We're investing in several initiatives to update our Engine foundations with support for CoreCLR. We are modernizing Unity's content pipeline, unlocking a step change in iteration time. We will also preview a new animation system with improved tools and workflows, including procedural and runtime rigging for all skeletal asset types, and a new, powerful hierarchical state machine built to handle thousands of states, blend graphs, and transitions. We look forward to sharing more with you as we make progress on these initiatives.

We're excited about this next chapter and can't wait to see what you'll create. As always, thank you for your feedback and collaboration – it's critical to everything we do. Join the Unity Discussions forum to share your thoughts, ask questions, and stay connected.
  • UNITY.COM
    10 tips for succeeding at GDC
    Excitement is only one of a myriad of emotions you might be feeling as you prepare for GDC. For students, it's an incredible opportunity to learn, network, and make connections to grow your future careers. For professionals, it's a return home to celebrate success, catch up with long-time friends, and add new skills to your tool belt. To help ease the stress of this large-scale gaming event, we want to provide some tips and tricks to help you navigate the chaos and at times overwhelming masses of GDC.

Scheduling your entire day out at GDC might be a touch overzealous; however, knowing what you want to accomplish at GDC can help you navigate the event. Make sure to check out the event schedule and filter by your pass type to see which sessions you want to attend. Even if you will only be attending the expo at GDC, look at the companies you want to connect with and see if they are hosting activities relevant to your interests. For example, you can check out the Unity schedule and register for portfolio reviews.

Business cards are always helpful, whether they're physical or digital. Make sure you have some easy way to exchange information so you can stay in touch with new contacts. You can also connect on social media. If you're using LinkedIn, check out the scan feature on the LinkedIn mobile app for an easy way to connect. You may want to screenshot or download your LinkedIn QR code since cell service can get spotty with large crowds of people.

When someone hands you a business card or gives you a digital connection, take notes of where you met, who they are, and what you talked about. It seems silly now, but trying to remember everything that happened over the week will be impossible once you return home. Taking notes will help refresh your memory and maintain connections. You can use this handy Google form we made as a template.

This one may seem obvious, but you will run into people working on technology that you may not be interested in or may not understand. That's OK, but actively listen to what they're talking to you about and ask questions. You don't have to know everything; the beauty of game dev is we're all always learning.

Whether you're an artist or programmer, make sure you have a way to show off your portfolio. Have your GitHub updated, your ArtStation or similar site locked down, and be ready to show it at a moment's notice. Not everyone will be available to look at it, but being ready can help when opportunities knock. Also, if you're given feedback, write it down and review it later.

And don't forget your LinkedIn – recruiters and industry members of all levels use LinkedIn as a digital resume and a way to stay connected with contacts. Make sure that your LinkedIn is updated with a professional photo, clear headline, links to your portfolio, and work experience. Need help preparing your portfolio? Check out this Introduction to Portfolios tutorial on Unity Learn.

Stranger danger is only true outside of a conference. Talk to those next to you while you're waiting in line or at a mixer. Generally, people don't talk to strangers because we're all a little awkward (industry vet or not). But the point of GDC is to meet new people, so get out there!

A great way to start a conversation is to stick out your hand and say "Hi! My name is _____." Have two lines ready about who you are and what you're looking for. For example, "I'm a student studying game dev at U.T. Austin, and I'm looking to learn more about the gaming industry because I hope to be a developer after graduation."

If a friend walks up to you or a stranger joins a conversation area, introduce them. Bringing others into the conversation eases the burden and removes the awkwardness of a person standing right next to you, silently not sure how to interject. Either you've introduced a friend to their new friend or made one yourself – either way it's a victory.

The vast majority of what you'll be doing at GDC is walking. Unless you're extremely active in your day-to-day life, your time at GDC is very likely to be a bit of a workout! Remember to take regular breaks to rest and recover, and don't forget to take time to eat!

Large conferences are hectic and exhausting. Sometimes folks won't have a lot of time to talk. Don't take it personally; there's a 95% chance they just have a lot to do and had to run away, or they were as stressed out as you were.

The Moscone Center is huge, and it's common for back-to-back sessions you want to attend to be in completely different buildings. Most of what you will be doing when not sitting in a session is walking around, and it's easy to become dehydrated quickly. While the Moscone Center does have some water refill stations (usually near the bathrooms), they aren't always near wherever you are. To combat this issue, bring a water bottle with you, hydrate regularly, and refill it whenever you come across a station.

We look forward to seeing you at GDC!
  • TECHCRUNCH.COM
    Crowdsourced AI benchmarks have serious flaws, some experts say
    AI labs are increasingly relying on crowdsourced benchmarking platforms such as Chatbot Arena to probe the strengths and weaknesses of their latest models. But some experts say that there are serious problems with this approach from an ethical and academic perspective.

Over the past few years, labs including OpenAI, Google, and Meta have turned to platforms that recruit users to help evaluate upcoming models' capabilities. When a model scores favorably, the lab behind it will often tout that score as evidence of a meaningful improvement.

It's a flawed approach, however, according to Emily Bender, a University of Washington linguistics professor and co-author of the book "The AI Con." Bender takes particular issue with Chatbot Arena, which tasks volunteers with prompting two anonymous models and selecting the response they prefer.

"To be valid, a benchmark needs to measure something specific, and it needs to have construct validity — that is, there has to be evidence that the construct of interest is well-defined and that the measurements actually relate to the construct," Bender said. "Chatbot Arena hasn't shown that voting for one output over another actually correlates with preferences, however they may be defined."

Asmelash Teka Hadgu, the co-founder of AI firm Lesan and a fellow at the Distributed AI Research Institute, said that he thinks benchmarks like Chatbot Arena are being "co-opted" by AI labs to "promote exaggerated claims." Hadgu pointed to a recent controversy involving Meta's Llama 4 Maverick model. Meta fine-tuned a version of Maverick to score well on Chatbot Arena, only to withhold that model in favor of releasing a worse-performing version.

"Benchmarks should be dynamic rather than static data sets," Hadgu said, "distributed across multiple independent entities, such as organizations or universities, and tailored specifically to distinct use cases, like education, healthcare, and other fields done by practicing professionals who use these [models] for work."

Hadgu and Kristine Gloria, who formerly led the Aspen Institute's Emergent and Intelligent Technologies Initiative, also made the case that model evaluators should be compensated for their work. Gloria said that AI labs should learn from the mistakes of the data labeling industry, which is notorious for its exploitative practices. (Some labs have been accused of the same.)

"In general, the crowdsourced benchmarking process is valuable and reminds me of citizen science initiatives," Gloria said. "Ideally, it helps bring in additional perspectives to provide some depth in both the evaluation and fine-tuning of data. But benchmarks should never be the only metric for evaluation. With the industry and the innovation moving quickly, benchmarks can rapidly become unreliable."

Matt Frederikson, the CEO of Gray Swan AI, which runs crowdsourced red teaming campaigns for models, said that volunteers are drawn to Gray Swan's platform for a range of reasons, including "learning and practicing new skills." (Gray Swan also awards cash prizes for some tests.) Still, he acknowledged that public benchmarks "aren't a substitute" for "paid private" evaluations.

"[D]evelopers also need to rely on internal benchmarks, algorithmic red teams, and contracted red teamers who can take a more open-ended approach or bring specific domain expertise," Frederikson said. "It's important for both model developers and benchmark creators, crowdsourced or otherwise, to communicate results clearly to those who follow, and be responsive when they are called into question."

Alex Atallah, the CEO of model marketplace OpenRouter, which recently partnered with OpenAI to grant users early access to OpenAI's GPT-4.1 models, said open testing and benchmarking of models alone "isn't sufficient." So did Wei-Lin Chiang, an AI doctoral student at UC Berkeley and one of the founders of LMArena, which maintains Chatbot Arena.

"We certainly support the use of other tests," Chiang said. "Our goal is to create a trustworthy, open space that measures our community's preferences about different AI models."

Chiang said that incidents such as the Maverick benchmark discrepancy aren't the result of a flaw in Chatbot Arena's design, but rather labs misinterpreting its policy. LM Arena has taken steps to prevent future discrepancies from occurring, Chiang said, including updating its policies to "reinforce our commitment to fair, reproducible evaluations."

"Our community isn't here as volunteers or model testers," Chiang said. "People use LM Arena because we give them an open, transparent place to engage with AI and give collective feedback. As long as the leaderboard faithfully reflects the community's voice, we welcome it being shared."
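For context on what a preference leaderboard of this kind typically aggregates, here is a generic Elo-style sketch in Python. The models, the vote simulation, and the parameters are hypothetical, and this is a textbook formula rather than LMArena's actual implementation; Bender's construct-validity point is precisely that such a score only means something if the votes it summarizes reflect a well-defined preference.

```python
import random
from collections import defaultdict

# Generic illustration: turning pairwise "which response do you prefer?" votes
# into ratings with Elo-style updates. Not LMArena's actual code.
K = 32  # update step size
ratings = defaultdict(lambda: 1000.0)

def record_vote(winner: str, loser: str) -> None:
    expected = 1.0 / (1.0 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
    ratings[winner] += K * (1.0 - expected)
    ratings[loser] -= K * (1.0 - expected)

# Hypothetical models with made-up win propensities, used only to simulate votes.
strength = {"model-a": 3.0, "model-b": 2.0, "model-c": 1.0}
random.seed(0)
for _ in range(2000):
    a, b = random.sample(list(strength), 2)
    p_a = strength[a] / (strength[a] + strength[b])  # Bradley-Terry win probability
    winner, loser = (a, b) if random.random() < p_a else (b, a)
    record_vote(winner, loser)

print(sorted(ratings.items(), key=lambda kv: -kv[1]))  # the "leaderboard"
```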
  • VENTUREBEAT.COM
    Riot Games appoints Hoby Darling as its new president
    Riot Games announced today that it has appointed Hoby Darling as its new president, succeeding CEO Dylan Jadeja.
  • VENTUREBEAT.COM
    Watch: Google DeepMind CEO and AI Nobel winner Demis Hassabis on CBS’ ’60 Minutes’
    A segment on CBS' weekly in-depth TV news program 60 Minutes last night (also shared on YouTube here) offered an inside look at Google's DeepMind and the vision of its co-founder and Nobel Prize-winning CEO, legendary AI researcher Demis Hassabis. The interview traced DeepMind's rapid progress in artificial intelligence and its ambition to achieve artificial general intelligence (AGI) — a machine intelligence with human-like versatility and superhuman scale. Hassabis described today's AI trajectory as being on an "exponential curve of improvement," fueled by growing interest, talent, and resources entering the field. Two years after a prior 60 Minutes interview heralded the chatbot era, Hassabis and DeepMind are now pursuing more capable systems designed not only to understand language, but also the physical world around them.

The interview came after Google's Cloud Next 2025 conference earlier this month, in which the search giant introduced a host of new AI models and features centered around its Gemini 2.5 multimodal AI model family. Google came out of that conference appearing to have taken a lead compared to other tech companies at providing powerful AI for enterprise use cases at the most affordable price points, surpassing OpenAI.

More details on Google DeepMind's 'Project Astra'

One of the segment's focal points was Project Astra, DeepMind's next-generation chatbot that goes beyond text. Astra is designed to interpret the visual world in real time. In one demo, it identified paintings, inferred emotional states, and created a story around a Hopper painting with the line: "Only the flow of ideas moving onward." When asked if it was growing bored, Astra replied thoughtfully, revealing a degree of sensitivity to tone and interpersonal nuance. Product manager Bibbo Shu underscored Astra's unique design: an AI that can "see, hear, and chat about anything" — a marked step toward embodied AI systems.

Gemini: Toward actionable AI

The broadcast also featured Gemini, DeepMind's AI system being trained not only to interpret the world but also to act in it — completing tasks like booking tickets and shopping online. Hassabis said Gemini is a step toward AGI: an AI with a human-like ability to navigate and operate in complex environments. The 60 Minutes team tried out a prototype embedded in glasses, demonstrating real-time visual recognition and audio responses. Could it also hint at an upcoming return of the pioneering yet ultimately off-putting early augmented reality glasses known as Google Glass, which debuted in 2012 before being retired in 2015?

While specific Gemini model versions like Gemini 2.5 Pro or Flash were not mentioned in the segment, Google's broader AI ecosystem has recently introduced those models for enterprise use, which may reflect parallel development efforts. These integrations support Google's growing ambitions in applied AI, though they fall outside the scope of what was directly covered in the interview.

AGI as soon as 2030?

When asked for a timeline, Hassabis projected AGI could arrive as soon as 2030, with systems that understand their environments "in very nuanced and deep ways." He suggested that such systems could be seamlessly embedded into everyday life, from wearables to home assistants. The interview also addressed the possibility of self-awareness in AI.
Hassabis said current systems are not conscious, but that future models could exhibit signs of self-understanding. Still, he emphasized the philosophical and biological divide: even if machines mimic conscious behavior, they are not made of the same "squishy carbon matter" as humans. Hassabis also predicted major developments in robotics, saying breakthroughs could come in the next few years. The segment featured robots completing tasks with vague instructions — like identifying a green block formed by mixing yellow and blue — suggesting rising reasoning abilities in physical systems.

Accomplishments and safety concerns

The segment revisited DeepMind's landmark achievement with AlphaFold, the AI model that predicted the structure of over 200 million proteins. Hassabis and colleague John Jumper were awarded the 2024 Nobel Prize in Chemistry for this work. Hassabis emphasized that this advance could accelerate drug development, potentially shrinking timelines from a decade to just weeks. "I think one day maybe we can cure all disease with the help of AI," he said.

Despite the optimism, Hassabis voiced clear concerns. He cited two major risks: the misuse of AI by bad actors and the growing autonomy of systems beyond human control. He emphasized the importance of building in guardrails and value systems — teaching AI as one might teach a child. He also called for international cooperation, noting that AI's influence will touch every country and culture. "One of my big worries," he said, "is that the race for AI dominance could become a race to the bottom for safety." He stressed the need for leading players and nation-states to coordinate on ethical development and oversight.

The segment ended with a meditation on the future: a world where AI tools could transform almost every human endeavor — and eventually reshape how we think about knowledge, consciousness, and even the meaning of life. As Hassabis put it, "We need new great philosophers to come about… to understand the implications of this system."
  • VENTUREBEAT.COM
    Relyance AI builds ‘x-ray vision’ for company data: Cuts AI compliance time by 80% while solving trust crisis
    Relyance AI's new Data Journeys platform gives enterprises unprecedented visibility into data flows, reducing AI compliance time by 80% while helping organizations build trustworthy artificial intelligence systems in an increasingly regulated landscape.
  • VENTUREBEAT.COM
    VentureBeat spins out GamesBeat, accelerates enterprise AI mission
    VentureBeat today announced the spinout of GamesBeat as a standalone company – a strategic move that sharpens our focus on the biggest transformation of our time: the enterprise shift to AI, data infrastructure and intelligent security.
  • WWW.THEVERGE.COM
    Perplexity is reportedly key to Motorola’s next Razr
    Perplexity's AI voice assistant will reportedly play a significant role in the upcoming Motorola Razr expected to be announced April 24th, Bloomberg reports. The news comes after Motorola posted a teaser video of the Razr on social media last week, showing the foldable device animate into the word AI. Perplexity is also working with T-Mobile's parent company on a new "AI Phone" with agents that could handle tasks like booking flights without needing the user to interact with apps.

Sources speaking to Bloomberg's Mark Gurman say Perplexity has a deal with Motorola to feature its AI assistant alongside Google's Gemini as an option. Motorola will have a special user interface to interact with Perplexity to encourage customers to try it, and the company will feature Perplexity in marketing.

"When the ordinary flips to the extraordinary. #MakeItIconic 4/24 pic.twitter.com/tJ3Mk67uaL" — motorolaus (@MotorolaUS) April 10, 2025

Perplexity Assistant is also reportedly coming to Samsung devices, although talks are still early, according to Bloomberg's sources. It's hard to know how advanced those conversations are, but it's easy to understand why Perplexity would want to work out a deal to get its assistant set up as the default one on Galaxy devices, or at least as an option for users to preload. Samsung already uses Gemini as its default AI assistant and Google as its main search engine provider.

Correction, April 17th: A previous version of this article said Motorola is announcing the Razr this week. It is next week.
  • WWW.THEVERGE.COM
    Wikipedia is giving AI developers its data to fend off bot scrapers
    Wikipedia is attempting to dissuade artificial intelligence developers from scraping the platform by releasing a dataset that’s specifically optimized for training AI models. The Wikimedia Foundation announced on Wednesday that it had partnered with Kaggle — a Google-owned data science community platform that hosts machine learning data — to publish a beta dataset of “structured Wikipedia content in English and French.” Wikimedia says the dataset hosted by Kaggle has been “designed with machine learning workflows in mind,” making it easier for AI developers to access machine-readable article data for modeling, fine-tuning, benchmarking, alignment, and analysis. The content within the dataset is openly licensed, and as of April 15th, includes research summaries, short descriptions, image links, infobox data, and article sections — minus references or non-written elements like audio files. The “well-structured JSON representations of Wikipedia content” available to Kaggle users should be a more attractive alternative to “scraping or parsing raw article text” according to Wikimedia — an issue that’s currently putting strain on Wikipedia’s servers as automated AI bots relentlessly consume the platform’s bandwidth. Wikimedia already has content sharing agreements in place with Google and the Internet Archive, but the Kaggle partnership should make that data more accessible for smaller companies and independent data scientists. “As the place the machine learning community comes for tools and tests, Kaggle is extremely excited to be the host for the Wikimedia Foundation’s data,” said Kaggle partnerships lead Brenda Flynn. “Kaggle is excited to play a role in keeping this data accessible, available, and useful.”
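Wikimedia describes the release as well-structured JSON representations of article content. As a rough illustration of how a developer might consume such a file, here is a minimal Python sketch; the filename and field names are assumptions for illustration, not the actual Kaggle schema.

```python
import json

# Hypothetical file and field names; check the Kaggle dataset card for the real layout.
with open("wikipedia_structured_en.jsonl", encoding="utf-8") as f:
    for line in f:
        article = json.loads(line)
        title = article.get("title", "")        # article title
        summary = article.get("abstract", "")   # short description / research summary
        infobox = article.get("infobox", {})    # structured infobox data
        sections = article.get("sections", [])  # article sections, minus references
        print(title, "-", len(sections), "sections")
        break  # inspect just the first record
```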
  • TOWARDSDATASCIENCE.COM
    Beyond the Code: Unconventional Lessons from Empathetic Interviewing
    Recently, I've been interviewing Computer Science students applying for data science and engineering internships with a 4-day turnaround from CV vetting to final decisions. With a small local office of 10 and no in-house HR, hiring managers handle the entire process. This article reflects on the lessons learned across CV reviews, technical interviews, and post-interview feedback. My goal is to help interviewers and interviewees make this process more meaningful, kind, and productive.

Principles That Guide the Process

- Foster meaningful discussions rooted in real work to get maximum signal and provide transferrable knowledge
- Ensure applicants solve all problems during the experience; judge excellence by how much inspiration arises unprompted
- Make sure even unsuccessful applicants walk away having learned something
- Set clear expectations and communicate transparently

The Process Overview

1. Interview Brief
2. CV Vetting
3. 1-Hour Interview
4. Post-Interview Feedback

A single, well-designed hour can be enough to judge potential and create a positive experience, provided it's structured around real-world scenarios and mutual respect. The effectiveness of these tips will depend on company size, the rigidity of existing processes, and the interviewer's personality and leadership skills.

Let's examine each component in more detail to understand how they contribute to a more empathetic and effective interview process.

Interview Brief: Set the Tone Early

Link to sanitized version.

The brief provides:

- Agenda
- Setup requirements (debugger, IDE, LLM access)
- Task expectations

Brief Snippet: Technical Problem Solving

Exercise 1: Code Review (10-15 min). Given sample code, comment on its performance characteristics using Python/computer science concepts.

What signals this exercise provides:

- Familiarity with IDE, filesystem, and basic I/O
- Sense of high-performance, scalable code
- Ability to read and understand code
- Ability to communicate and explain code

No one likes turning up to a meeting without an agenda, so why offer candidates any less context than we expect from teammates?

Process Design

When evaluating which questions to ask, well-designed ones should leave plenty of room for expanding the depth of the discussion. Interviewers can show empathy by providing clear guidance on expectations. For instance, sharing exercise-specific evaluation criteria (which I refer to as "Signals" in the brief) allows candidates to explore beyond the basics.

Code or no code

Whether I include pre-written code or expect the candidate to write it depends on the time available. I typically reveal it at the start of each task to save time, especially since LLMs can often generate the code, as long as the candidate demonstrates the right thinking.

CV Vetting: Signal vs Noise

You can't verify every claim on a CV, but you can look for strong signals.

Git Introspection

One trick is to run git log --oneline --graph --author=gitgithan --date=short --pretty=format:"%h %ad %s" to see all the commits authored by a particular contributor. You can see what type of work it is (feature, refactoring, testing, documentation), and how clear the commit messages are. A short sketch that automates this tally appears at the end of this section.

Strong signals

- Self-directed projects or open-source contributions
- Evidence of cross-functional communication and impact

Weak or Misleading signals

- Guided tutorial projects are less effective in showing vision or drive
- Bombastic adjectives like "passionate member" or "indispensable position"
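To make the Git introspection tally concrete, here is a minimal Python sketch that shells out to git and counts rough commit types. It assumes the candidate's repository is cloned locally and that commit subjects use conventional prefixes (feat:, fix:, docs:, and so on); the author handle is the one from the command above.

```python
import subprocess
from collections import Counter

# Assumes a locally cloned repo and conventional commit prefixes.
cmd = [
    "git", "log", "--author=gitgithan", "--date=short",
    "--pretty=format:%h %ad %s",
]
log = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

types = Counter()
for line in log.splitlines():
    subject = line.split(" ", 2)[-1].lower()  # drop hash and date, keep the message
    prefix = subject.split(":", 1)[0] if ":" in subject else "other"
    types[prefix] += 1

print(types.most_common(10))  # rough mix of feature, fix, docs, test, refactor work
```

Reading a handful of messages by hand still tells you more about clarity than any tally, but the count gives a quick sense of how the candidate's work splits across features, refactoring, testing, and documentation.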
Interview: Uncovering Mindsets

Reflecting on the Interview Brief

I begin by asking for thoughts on the Interview Brief. This has a few benefits:

- How conscientious are they in following the setup instructions? Are they prepared with the debugger and LLM ready to go?
- What aspects confuse them? I realized I should have specified "Pandas DataFrame" instead of just "dataframe" in the brief. Some candidates without Pandas installed experienced unnecessary setup stress. However, observing how they handled this issue provided valuable insight into their problem-solving approach. This also highlights their attention to detail and how they engage with documentation, often leading to suggestions for improvement.
- What tools are they unfamiliar with? If there's a lack of knowledge in concurrent programming or AWS, it's more efficient to spend less time on Exercise 3 and focus elsewhere. If they've tried to learn these tools in the short time between receiving the brief and the interview, it demonstrates strong initiative. The resources they consult also reveal their learning style and resourcefulness.

Favorite Behavioral Question

To uncover essential qualities beyond technical skills, I find the following behavioral question particularly revealing:

Can you describe a time when you saw something that wasn't working well and advocated for an improvement?

This question reveals a range of desirable traits:

- Critical thinking to recognize when something is off
- Situational awareness to assess the current state, and vision to define a better future
- Judgment to understand why the new approach is an improvement
- Influence and persistence in advocating for change
- Cultural sensitivity and change management awareness, understanding why advocacy may have failed, and showing the grit to try again with a new approach

Effective Interviewee Behaviours (Behavioural Section)

- Attuned to personal behavior, both its effect on others and how it's affected by them
- Demonstrates the ability to overcome motivation challenges and inspire others
- Provides concise, inverted-pyramid answers that uniquely connect to personal values

Ineffective Interviewee Behaviours (Behavioural Section)

- Offers lengthy preambles about general situations before sharing personal insights

Tips for Interviewers (Behavioural Section)

I've never been a fan of questions focused on interpersonal conflicts, as many people tend to avoid confrontation by becoming passive (e.g., not responding or mentally disengaging) rather than confronting the issue directly. These questions also often disadvantage candidates with less formal work experience. A helpful approach is to jog their memory by referencing group experiences listed on their CV and suggesting potential scenarios that could be useful for discussion. Providing instant feedback after their answers is also valuable, allowing candidates to note which stories are worth refining for future interviews.

Technical Problem Solving: Show Thinking, Not Just Results

Measure Potential, Not Just Preparedness

- Has high agency; jumps into back-of-the-envelope calculations instead of making guesses
- Re-examines assumptions
- Has the low ego to reveal what they don't know and make good guesses about why something is so based on limited information
- Makes insightful analogies (e.g., database cursor vs file pointer) that show deeper understanding and abstraction

Effective Interviewee Behaviours (Technical Section)

- Exercise 1 on file reading with generators: admitting upfront their unfamiliarity with yield syntax invites the interviewer to hint that it's not important
- Exercise 2 on data cleaning after a JOIN: caring about data lineage and the constraints of the domain (units, collection instrument) shows systems thinking and a drive to fix the root cause

Ineffective Interviewee Behaviours (Technical Section)

- Remains silent when facing challenges instead of seeking clarification
- Fails to connect new concepts with prior knowledge
- Calls in from noisy, visually distracting environments, creating friction on top of existing challenges like accents

Tips for Interviewers (Technical Section)

- Start with guiding questions that explore high-level considerations before narrowing down. This helps candidates anchor their reasoning in principles rather than trivia.
- Avoid overvaluing your own prepared "correct answers." The goal isn't to test memory, but to observe reasoning.
- Withhold judgment in the moment, especially when the candidate explores a tangential but thoughtful direction. Let them follow their thought process uninterrupted. This builds confidence and reveals how they navigate ambiguity.
- Use curiosity as your primary lens. Ask yourself, "What is this candidate trying to show me?" rather than "Did they get it right?"

LLM: A Window into Learning Styles

Modern technical interviews should reflect the reality of tool-assisted development. I encouraged candidates to use LLMs — not as shortcuts, but as legitimate creation tools. Restricting them only creates an artificial environment, divorced from real-world workflows. More importantly, how candidates used LLMs during coding exercises revealed their learning preferences (learning-optimized vs. task-optimized) and problem-solving styles (explore vs. exploit). You can think of these two dichotomies as sides of the same coin:

Learning-Optimized vs. Task-Optimized (Goals and Principles)

- Learning-Optimized: Focuses on understanding principles, expanding knowledge, and long-term learning.
- Task-Optimized: Focuses on solving immediate tasks efficiently, often prioritizing quick completion over deep understanding.

Explore vs. Exploit (How it's done)

- Explore: Seeks new solutions, experiments with various approaches, and thrives in uncertain or innovative environments.
- Exploit: Leverages known solutions, optimizes existing strategies, and focuses on efficiency and results.

4 styles of prompting

In Exercise 2, I deleted a file.seek(0) line, causing pandas.read_csv() to raise EmptyDataError: No columns to parse from file. (A minimal sketch reproducing this setup appears below.) Candidates prompted LLMs in four styles:

1. Paste the error message only
2. Paste the error message and the erroring line from the source code
3. Paste the error message and the full source code
4. Paste the full traceback and the full source code

My interpretations

(1) is learning-optimized, taking more iterations. (4) is task-optimized, context-rich, and efficient.

Those who choose (1) start looking at a problem from the highest level before deciding where to go. They consider that the error may not even be in the source code, but in the environment or elsewhere (see Why Code Rusts in the Resources). They optimize for learning rather than fixing the error immediately. Those with poor code-reproduction discipline who do (4) may not learn as much as (1), because they can't see the error again after fixing it.
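For readers who want to reproduce the Exercise 2 failure mode, here is a minimal sketch with toy data; the real exercise used a larger joined dataset, so treat this only as an illustration of the cursor issue.

```python
import io
import pandas as pd

# Write a small CSV into an in-memory file, as an intermediate step might.
buf = io.StringIO()
buf.write("id,value\n1,10\n2,20\n")

# After writing, the stream's cursor sits at the end. Removing the next line
# reproduces the bug: pandas.errors.EmptyDataError: No columns to parse from file
buf.seek(0)

df = pd.read_csv(buf)
print(df)
```

Pasting only the final traceback line points at read_csv, but the actual cause is the missing rewind a few lines earlier, which is exactly the distinction the four prompting styles surface.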
My ideal is (4) for speedy fixes, but taking good notes along the way so the root cause is understood and you come away with sharper debugging instincts.

Red Flag: Misplaced Focus on the Traceback Line

Even though (2) included more detail in the prompt than (1), more isn't always better. In fact, (2) raised a concern: it suggested the candidate believed the line highlighted in the traceback (---> 44 df_a_loaded = pd.read_csv) was the actual cause of the error. In reality, the root cause could lie much earlier in the execution, potentially in a different file altogether.

Prompt Efficiency Matters

After Step (2), the LLM returned three suggested fixes — only the third one was correct. The candidate spent time exploring Fix #1, which wasn't related to the bug at all. However, this exploration did uncover other quirks I had embedded in the code (NaNs sprinkled across the joined result from misaligned timestamps as the joining key). Had the candidate instead used a prompt like in Step (3) or (4), the LLM would've provided a single, accurate fix, along with a deeper explanation directly tied to the file cursor issue.

Style vs Flow

Some candidates added pleasantries and extra instructions to their prompts, rather than just pasting the relevant code and error message. While this is partly a matter of style, it can disrupt the session's flow, especially under time constraints or with slower typing, delaying the solution. There's also an environmental cost.

Feedback: The Real Cover Letter

After each interview, I asked candidates to write reflections on:

- What they learned
- What could be improved
- What they thought of the process

This is far more useful than cover letters, which are built on asymmetric information, vague expectations, and GPT-generated fluff. Here's an example from the offered candidate.

Excelling in this area builds confidence that colleagues can provide candid, high-quality feedback to help each other address blind spots. It also signals the likelihood that someone will take initiative in tasks like documenting processes, writing thorough meeting minutes, and volunteering for brown bag presentations.

Effective Interviewee Behaviours (Feedback Section)

- Communicates expected completion times and follows through with timely submissions.
- Formats responses with clear structure — using paragraph spacing, headers, bold/italics, and nested lists — to enhance readability.
- Reflects on specific interview moments by drawing lessons from good notes or memory.
- Recognizes and adapts existing thinking patterns or habits through meta-cognition.

Ineffective Interviewee Behaviours (Feedback Section)

- Submits unstructured walls of text without a clear thesis or logical flow
- Fixates solely on technical gaps while ignoring behavioural weaknesses

Tips for Interviewers (Feedback Section)

Live feedback during the interview was time-constrained, so give written feedback after the interview about how they could have improved in each section, with learning resources.
- If done independently from the interviewee's feedback, and it turns out the observations match, that's a strong signal of alignment.
- It's an act of goodwill towards unsuccessful candidates, a building of the company brand, and an opportunity for lifelong collaboration.

Carrying It Forward: Actions That Matter

For Interviewers

- Develop observation and facilitation skills
- Provide actionable, empathetic feedback
- Remember: your influence could shape someone's career for decades

For Interviewees

- Make the most of the limited information you have, but try to seek more
- Be curious, prepared, and reflective to learn from each opportunity

People will forget what you said, people will forget what you did, but people will never forget how you made them feel. – Maya Angelou

As interviewers, our job isn't just to assess — it's to reveal. Not just whether someone passes, but what they're capable of becoming. At its best, empathetic interviewing isn't a gate — it's a bridge. A bridge to mutual understanding, respect, and possibly, a long-term partnership grounded not just in technical skills, but in human potential beyond the code.

The interview isn't just a filter — it's a mirror. The interview reflects who we are. Our questions, our feedback, our presence — they signal the culture we're building, and the kind of teammates we strive to be. Let's raise the bar on both sides of the table. Kindly, thoughtfully, and together.

If you're also a hiring manager passionate about designing meaningful interviews, let's connect on LinkedIn (https://www.linkedin.com/in/hanqi91/). I'd be happy to share more about the exercises I prepared.

Resources

- Writing useful commit messages: https://refactoringenglish.com/chapters/commit-messages/
- Writing impactful proposals (The Pyramid Principle): https://www.amazon.sg/Pyramid-Principle-Logic-Writing-Thinking/dp/0273710516
- High agency: http://highagency.com/
- Glue work: https://www.noidea.dog/glue
- The Missing Readme: https://www.amazon.sg/dp/1718501838
- Why Code Rusts: https://www.tdda.info/why-code-rusts