• VENTUREBEAT.COM
    Watch: Google DeepMind CEO and AI Nobel winner Demis Hassabis on CBS’ ’60 Minutes’
A segment on CBS' weekly in-depth TV news program 60 Minutes last night (also shared on YouTube here) offered an inside look at Google DeepMind and the vision of its co-founder and Nobel Prize-winning CEO, legendary AI researcher Demis Hassabis. The interview traced DeepMind's rapid progress in artificial intelligence and its ambition to achieve artificial general intelligence (AGI): a machine intelligence with human-like versatility and superhuman scale. Hassabis described today's AI trajectory as being on an "exponential curve of improvement," fueled by growing interest, talent, and resources entering the field. Two years after a prior 60 Minutes interview heralded the chatbot era, Hassabis and DeepMind are now pursuing more capable systems designed not only to understand language, but also the physical world around them.

The interview came after Google's Cloud Next 2025 conference earlier this month, at which the search giant introduced a host of new AI models and features centered around its Gemini 2.5 multimodal AI model family. Google came out of that conference appearing to have taken the lead among tech companies in providing powerful AI for enterprise use cases at the most affordable price points, surpassing OpenAI.

More details on Google DeepMind's 'Project Astra'
One of the segment's focal points was Project Astra, DeepMind's next-generation chatbot that goes beyond text. Astra is designed to interpret the visual world in real time. In one demo, it identified paintings, inferred emotional states, and created a story around a Hopper painting with the line: "Only the flow of ideas moving onward." When asked if it was growing bored, Astra replied thoughtfully, revealing a degree of sensitivity to tone and interpersonal nuance. Product manager Bibo Xu underscored Astra's unique design: an AI that can "see, hear, and chat about anything," a marked step toward embodied AI systems.

Gemini: Toward actionable AI
The broadcast also featured Gemini, DeepMind's AI system being trained not only to interpret the world but also to act in it, completing tasks like booking tickets and shopping online. Hassabis said Gemini is a step toward AGI: an AI with a human-like ability to navigate and operate in complex environments. The 60 Minutes team tried out a prototype embedded in glasses, demonstrating real-time visual recognition and audio responses. Could it also hint at an upcoming return of the pioneering yet ultimately off-putting early augmented reality glasses known as Google Glass, which debuted in 2012 before being retired in 2015?

While specific Gemini model versions like Gemini 2.5 Pro or Flash were not mentioned in the segment, Google's broader AI ecosystem has recently introduced those models for enterprise use, which may reflect parallel development efforts. These integrations support Google's growing ambitions in applied AI, though they fall outside the scope of what was directly covered in the interview.

AGI as soon as 2030?
When asked for a timeline, Hassabis projected that AGI could arrive as soon as 2030, with systems that understand their environments "in very nuanced and deep ways." He suggested that such systems could be seamlessly embedded into everyday life, from wearables to home assistants. The interview also addressed the possibility of self-awareness in AI.
Hassabis said current systems are not conscious, but that future models could exhibit signs of self-understanding. Still, he emphasized the philosophical and biological divide: even if machines mimic conscious behavior, they are not made of the same "squishy carbon matter" as humans. Hassabis also predicted major developments in robotics, saying breakthroughs could come in the next few years. The segment featured robots completing tasks from vague instructions, like identifying a green block formed by mixing yellow and blue, suggesting rising reasoning abilities in physical systems.

Accomplishments and safety concerns
The segment revisited DeepMind's landmark achievement with AlphaFold, the AI model that predicted the structures of over 200 million proteins. Hassabis and colleague John Jumper were awarded the 2024 Nobel Prize in Chemistry for this work. Hassabis emphasized that this advance could accelerate drug development, potentially shrinking timelines from a decade to just weeks. "I think one day maybe we can cure all disease with the help of AI," he said.

Despite the optimism, Hassabis voiced clear concerns. He cited two major risks: the misuse of AI by bad actors and the growing autonomy of systems beyond human control. He emphasized the importance of building in guardrails and value systems, teaching AI as one might teach a child. He also called for international cooperation, noting that AI's influence will touch every country and culture. "One of my big worries," he said, "is that the race for AI dominance could become a race to the bottom for safety." He stressed the need for leading players and nation-states to coordinate on ethical development and oversight.

The segment ended with a meditation on the future: a world where AI tools could transform almost every human endeavor, and eventually reshape how we think about knowledge, consciousness, and even the meaning of life. As Hassabis put it, "We need new great philosophers to come about… to understand the implications of this system."
  • VENTUREBEAT.COM
    Relyance AI builds ‘x-ray vision’ for company data: Cuts AI compliance time by 80% while solving trust crisis
Relyance AI's new Data Journeys platform gives enterprises unprecedented visibility into data flows, reducing AI compliance time by 80% while helping organizations build trustworthy artificial intelligence systems in an increasingly regulated landscape.
  • VENTUREBEAT.COM
    VentureBeat spins out GamesBeat, accelerates enterprise AI mission
VentureBeat today announced the spinout of GamesBeat as a standalone company, a strategic move that sharpens our focus on the biggest transformation of our time: the enterprise shift to AI, data infrastructure, and intelligent security.
  • WWW.THEVERGE.COM
    Perplexity is reportedly key to Motorola’s next Razr
Perplexity's AI voice assistant will reportedly play a significant role in the upcoming Motorola Razr expected to be announced April 24th, Bloomberg reports. The news comes after Motorola posted a teaser video of the Razr on social media last week, showing the foldable device animating into the word "AI." Perplexity is also working with T-Mobile's parent company on a new "AI Phone" with agents that could handle tasks like booking flights without needing the user to interact with apps.

Sources speaking to Bloomberg's Mark Gurman say Perplexity has a deal with Motorola to feature its AI assistant as an option alongside Google's Gemini. Motorola will have a special user interface for interacting with Perplexity to encourage customers to try it, and the company will feature Perplexity in its marketing.

"When the ordinary flips to the extraordinary. #MakeItIconic 4/24 pic.twitter.com/tJ3Mk67uaL" (motorolaus, @MotorolaUS, April 10, 2025)

Perplexity Assistant is also reportedly coming to Samsung devices, although talks are still early, according to Bloomberg's sources. It's hard to know how advanced those conversations are, but it's easy to understand why Perplexity would want to work out a deal to get its assistant set up as the default one on Galaxy devices, or at least as an option for users to preload. Samsung already uses Gemini as its default AI assistant and Google as its main search engine provider.

Correction, April 17th: A previous version of this article said Motorola is announcing the Razr this week. It is next week.
  • WWW.THEVERGE.COM
    Wikipedia is giving AI developers its data to fend off bot scrapers
Wikipedia is attempting to dissuade artificial intelligence developers from scraping the platform by releasing a dataset that's specifically optimized for training AI models. The Wikimedia Foundation announced on Wednesday that it had partnered with Kaggle, a Google-owned data science community platform that hosts machine learning data, to publish a beta dataset of "structured Wikipedia content in English and French."

Wikimedia says the dataset hosted by Kaggle has been "designed with machine learning workflows in mind," making it easier for AI developers to access machine-readable article data for modeling, fine-tuning, benchmarking, alignment, and analysis. The content within the dataset is openly licensed and, as of April 15th, includes research summaries, short descriptions, image links, infobox data, and article sections, minus references or non-written elements like audio files.

The "well-structured JSON representations of Wikipedia content" available to Kaggle users should be a more attractive alternative to "scraping or parsing raw article text," according to Wikimedia; that scraping is currently putting strain on Wikipedia's servers as automated AI bots relentlessly consume the platform's bandwidth. Wikimedia already has content-sharing agreements in place with Google and the Internet Archive, but the Kaggle partnership should make that data more accessible for smaller companies and independent data scientists.

"As the place the machine learning community comes for tools and tests, Kaggle is extremely excited to be the host for the Wikimedia Foundation's data," said Kaggle partnerships lead Brenda Flynn. "Kaggle is excited to play a role in keeping this data accessible, available, and useful."
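As a rough illustration of what "machine-learning-ready" access could look like in practice, here is a minimal Python sketch that pulls the dataset down with the kagglehub client and inspects one structured article. The dataset slug, the JSON Lines layout, and the field names are assumptions based on Wikimedia's description, not details confirmed by the article:

import json
from pathlib import Path

import kagglehub  # pip install kagglehub

# Download the beta dataset locally. The slug below is an assumption;
# check Kaggle for the exact dataset name.
path = kagglehub.dataset_download("wikimedia-foundation/wikipedia-structured-contents")

# Assuming the dump ships as JSON Lines (one structured article per line),
# stream the first record rather than loading everything into memory.
jsonl_files = sorted(Path(path).rglob("*.jsonl"))
with open(jsonl_files[0], encoding="utf-8") as f:
    article = json.loads(next(f))

# Hypothetical keys ("name", "abstract") reflecting the announced schema of
# summaries, descriptions, infoboxes, and sections; the beta may differ.
print(article.get("name"), "-", str(article.get("abstract", ""))[:120])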
  • TOWARDSDATASCIENCE.COM
    Beyond the Code: Unconventional Lessons from Empathetic Interviewing
Recently, I've been interviewing Computer Science students applying for data science and engineering internships, with a 4-day turnaround from CV vetting to final decisions. With a small local office of 10 and no in-house HR, hiring managers handle the entire process. This article reflects on the lessons learned across CV reviews, technical interviews, and post-interview feedback. My goal is to help interviewers and interviewees make this process more meaningful, kind, and productive.

Principles That Guide the Process
- Foster meaningful discussions rooted in real work to get maximum signal and provide transferable knowledge
- Ensure applicants solve all problems during the experience; judge excellence by how much inspiration arises unprompted
- Make sure even unsuccessful applicants walk away having learned something
- Set clear expectations and communicate transparently

The Process Overview
1. Interview Brief
2. CV Vetting
3. 1-Hour Interview
4. Post-Interview Feedback

A single, well-designed hour can be enough to judge potential and create a positive experience, provided it's structured around real-world scenarios and mutual respect. How well these tips work will depend on company size, the rigidity of existing processes, and the interviewer's personality and leadership skills. Let's examine each component in more detail to understand how it contributes to a more empathetic and effective interview process.

Interview Brief: Set the Tone Early
Link to sanitized version. The brief provides:
- Agenda
- Setup requirements (debugger, IDE, LLM access)
- Task expectations

Brief Snippet: Technical Problem Solving
Exercise 1: Code Review (10-15 min). Given sample code, comment on its performance characteristics using Python/computer science concepts. What signals this exercise provides:
- Familiarity with IDE, filesystem, and basic I/O
- Sense of high-performance, scalable code
- Ability to read and understand code
- Ability to communicate and explain code

No one likes turning up to a meeting without an agenda, so why offer candidates any less context than we expect from teammates?

Process Design
When evaluating which questions to ask, well-designed ones should leave plenty of room for expanding the depth of the discussion. Interviewers can show empathy by providing clear guidance on expectations. For instance, sharing exercise-specific evaluation criteria (which I refer to as "Signals" in the brief) allows candidates to explore beyond the basics.

Code or no code
Whether I include pre-written code or expect the candidate to write it depends on the time available. I typically reveal it at the start of each task to save time, especially since LLMs can often generate the code, as long as the candidate demonstrates the right thinking.

CV Vetting: Signal vs Noise
You can't verify every claim on a CV, but you can look for strong signals.

Git Introspection
One trick is to run git log --oneline --graph --author=gitgithan --date=short --pretty=format:"%h %ad %s" to see all the commits authored by a particular contributor. You can see what type of work it is (feature, refactoring, testing, documentation) and how clear the commit messages are. A small automation sketch follows the signal lists below.

Strong signals:
- Self-directed projects or open-source contributions
- Evidence of cross-functional communication and impact

Weak or misleading signals:
- Guided tutorial projects are less effective in showing vision or drive
- Bombastic adjectives like "passionate member" or "indispensable position"
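Here is a minimal Python sketch of that kind of introspection, tallying work types from commit subjects. The author handle and the keyword heuristics are hypothetical placeholders, not part of the original exercise:

import subprocess
from collections import Counter

# Hypothetical handle; replace with the contributor's actual git author name.
AUTHOR = "gitgithan"

# Same idea as the git log one-liner above, machine-readable: hash, date, subject.
# Run this inside the repository you are vetting.
log = subprocess.run(
    ["git", "log", f"--author={AUTHOR}", "--date=short", "--pretty=format:%h %ad %s"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

# Crude keyword heuristics for the type of work; tune to the repo's conventions.
kinds = {"feat": "feature", "fix": "fix", "refactor": "refactoring",
         "test": "testing", "doc": "documentation"}
counts = Counter(
    next((label for key, label in kinds.items() if key in subject.lower()), "other")
    for *_, subject in (line.split(" ", 2) for line in log)
)
print(f"{len(log)} commits by {AUTHOR}:", dict(counts))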
Interview: Uncovering Mindsets

Reflecting on the Interview Brief
I begin by asking for thoughts on the Interview Brief. This has a few benefits:
- How conscientious are they in following the setup instructions? Are they prepared with the debugger and LLM ready to go?
- What aspects confuse them? I realized I should have specified "Pandas DataFrame" instead of just "dataframe" in the brief. Some candidates without Pandas installed experienced unnecessary setup stress; however, observing how they handled this issue provided valuable insight into their problem-solving approach. This also highlights their attention to detail and how they engage with documentation, often leading to suggestions for improvement.
- What tools are they unfamiliar with? If there's a lack of knowledge in concurrent programming or AWS, it's more efficient to spend less time on Exercise 3 and focus elsewhere. If they've tried to learn these tools in the short time between receiving the brief and the interview, it demonstrates strong initiative; the resources they consult also reveal their learning style and resourcefulness.

Favorite Behavioral Question
To uncover essential qualities beyond technical skills, I find the following behavioral question particularly revealing:

Can you describe a time when you saw something that wasn't working well and advocated for an improvement?

This question reveals a range of desirable traits:
- Critical thinking to recognize when something is off
- Situational awareness to assess the current state, and vision to define a better future
- Judgment to understand why the new approach is an improvement
- Influence and persistence in advocating for change
- Cultural sensitivity and change-management awareness: understanding why advocacy may have failed, and showing the grit to try again with a new approach

Effective Interviewee Behaviours (Behavioural Section)
- Attuned to personal behavior: both its effect on others and how it's affected by them
- Demonstrates the ability to overcome motivation challenges and inspire others
- Provides concise, inverted-pyramid answers that connect to personal values

Ineffective Interviewee Behaviours (Behavioural Section)
- Offers lengthy preambles about general situations before sharing personal insights

Tips for Interviewers (Behavioural Section)
I've never been a fan of questions focused on interpersonal conflicts, as many people tend to avoid confrontation by becoming passive (e.g., not responding or mentally disengaging) rather than confronting the issue directly. These questions also often disadvantage candidates with less formal work experience. A helpful approach is to jog their memory by referencing group experiences listed on their CV and suggesting potential scenarios that could be useful for discussion. Providing instant feedback after their answers is also valuable, allowing candidates to note which stories are worth refining for future interviews.

Technical Problem Solving: Show Thinking, Not Just Results

Measure Potential, Not Just Preparedness
- Has high agency; jumps into back-of-the-envelope calculations instead of making guesses
- Re-examines assumptions
- Has low ego: reveals what they don't know and makes good guesses about why something is so, based on limited information
- Makes insightful analogies (e.g., database cursor vs. file pointer) that show deeper understanding and abstraction
Effective Interviewee Behaviours (Technical Section)
- Exercise 1, on file reading with generators: admitting upfront their unfamiliarity with yield syntax invites the interviewer to hint that it's not important
- Exercise 2, on data cleaning after a JOIN: caring about data lineage and the constraints of the domain (units, collection instrument) shows systems thinking and a drive to fix the root cause

Ineffective Interviewee Behaviours (Technical Section)
- Remains silent when facing challenges instead of seeking clarification
- Fails to connect new concepts with prior knowledge
- Calls in from noisy, visually distracting environments, creating friction on top of existing challenges like accents

Tips for Interviewers (Technical Section)
- Start with guiding questions that explore high-level considerations before narrowing down. This helps candidates anchor their reasoning in principles rather than trivia.
- Avoid overvaluing your own prepared "correct answers." The goal isn't to test memory, but to observe reasoning.
- Withhold judgment in the moment, especially when the candidate explores a tangential but thoughtful direction. Let them follow their thought process uninterrupted. This builds confidence and reveals how they navigate ambiguity.
- Use curiosity as your primary lens. Ask yourself, "What is this candidate trying to show me?" rather than "Did they get it right?"

LLM: A Window into Learning Styles
Modern technical interviews should reflect the reality of tool-assisted development. I encouraged candidates to use LLMs not as shortcuts, but as legitimate creation tools. Restricting them only creates an artificial environment, divorced from real-world workflows. More importantly, how candidates used LLMs during coding exercises revealed their learning preferences (learning-optimized vs. task-optimized) and problem-solving styles (explore vs. exploit). You can think of these two dichotomies as sides of the same coin:

Learning-Optimized vs. Task-Optimized (goals and principles)
- Learning-optimized: focuses on understanding principles, expanding knowledge, and long-term learning.
- Task-optimized: focuses on solving immediate tasks efficiently, often prioritizing quick completion over deep understanding.

Explore vs. Exploit (how it's done)
- Explore: seeks new solutions, experiments with various approaches, and thrives in uncertain or innovative environments.
- Exploit: leverages known solutions, optimizes existing strategies, and focuses on efficiency and results.

4 Styles of Prompting
In Exercise 2, I deleted a file.seek(0) line, causing pandas.read_csv() to raise EmptyDataError: No columns to parse from file. Candidates prompted LLMs in four styles:
1. Paste the error message only
2. Paste the error message and the erroring line from the source code
3. Paste the error message and the full source code
4. Paste the full traceback and the full source code

My interpretations: (1) is learning-optimized, taking more iterations; (4) is task-optimized, context-rich, and efficient. Those who choose (1) start looking at a problem from the highest level before deciding where to go. They consider that the error may not even be in the source code, but in the environment or elsewhere (see "Why Code Rusts" in the Resources below). They optimize for learning rather than fixing the error immediately. Those with poor code-reproduction discipline who do (4) may not learn as much as (1), because they can't see the error again after fixing it.
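To make the exercise concrete, here is a minimal reproduction of the planted bug; a sketch using an in-memory buffer, since the actual exercise files aren't public:

import io
import pandas as pd
from pandas.errors import EmptyDataError

buffer = io.StringIO("id,value\n1,10\n2,20\n")

df_a = pd.read_csv(buffer)  # first read consumes the stream; the cursor is now at end-of-file

# The deleted line: rewinding would let the stream be read again.
# buffer.seek(0)

try:
    df_a_loaded = pd.read_csv(buffer)  # second read sees an empty stream
except EmptyDataError as err:
    print(err)  # No columns to parse from file

A style-(1) prompt would paste only that final error line; a style-(4) prompt would include the full traceback plus the source, pointing the LLM straight at the missing rewind.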
My ideal is (4) for speedy fixes, combined with good notes taken along the way, so the root cause is understood and the candidate comes away with sharper debugging instincts.

Red Flag: Misplaced Focus on the Traceback Line
Even though (2) included more detail in the prompt than (1), more isn't always better. In fact, (2) raised a concern: it suggested the candidate believed the line highlighted in the traceback (---> 44 df_a_loaded = pd.read_csv) was the actual cause of the error. In reality, the root cause could lie much earlier in the execution, potentially in a different file altogether.

Prompt Efficiency Matters
After step (2), the LLM returned three suggested fixes; only the third one was correct. The candidate spent time exploring fix #1, which wasn't related to the bug at all. However, this exploration did uncover other quirks I had embedded in the code (NaNs sprinkled across the joined result from misaligned timestamps as the joining key). Had the candidate instead used a prompt like in step (3) or (4), the LLM would have provided a single, accurate fix, along with a deeper explanation directly tied to the file-cursor issue.

Style vs Flow
Some candidates added pleasantries and extra instructions to their prompts, rather than just pasting the relevant code and error message. While this is partly a matter of style, it can disrupt the session's flow, especially under time constraints or with slower typing, delaying the solution. There's also an environmental cost.

Feedback: The Real Cover Letter
After each interview, I asked candidates to write reflections on:
- What they learned
- What could be improved
- What they thought of the process

This is far more useful than cover letters, which are built on asymmetric information, vague expectations, and GPT-generated fluff. Here's an example from the offered candidate. Excelling in this area builds confidence that colleagues can provide candid, high-quality feedback to help each other address blind spots. It also signals the likelihood that someone will take initiative in tasks like documenting processes, writing thorough meeting minutes, and volunteering for brown-bag presentations.

Effective Interviewee Behaviours (Feedback Section)
- Communicates expected completion times and follows through with timely submissions
- Formats responses with clear structure (paragraph spacing, headers, bold/italics, and nested lists) to enhance readability
- Reflects on specific interview moments by drawing lessons from good notes or memory
- Recognizes and adapts existing thinking patterns or habits through meta-cognition

Ineffective Interviewee Behaviours (Feedback Section)
- Submits unstructured walls of text without a clear thesis or logical flow
- Fixates solely on technical gaps while ignoring behavioural weaknesses
Tips for Interviewers (Feedback Section)
Live feedback during the interview was time-constrained, so give written feedback after the interview on how the candidate could have improved in each section, with learning resources.
- If this is done independently of the interviewee's own feedback and the observations match, that's a strong signal of alignment.
- It's an act of goodwill towards unsuccessful candidates, a building of the company brand, and an opportunity for lifelong collaboration.

Carrying It Forward: Actions That Matter

For Interviewers
- Develop observation and facilitation skills
- Provide actionable, empathetic feedback
- Remember: your influence could shape someone's career for decades

For Interviewees
- Make the most of the limited information you have, but try to seek more
- Be curious, prepared, and reflective to learn from each opportunity

"People will forget what you said, people will forget what you did, but people will never forget how you made them feel" – Maya Angelou

As interviewers, our job isn't just to assess; it's to reveal. Not just whether someone passes, but what they're capable of becoming. At its best, empathetic interviewing isn't a gate; it's a bridge. A bridge to mutual understanding, respect, and possibly a long-term partnership grounded not just in technical skills, but in human potential beyond the code. The interview isn't just a filter; it's a mirror. It reflects who we are. Our questions, our feedback, our presence: they signal the culture we're building and the kind of teammates we strive to be. Let's raise the bar on both sides of the table. Kindly, thoughtfully, and together.

If you're also a hiring manager passionate about designing meaningful interviews, let's connect on LinkedIn (https://www.linkedin.com/in/hanqi91/). I'd be happy to share more about the exercises I prepared.

Resources
- Writing useful commit messages: https://refactoringenglish.com/chapters/commit-messages/
- Writing impactful proposals (The Pyramid Principle): https://www.amazon.sg/Pyramid-Principle-Logic-Writing-Thinking/dp/0273710516
- High agency: http://highagency.com/
- Glue work: https://www.noidea.dog/glue
- The Missing Readme: https://www.amazon.sg/dp/1718501838
- Why Code Rusts: https://www.tdda.info/why-code-rusts
  • TOWARDSDATASCIENCE.COM
    How to Write Queries for Tabular Models with DAX
Introduction
EVALUATE is the statement to query tabular models. Unfortunately, knowing SQL or any other query language doesn't help, as EVALUATE follows a different concept. EVALUATE has only two "parameters":
- A table to show
- A sort order (ORDER BY)

You can pass a third parameter (START AT), but it is rarely used. However, a DAX query can have additional components, defined in the DEFINE section of the query. In the DEFINE section, you can define variables and local Measures. You can also use the COLUMN and TABLE keywords there, which I have never used until now. Let's start with some simple queries and add logic step by step. First, however, let's discuss the tools.

Querying tools
There are two possibilities for querying a tabular model:
- Using the DAX query view in Power BI Desktop
- Using DAX Studio

Of course, the syntax is the same. I prefer DAX Studio over the DAX query view. It offers advanced features not available in Power BI Desktop, such as performance statistics with Server Timings and displaying the model's metrics. On the other hand, the DAX query view in Power BI Desktop provides the option to apply changes to a Measure back to the model directly after I have modified it in the query. I will discuss this later, when I explain more about defining local Measures. You can read the MS documentation on modifying Measures directly from the DAX query view; a link is in the References section below. In this article, I will use DAX Studio only.

Simple queries
The simplest query is to get all columns and all rows from a table:

EVALUATE
    Customer

This query returns the entire Customer table:

Figure 1 – Simple query on the Customer table. The number of returned rows is shown in the bottom right corner of DAX Studio, along with the position of the cursor in the query (Figure by the Author)

If I want to query a single value, for example a Measure, I must define a table, as EVALUATE requires a table as input. Curly brackets do this. Therefore, the query for a Measure looks like this:

EVALUATE
    { [Online Customer Count] }

The result is one single value:

Figure 2 – Querying a Measure with curly brackets to define a table (Figure by the Author)

Get only the first 10 rows
It's not unusual to have tables with thousands or even millions of rows. So, what if I want to see the first 10 rows to glimpse the data inside the table? For this, TOPN() does the trick. TOPN() accepts a sorting order. However, it doesn't sort the output; it only looks at the values and gets the first or last rows according to the sorting criteria. For example, let's get the ten customers with the latest birthdate (descending order):

EVALUATE
    TOPN(10
        ,Customer
        ,Customer[BirthDate]
        ,DESC)

This is the result:

Figure 3 – Here, TOPN() is used to get the top 10 rows by birthdate. Note that 11 rows are returned, as there are customers with the same birthdate (Figure by the Author)

The DAX.guide article on TOPN() states the following about ties in the resulting data: "If there is a tie in OrderBy_Expression values at the N-th row of the table, then all tied rows are returned. Then, when there are ties at the N-th row, the function might return more than n rows." This explains why we get 11 rows from the query. When sorting the output, we will see the tie for the last value, November 26, 1980.
To have the result sorted by the birthdate, you must add an ORDER BY:

EVALUATE
    TOPN(10
        ,Customer
        ,Customer[BirthDate]
        ,DESC)
    ORDER BY Customer[BirthDate] DESC

And here is the result:

Figure 4 – Result of the same TOPN() query as before, but with an ORDER BY to sort the output by the birthdate, descending (Figure by the Author)

Now, the ties at the last two rows are clearly visible.

Adding columns
Usually, I want to select only a subset of all columns in a table. If I query multiple columns, I will only get the distinct combinations of the values existing in those columns. This differs from other query languages, like SQL, where I must explicitly state that I want to remove duplicates, for example with DISTINCT. DAX has multiple functions to get a subset of columns from a table:
- ADDCOLUMNS()
- SELECTCOLUMNS()
- SUMMARIZE()
- SUMMARIZECOLUMNS()

Of these four, SUMMARIZECOLUMNS() is the most useful for general purposes. When trying these four functions, be cautious with ADDCOLUMNS(), as it can produce unexpected results; read this SQLBI article for more details. OK, how can we use SUMMARIZECOLUMNS() in a query?

EVALUATE
    SUMMARIZECOLUMNS('Customer'[CustomerType])

This is the result:

Figure 5 – Getting the distinct values of CustomerType with SUMMARIZECOLUMNS() (Figure by the Author)

As described above, we get only the distinct values of the CustomerType column. When querying multiple columns, the result is the distinct combinations of the existing data:

Figure 6 – Getting multiple columns (Figure by the Author)

Now, I can add a Measure to the query to get the number of customers per combination:

EVALUATE
    SUMMARIZECOLUMNS('Customer'[CustomerType]
                        ,Customer[Gender]
                        ,"Number of Customers", [Online Customer Count])

As you can see, a label must be added for the Measure. This applies to all calculated columns added to a query. This is the result of the query above:

Figure 7 – Result of the query with multiple columns and a Measure (Figure by the Author)

You can add as many columns and Measures as you need.

Adding filters
The function CALCULATE() is well known for adding filters to a Measure. For queries, we can use the CALCULATETABLE() function, which works like CALCULATE(), except that the first argument must be a table. Here is the same query as before, with the CustomerType filtered to include only "Person":

EVALUATE
CALCULATETABLE(
    SUMMARIZECOLUMNS('Customer'[CustomerType]
                        ,Customer[Gender]
                        ,"Number of Customers", [Online Customer Count])
                ,'Customer'[CustomerType] = "Person"
                )

Here is the result:

Figure 8 – Query and result to filter the CustomerType to "Person" (Figure by the Author)

It is possible to add filters directly to SUMMARIZECOLUMNS(); the queries generated by Power BI use this approach. But it is much more complicated than using CALCULATETABLE(). You can find examples of this approach on the DAX.guide page for SUMMARIZECOLUMNS(). Power BI uses it when building queries from visualisations, and you can get those queries from the Performance Analyzer in Power BI Desktop. You can read my piece about collecting performance data to learn how to use Performance Analyzer to get a query from a visual, or the Microsoft documentation linked below, which also explains this.
Defining Local Measures
From my point of view, this is one of the most powerful features of DAX queries: adding Measures local to the query. The DEFINE statement exists for this purpose. For example, we have the Online Customer Count Measure. Now, I want to add a filter to count only customers of the type "Person". I can modify the code in the data model, or I can first test the logic in a DAX query. The first step is to get the current code from the data model into the existing query. For this, I place the cursor on the first line of the query, ideally after adding an empty line. Now, I can use DAX Studio to extract the code of the Measure and add it to the query by right-clicking on the Measure and clicking on "Define Measure":

Figure 9 – Use the "Define Measure" feature of DAX Studio to extract the DAX code for a Measure (Figure by the Author)

The same feature is also available in Power BI Desktop. Next, I can change the DAX code of the Measure by adding the filter:

DEFINE
---- MODEL MEASURES BEGIN ----
MEASURE 'All Measures'[Online Customer Count] =
    CALCULATE(DISTINCTCOUNT('Online Sales'[CustomerKey])
                ,'Customer'[CustomerType] = "Person"
                )
---- MODEL MEASURES END ----

When executing the query, the local definition of the Measure is used instead of the DAX code stored in the data model:

Figure 10 – Query and results with the modified DAX code for the Measure (Figure by the Author)

Once the DAX code works as expected, you can take it and modify the Measure in Power BI Desktop. The DAX query view in Power BI Desktop has the advantage that you can right-click the modified code and apply it back to the data model directly; refer to the link in the References section below for instructions on how to do this. DAX Studio doesn't support this feature.

Putting the pieces together
OK, now let's put the pieces together and write the following query: I want to get the top 5 products ordered by customers. I take the query from above, change it to list the product names, and add a TOPN():

DEFINE
---- MODEL MEASURES BEGIN ----
MEASURE 'All Measures'[Online Customer Count] =
    CALCULATE(DISTINCTCOUNT('Online Sales'[CustomerKey])
                ,'Customer'[CustomerType] = "Person"
                )
---- MODEL MEASURES END ----

EVALUATE
    TOPN(5
        ,SUMMARIZECOLUMNS('Product'[ProductName]
                        ,"Number of Customers", [Online Customer Count]
                        )
        ,[Number of Customers]
        ,DESC)
    ORDER BY [Number of Customers]

Notice that I pass the Measure's label, "Number of Customers", instead of its name. I must do it this way because DAX replaces the Measure's name with the label; the query therefore has no information about the Measure and knows only the label. This is the result of the query:

Figure 11 – The query result using TOPN() combined with a Measure. Notice that the label is used instead of the Measure's name (Figure by the Author)

Conclusion
I often use queries in DAX Studio, as it makes data validation much easier. DAX Studio allows me to copy the result directly into the clipboard or write it to an Excel file without explicitly exporting the data. This is extremely useful when creating a result set and sending it to my client for validation. Moreover, I can modify a Measure without changing it in Power BI Desktop and quickly validate the result in a table: I can use a Measure from the data model, temporarily create a modified version, and validate the results side by side.
DAX queries have endless use cases and should be part of every Power BI developer's toolkit. I hope that I was able to show you something new and explain why knowing how to write DAX queries is important in a data model developer's daily life.

References
- Microsoft's documentation on applying changes from the DAX query view to the model: Update model with changes – DAX query view – Power BI | Microsoft Learn
- As in my previous articles, I use the Contoso sample dataset. You can download the ContosoRetailDW dataset for free from Microsoft here. The Contoso data can be freely used under the MIT License, as described in this document. I changed the dataset to shift the data to contemporary dates.
  • WWW.USINE-DIGITALE.FR
Meta accelerates on multimodal and agentic AI with a series of dedicated models
Meta is advancing down the path of artificial intelligence, or rather toward what it calls "Advanced Machine Intelligence" (AMI). Its laboratory...
  • WWW.USINE-DIGITALE.FR
Overland AI unveils Ultra, an autonomous off-road vehicle for military operations
Overland AI, a US startup specializing in the development of autonomous driving technologies for the defense sector and...
  • WWW.LEMONDE.FR
"Rejected Generation": the anxiety of rejection in the age of algorithms
Parcoursup, Mon master, recruitment sites, social networks, and even dating sites: Generation Z is confronted daily with platforms and algorithms. Their automated workings, and the absence of any reply, leave some with a feeling of rejection and abandonment. (Article reserved for subscribers.)

"You feel you don't exist for the platform, even though your future depends on it," says Shona (all those quoted preferred to remain anonymous), 21, bitterly, recalling her application to a psychology master's program on Mon master, the national platform that now governs admission to master's degrees. Like most of her classmates in the third year of a psychology bachelor's at the University of Nancy, she applied in March 2024 to about ten programs on the platform. In the days that followed, she learned by automated reply that she had been refused by seven of them, including her first choices. "At that moment, I cried in front of my screen. It was one of the biggest moments of anxiety of my life." Only weeks later did "the miracle" happen: the scholarship student was accepted into a master's program at the University of Nice, one of her last choices, 1,000 kilometers from home.

Like Shona, Generation Z, those born between the late 1990s and the early 2010s, has never known a world without the Internet, without Mon master (which again left tens of thousands of students without a master's placement in 2024), without Parcoursup, without HR recruitment platforms, and without dating apps. The fate of an entire generation inevitably passes through the sieve of algorithms that compute what they may aspire to. "The ghosted generation," as Business Insider put it on March 15. American journalist Delia Cai argued that Gen Z may well be the generation "that gets rejected the most in history."

(75.53% of this article remains to read; the rest is reserved for subscribers.)