• 400 Women Are Suing Pfizer Over Birth Control Shot That Allegedly Gave Them Brain Tumors

    Tumor Has ItJun 1, 10:00 AM EDT / by Noor Al-Sibai400 Women Are Suing Pfizer Over Birth Control Shot That Allegedly Gave Them Brain TumorsThe pharmaceutical giant allegedly knew about the risks... but didn't warn patients.Jun 1, 10:00 AM EDT / Noor Al-SibaiImage by Beata Zawrzel / NurPhoto via Getty / FuturismRx/MedicinesRecent research has linked Pfizer's widely-used Depo-Provera birth control shot to massively increased risk of developing brain tumors — and hundreds of women are suing the pharmaceutical giant over it.According to a press release filed on behalf of the roughly 400 plaintiffs in the class action suit, the lawsuit claims that Pfizer and other companies that made generic versions of the injectable contraceptive knew of the link between the shot and the dangerous tumors, but didn't properly warn users.The suit follows a study published by the British Medical Journal last year that found that people who took the progestin-based shot for a year or more were up to 5.6 times more likely to develop meningioma, a slow-building brain tumor that forms, per the Cleveland Clinic, on the meninges, or layers of tissue that covers the brain and spinal cord.Though Pfizer attached warning labels about meningioma to Depo-Provera sold in Canada in 2015 and the UK, Europe, and South Africa after the 2024 study was published, no such label was deployed in the United States — a failure which according to the lawsuit is "inconsistentglobal safety standards."In an interview with the website DrugWatch, one of the suit's plaintiffs, who was identified by the initials TC, said that she had been "told how great Depo-Provera was" and decided to start it after an unplanned pregnancy that occurred when she'd taken the since-discontinued birth control pill Ortho Tri-Cyclen Lo."I thought it would be more reliable and convenient since I wouldn’t have to take it daily," TC told the site, referencing the four annual injections Depo-Provera requires. "I had no idea it would lead to such serious health problems."After being on the contraceptive shot for three years — and experiencing intense headaches, months-long uterine bleeding, and weight gain — the woman finally consulted her doctor and was diagnosed with meningioma. She's since been undergoing treatment and experienced some relief, but even that experience has been "physically and emotionally draining" because she has to get regular MRIs to monitor the tumor, which likely isn't fatal but still greatly affects her quality of life."It’s a constant worry that the tumor might grow," TC said, "and the appointments feel never-ending."That fear was echoed by others who spoke to the Daily Mail about their meningioma diagnoses after taking Depo-Provera. Unlike TC, Andrea Faulks of Alabama hadn't been on the shots for years when she learned of her brain tumors, which caused her years of anguish.Faulks told the British website that she'd begun taking the medication back in 1993, the year after it was approved by the FDA in the United States. She stopped taking it only a few years later, but spent decades having splitting headaches and experiencing dizziness and tremors. After being dismissed by no fewer than six doctors, the woman finally got an MRI last summer and learned that she had a brain tumor — and is now undergoing radiation to shrink it after all this time."I know this is something I'm going to have to live with for the rest of my life, as long as I live," Faulks told the Daily Mail.Currently, the class action case against Pfizer on behalf of women like Faulks and TC is in its earliest stages as attorneys representing those hundreds of women with brain tumors start working to make them whole.Even if they receive adequate payouts, however, that money won't take away their suffering, or give them back the years of their life lost to tumors they should have been warned about.Share This ArticleImage by Beata Zawrzel / NurPhoto via Getty / FuturismRead This Next
    #women #are #suing #pfizer #over
    400 Women Are Suing Pfizer Over Birth Control Shot That Allegedly Gave Them Brain Tumors
    Tumor Has ItJun 1, 10:00 AM EDT / by Noor Al-Sibai400 Women Are Suing Pfizer Over Birth Control Shot That Allegedly Gave Them Brain TumorsThe pharmaceutical giant allegedly knew about the risks... but didn't warn patients.Jun 1, 10:00 AM EDT / Noor Al-SibaiImage by Beata Zawrzel / NurPhoto via Getty / FuturismRx/MedicinesRecent research has linked Pfizer's widely-used Depo-Provera birth control shot to massively increased risk of developing brain tumors — and hundreds of women are suing the pharmaceutical giant over it.According to a press release filed on behalf of the roughly 400 plaintiffs in the class action suit, the lawsuit claims that Pfizer and other companies that made generic versions of the injectable contraceptive knew of the link between the shot and the dangerous tumors, but didn't properly warn users.The suit follows a study published by the British Medical Journal last year that found that people who took the progestin-based shot for a year or more were up to 5.6 times more likely to develop meningioma, a slow-building brain tumor that forms, per the Cleveland Clinic, on the meninges, or layers of tissue that covers the brain and spinal cord.Though Pfizer attached warning labels about meningioma to Depo-Provera sold in Canada in 2015 and the UK, Europe, and South Africa after the 2024 study was published, no such label was deployed in the United States — a failure which according to the lawsuit is "inconsistentglobal safety standards."In an interview with the website DrugWatch, one of the suit's plaintiffs, who was identified by the initials TC, said that she had been "told how great Depo-Provera was" and decided to start it after an unplanned pregnancy that occurred when she'd taken the since-discontinued birth control pill Ortho Tri-Cyclen Lo."I thought it would be more reliable and convenient since I wouldn’t have to take it daily," TC told the site, referencing the four annual injections Depo-Provera requires. "I had no idea it would lead to such serious health problems."After being on the contraceptive shot for three years — and experiencing intense headaches, months-long uterine bleeding, and weight gain — the woman finally consulted her doctor and was diagnosed with meningioma. She's since been undergoing treatment and experienced some relief, but even that experience has been "physically and emotionally draining" because she has to get regular MRIs to monitor the tumor, which likely isn't fatal but still greatly affects her quality of life."It’s a constant worry that the tumor might grow," TC said, "and the appointments feel never-ending."That fear was echoed by others who spoke to the Daily Mail about their meningioma diagnoses after taking Depo-Provera. Unlike TC, Andrea Faulks of Alabama hadn't been on the shots for years when she learned of her brain tumors, which caused her years of anguish.Faulks told the British website that she'd begun taking the medication back in 1993, the year after it was approved by the FDA in the United States. She stopped taking it only a few years later, but spent decades having splitting headaches and experiencing dizziness and tremors. After being dismissed by no fewer than six doctors, the woman finally got an MRI last summer and learned that she had a brain tumor — and is now undergoing radiation to shrink it after all this time."I know this is something I'm going to have to live with for the rest of my life, as long as I live," Faulks told the Daily Mail.Currently, the class action case against Pfizer on behalf of women like Faulks and TC is in its earliest stages as attorneys representing those hundreds of women with brain tumors start working to make them whole.Even if they receive adequate payouts, however, that money won't take away their suffering, or give them back the years of their life lost to tumors they should have been warned about.Share This ArticleImage by Beata Zawrzel / NurPhoto via Getty / FuturismRead This Next #women #are #suing #pfizer #over
    FUTURISM.COM
    400 Women Are Suing Pfizer Over Birth Control Shot That Allegedly Gave Them Brain Tumors
    Tumor Has ItJun 1, 10:00 AM EDT / by Noor Al-Sibai400 Women Are Suing Pfizer Over Birth Control Shot That Allegedly Gave Them Brain TumorsThe pharmaceutical giant allegedly knew about the risks... but didn't warn patients.Jun 1, 10:00 AM EDT / Noor Al-SibaiImage by Beata Zawrzel / NurPhoto via Getty / FuturismRx/MedicinesRecent research has linked Pfizer's widely-used Depo-Provera birth control shot to massively increased risk of developing brain tumors — and hundreds of women are suing the pharmaceutical giant over it.According to a press release filed on behalf of the roughly 400 plaintiffs in the class action suit, the lawsuit claims that Pfizer and other companies that made generic versions of the injectable contraceptive knew of the link between the shot and the dangerous tumors, but didn't properly warn users.The suit follows a study published by the British Medical Journal last year that found that people who took the progestin-based shot for a year or more were up to 5.6 times more likely to develop meningioma, a slow-building brain tumor that forms, per the Cleveland Clinic, on the meninges, or layers of tissue that covers the brain and spinal cord.Though Pfizer attached warning labels about meningioma to Depo-Provera sold in Canada in 2015 and the UK, Europe, and South Africa after the 2024 study was published, no such label was deployed in the United States — a failure which according to the lawsuit is "inconsistent [with] global safety standards."In an interview with the website DrugWatch, one of the suit's plaintiffs, who was identified by the initials TC, said that she had been "told how great Depo-Provera was" and decided to start it after an unplanned pregnancy that occurred when she'd taken the since-discontinued birth control pill Ortho Tri-Cyclen Lo."I thought it would be more reliable and convenient since I wouldn’t have to take it daily," TC told the site, referencing the four annual injections Depo-Provera requires. "I had no idea it would lead to such serious health problems."After being on the contraceptive shot for three years — and experiencing intense headaches, months-long uterine bleeding, and weight gain — the woman finally consulted her doctor and was diagnosed with meningioma. She's since been undergoing treatment and experienced some relief, but even that experience has been "physically and emotionally draining" because she has to get regular MRIs to monitor the tumor, which likely isn't fatal but still greatly affects her quality of life."It’s a constant worry that the tumor might grow," TC said, "and the appointments feel never-ending."That fear was echoed by others who spoke to the Daily Mail about their meningioma diagnoses after taking Depo-Provera. Unlike TC, Andrea Faulks of Alabama hadn't been on the shots for years when she learned of her brain tumors, which caused her years of anguish.Faulks told the British website that she'd begun taking the medication back in 1993, the year after it was approved by the FDA in the United States. She stopped taking it only a few years later, but spent decades having splitting headaches and experiencing dizziness and tremors. After being dismissed by no fewer than six doctors, the woman finally got an MRI last summer and learned that she had a brain tumor — and is now undergoing radiation to shrink it after all this time."I know this is something I'm going to have to live with for the rest of my life, as long as I live," Faulks told the Daily Mail.Currently, the class action case against Pfizer on behalf of women like Faulks and TC is in its earliest stages as attorneys representing those hundreds of women with brain tumors start working to make them whole.Even if they receive adequate payouts, however, that money won't take away their suffering, or give them back the years of their life lost to tumors they should have been warned about.Share This ArticleImage by Beata Zawrzel / NurPhoto via Getty / FuturismRead This Next
    0 Σχόλια 0 Μοιράστηκε
  • Excel for Microsoft 365 cheat sheet

    Windows may get all the attention, but when you want to get real work done, you turn to the applications that run on it. And if you use spreadsheets, that generally means Excel.

    Excel is, of course, part of Microsoft’s Office suite of productivity tools. Microsoft sells Office under two models: Individuals and businesses can pay for the software license up front and own it forever, or they can purchase a Microsoft 365 subscription, which means they have access to the software for only as long as they keep paying the subscription fee.

    When you purchase a perpetual version of the suite — say, Office 2021 or Office 2024 — its applications will never get new features, whereas Microsoft 365 apps are continually updated with new features. For more details, see our in-depth comparison of the two Office models.

    This cheat sheet gets you up to speed on the features that have been introduced or changed in Microsoft 365’s Excel for Windows desktop client over the past few years.We’ll periodically update this story as new features roll out.

    In this article

    Use the Ribbon

    Search to get tasks done quickly

    Explore Excel’s advanced chart types

    Collaborate in real time

    Take advantage of linked data

    Make your own custom views of a worksheet

    Create dynamic arrays and charts

    Use AutoSave to provide a safety net as you work

    Review or restore earlier versions of a spreadsheet

    Try out Microsoft 365 Copilot in Excel — but don’t expect too much

    Other new features to check out

    Use keyboard shortcuts

    Use the Ribbon

    The Ribbon interface, which puts commonly used commands in a tabbed toolbar running across the top of the application window, is alive and well in the current version of Excel. Microsoft has tweaked the Ribbon’s looks numerous times over the years, but it still works the same way it always has: just click one of the Ribbon’s tabs to see related commands on the toolbar. For example, click Insert to find buttons for inserting tables, PivotTables, charts, and more.

    Through the years, Excel’s Ribbon has gotten a variety of cosmetic changes, but it still works largely the way it always has.
    Preston Gralla / Foundry

    Just as in previous versions of Excel, if you want the Ribbon commands to go away, press Ctrl-F1 or click the name of the tab you’re currently on.To make the commands reappear, press Ctrl-F1 again or click any tab name.

    You’ve got other options for displaying the Ribbon as well. To get to them, click the Ribbon display options iconon the bottom of the Ribbon at the far right, just below the Share button. A drop-down menu appears with these four options:

    Full-screen mode: This makes Excel take up your entire screen and hides the Ribbon. To get out of full-screen mode, click the three-dot icon at the upper right of the screen.

    Show tabs only: This shows the tabs but hides the commands underneath them. It’s the same as pressing Ctrl-F1. To display the commands underneath the tabs when they’re hidden, press Ctrl-F1, click a tab, or click the Ribbon display options down arrow and select Always show Ribbon.

    Always show Ribbon: This displays the entire Ribbon, both the tabs and commands underneath them.

    Show/Hide Quick Access toolbar: This displays or hides the Quick Access toolbar, which gives you fast access to Excel commands you want to have available no matter which tab you’re on. When you enable the toolbar, it starts off empty. To populate it, click a small down arrow that appears at the right of the toolbar and from the drop-down menu that appears, choose which features to put on it. If you don’t see a command you want, click More Commands. Find the command you want on the left and click Add.

    You can have the toolbar appear either at the top of the screen, just to the right of the AutoSave button, or just underneath the Ribbon. To move it from one place to another, click a small down arrow that appears at the right of the toolbar and from the drop-down menu that appears, select either Show below the Ribbon or Show above the Ribbon. 

    Microsoft has for many years teased a simplified version of the Ribbon that hides most of the commands to reduce clutter. That simplified Ribbon is available in the Excel web app, but there’s currently no sign that it will appear in the Excel desktop app.

    There’s a useful feature in what Microsoft calls the backstage area that appears when you click the File tab on the Ribbon. If you click Open or a Copy from the menu on the left, you can see the cloud-based services you’ve connected to your Office account, such as SharePoint and OneDrive. Each location displays its associated email address underneath it. This is quite helpful if you use a cloud service with more than one account, such as if you have one OneDrive account for personal use and another one for business. You’ll be able to see at a glance which is which.

    Click the Add a service dropdown to add another cloud storage account.
    Preston Gralla / Foundry

    Search to get tasks done quickly

    Excel has never been the most user-friendly of applications, and it has so many powerful features it can be tough to keep track of them all. That’s where the handy Search feature comes in.

    To use it, click in the Search box — it’s above the Ribbon in the green title area.Then type in a task you want to do. If you want to summarize your spreadsheet data using a PivotTable, for example, type in something like summarize with pivot table. You’ll get a menu showing potential matches for the task. In this instance, the top result is a direct link to the form for summarizing with a PivotTable — select it and you’ll start your task right away, without having to go to the Ribbon’s Insert tab first.

    The search box makes it easy to perform just about any task in Excel.
    Preston Gralla / Foundry

    If you’d like more information about your task, the final items that appear in the menu let you select from related Help topics.

    Even if you consider yourself a spreadsheet jockey, it’s worth your while to try out the enhanced search function. It’s a big time-saver, and far more efficient than hunting through the Ribbon to find a command.

    Also useful is that it remembers the features you’ve previously clicked on in the box, so when you click in it, you first see a list of previous tasks you’ve searched for. That makes sure that tasks that you frequently perform are always within easy reach. And it puts tasks you rarely do within easy reach as well.

    Users of enterprise and education editions of Microsoft 365 can also use the Search box to find people in their organization, SharePoint resources, and other personalized results from within Excel.Explore Excel’s advanced chart types

    Charts are great for visualizing and presenting spreadsheet data, and for gaining insights from it. To that end, Microsoft has introduced a number of advanced chart types over the past several years, including most notably a histogram, a “waterfall” that’s effective at showing running financial totals, and a hierarchical treemap that helps you find patterns in data.

    Note that the new charts are available only if you’re working in an .xlsx document. If you use the older .xls format, you won’t find them.

    To see all the charts, put your cursor in a cell or group of cells that contains data, select Insert > Recommended Charts and click the All Charts tab. You’ll find the newer charts, mixed in with the older ones. Select any to create the chart.Excel includes several advanced chart types, including waterfall.
    Preston Gralla / Foundry

    These are the new chart types:

    Treemap. This chart type creates a hierarchical view of your data, with top-level categoriesshown as rectangles, and with subcategoriesshown as smaller rectangles grouped inside the larger ones. Thus, you can easily compare the sizes of top-level categories and subcategories in a single view. For instance, a bookstore can see at a glance that it brings in more revenue from 1st Readers, a subcategory of Children’s Books, than for the entire Non-fiction top-level category.

    srcset=" 830w, 300w, 768w, 264w, 132w, 753w, 565w, 392w" width="830" height="529" sizes="100vw, 830px">A treemap chart lets you easily compare top-level categories and subcategories in a single view.
    Preston Gralla / Foundry

    Sunburst. This chart type also displays hierarchical data, but in a multi-level pie chart. Each level of the hierarchy is represented by a circle. The innermost circle contains the top-level categories, the next circle out shows subcategories, the circle after that subsubcategories and so on.

    Sunbursts are best for showing the relationships among categories and subcategories, while treemaps are better at showing the relative sizes of categories and subcategories.

    A sunburst chart shows hierarchical data such as book categories and subcategories as a multi-level pie chart.
    Preston Gralla / Foundry

    Waterfall. This chart type is well-suited for visualizing financial statements. It displays a running total of the positive and negative contributions toward a final net value.

    A waterfall chart shows a running total of positive and negative contributions, such as revenue and expenses, toward a final net value.
    Preston Gralla / Foundry

    Histogram. This kind of chart shows frequencies within a data set. It could, for example, show the number of books sold in specific price ranges in a bookstore.

    Histograms are good for showing frequencies, such as number of books sold at various price points.
    Preston Gralla / Foundry

    Pareto. This chart, also known as a sorted histogram, contains bars as well as a line graph. Values are represented in descending order by bars. The cumulative total percentage of each bar is represented by a rising line. In the bookstore example, each bar could show a reason for a book being returned. The chart would show, at a glance, the primary reasons for returns, so a bookstore owner could focus on those issues.

    Note that the Pareto chart does not show up when you select Insert > Recommended Charts > All Charts. To use it, first select the data you want to chart, then select Insert > Insert Statistic Chart, and under Histogram, choose Pareto.

    In a Pareto chart, or sorted histogram, a rising line represents the cumulative total percentage of the items being measured. In this example, it’s easy to see that more than 80% of a bookstore’s returns are attributable to three problems.
    Preston Gralla / Foundry

    Box & Whisker. This chart, like a histogram, shows frequencies within a data set but provides for a deeper analysis than a histogram. For example, in a bookstore it could show the distribution of prices of different genres of books. In the example shown here, each “box” represents the first to third quartile of prices for books in that genre, while the “whiskers”show the upper and lower range of prices. Outliers that are priced outside the whiskers are shown as dots, the median price for each genre is shown with a horizontal line across the box, and the mean price is shown with an x.

    Box & Whisker charts can show details about data ranges such as the first to third quartile in the “boxes,” median and mean inside the boxes, upper and lower range with the “whiskers,” and outliers with dots.Preston Gralla / Foundry

    Funnel. This chart type is useful when you want to display values at multiple stages in a process. A funnel chart can show the number of sales prospects at every stage of a sales process, for example, with prospects at the top for the first stage, qualified prospects underneath it for the second stage, and so on, until you get to the final stage, closed sales. Generally, the values in funnel charts decrease with each stage, so the bars in the chart look like a funnel.

    Funnel charts let you display values at multiple stages in a process.
    Preston Gralla / Foundry

    When creating the data for a funnel chart, use one column for the stages in the process you’re charting, and a second column for the values for each stage. Once you’ve done that, to create the chart, select the data, then select Insert > Recommended Charts > All Charts > Funnel.

    Map. Map charts do exactly what you think they should: They let you compare data across different geographical regions, such as countries, regions, states, counties, or postal codes. Excel will automatically recognize the regions and create a map that visualizes the data.

    You can compare data across different locations with a map chart.
    Preston Gralla / Foundry

    To create a map chart, select the data you want to chart, then select Insert > Maps, then select the map chart. Note that in some instances, Excel might have a problem creating the map — for example, if there are multiple locations with the same name as one that you’re mapping. If that occurs, you’ll have to add one or more columns with details about the locations. If, say, you’re charting towns in the United Kingdom, you would have to include columns for the county and country each town is located in.

    Collaborate in real time

    For those who frequently collaborate with others, a welcome feature in Excel for Microsoft 365 is real-time collaboration that lets people work on spreadsheets together from anywhere in the world with an internet connection. Microsoft calls this “co-authoring.”

    Note that in order to use co-authoring, the spreadsheet must be stored in OneDrive, OneDrive for Business, or SharePoint Online, and you must be logged into your Microsoft 365 account. Also, co-authoring works in Excel only if you have AutoSave turned on. To do it, choose the On option on the AutoSave slider at the top left of the screen.

    To share a spreadsheet so you can collaborate on it with others: first open it, then click the Share button on the upper-right of the Excel screen. The “Send link” window pops up. Here you can send an email with a link where others can access the spreadsheet.

    Use the “Send link” pane to share a document and the “Link settings” pane to fine-tune its access permissions.
    Preston Gralla / Foundry

    Enter the email address of the person with whom you want to share in the text box. Enter multiple addresses, separated by commas, if you want to share the workbook with multiple people.

    One feature I found particularly useful when adding email addresses: As you type, Excel looks through your corporate or personal address book and lists the names and addresses of contacts who match the text you’ve input. Click the address you want to add. This not only saves you a bit of time but helps make sure you don’t incorrectly type in addresses.

    Next, decide whether anyone with the link can access the file, or only those whose email addresses you enter. If you see the text “Anyone with the link can edit” near the top of the pane, you can change that by clicking it, then choosing Specific people on the screen that appears. Similarly, if “Specific people” appears above the email addresses, you can change that by clicking it, then choosing Anyone with the link can edit from the screen that appears.On this second screen you can also set the document to read-only for everybody, or allow everybody to edit it. In the “Other settings” section, click the down arrow and choose either Can edit, which allows full editing, or Can view, which is read-only. If you want to give certain people editing privileges and others view-only privileges, you can send two separate invitations with different rights selected.

    On this screen you can also set an expiration date after which people won’t be able to access the file, and you can set a password so that only people who have the password can access it. When you’ve made your selections, click Apply.

    Back in the main “Send link” screen, you can send a message along with the link by typing it into the Message box. Then click Send. An email is sent to all the recipients with a link they can click to open the document.

    Your collaborators will get an email like this when you share a spreadsheet.
    Preston Gralla / FoundryThere’s another way to share a file stored in a personal OneDrive for collaboration: In the “Copy link” area at the bottom of the “Send link” pane, click Copy. When you do that, you can copy the link and send it to someone yourself via email. Note that you have the same options for setting access and editing permissions as you do if you have Excel send the link directly for you. Just click Anyone with the link can edit or Specific people below “Copy link,” and follow the instructions above.

    To begin collaborating: When your recipients receive the email and click to open the spreadsheet, they’ll open it in the web version of Excel in a browser, not in the desktop version of Excel. If you’ve granted them edit permissions, they can begin editing immediately in the browser or else click Editing > Open in Desktop App on the upper right of the screen to work in the Excel desktop client. Excel for the web is less powerful and polished than the desktop client, but it works well enough for real-time collaboration.

    As soon as any collaborators open the file, you’ll see a colored cursor that indicates their presence in the file. Each person collaborating gets a different color. Hover your cursor over a colored cell that indicates someone’s presence, and you’ll see their name. Once they begin editing the workbook, such as entering data or a formula into a cell, creating a chart, and so on, you see the changes they make in real time. Your cursor also shows up on their screen as a color, and they see the changes you make.

    You can easily see where collaborators are working in a shared worksheet.
    Preston Gralla / Foundry

    Collaboration includes the ability to make comments in a file, inside individual cells, without actually changing the contents of the cell. To do it, right-click a cell, select New Comment and type in your comment. Everyone collaborating can see that a cell has a comment in it — it’s indicated by a small colored notch appearing in the upper right of the cell. The color matches the person’s collaboration color.

    To see someone’s comment in a cell, hover your cursor over the cell or put your cursor in the cell and you’ll see the comment, the name of the person who made the comment, and a Reply box you can use to send a reply. You can also click the Comments button on the upper right of the screen to open the Comments pane, which lists every comment by every person. Click any comment to jump to the cell it’s in. You can also reply when you click a comment in the pane.

    You can make see comments that other people make, and make comments yourself.
    Preston Gralla / Foundry

    Take advantage of linked data

    Excel for Microsoft 365 has a feature that Microsoft calls “linked data types.” Essentially, they’re cells that are connected to an online sourcethat automatically updates their information — for example, a company’s current stock price. As I write this, there are nearly approximately 100 linked data types, including not just obvious data types such as stocks, geography, and currencies, but many others, including chemistry, cities, anatomy, food, yoga, and more.

    To use them, type the items you want to track into cells in a single column. For stocks, for example, you can type in a series of stock ticker symbols, company names, fund names, etc. After that, select the cells, then on the Ribbon’s Data tab, select Stocks in the Data Types section in the middle.Excel automatically converts the text in each cell into the matching data source — in our example, into the company name and stock ticker.

    Excel also adds a small icon to the left edge of each cell identifying it as a linked cell. Click any icon and a data card will pop up showing all sorts of information about the kind of information you’ve typed in.  For instance, a stock data card shows stock-related information such as current price, today’s high and low, and 52-week high and low, as well as general company information including industry and number of employees. A location card shows the location’s population, capital, GDP, and so on.

    You can build out a table using data from the data card. To do so, select the cells again, and an Insert Data button appears. Click the button, then select the information you want to appear, such as Price for the current stock price, or Population for the population of a geographic region.

    srcset=" 620w, 300w, 172w, 86w, 491w, 368w, 256w" width="620" height="606" sizes="100vw, 620px">Linked data types let you insert information, such as a company’s high and low stock prices, that is continually updated.
    Preston Gralla / Foundry

    Excel will automatically add a column to the right populated with the latest information for each item you’re tracking, and will keep it updated. You can click the Insert Data button multiple times to keep adding columns to the right for different types of data from the item’s data card.  It’s helpful to add column headers so you know what each column is showing.

    Make your own custom views of a worksheet

    Sheet Views let you make a copy of a sheet and then apply filtered or sorted views of the data to the new sheet. It’s useful when you’re working with other people on a spreadsheet, and someone wants to create a customized view without altering the original sheet. You can all create multiple custom-filtered/sorted views for a sheet. Once you’ve saved a sheet view, anyone with access to the spreadsheet can see it.

    Note: To use this feature, your spreadsheet must be stored in OneDrive.

    Sheet views work best when your data is in table format. Select the data, then go to the Ribbon toolbar and click the Insert tab. Near the left end of the Insert toolbar, click the Table button and then OK.

    To create a new sheet view, click the Ribbon’s View tab, then click the New button in the Sheet View area at the far left. The row numbers and column letters at the left and top of your spreadsheet turn black to let you know you’re in a new sheet view. In the Sheet View area of the Ribbon, it says Temporary View, the default name given to a new sheet view before you’ve saved it.

    Here’s a sheet view with data sorted from highest to lowest costs.
    Preston Gralla / Foundry

    Now apply whatever sorting and filtering you like to the data.To save this view, click the Keep button in the Sheet View area of the Ribbon. When you do that, it is saved as “View1” by default. You can click View1 and type in a more meaningful name for the view. When you click Exit on this toolbar, you return to your spreadsheet, and the row numbers and columns on the left and top of the spreadsheet are no longer black.

    To switch from one sheet view to another, click the View tab. At the left of the Ribbon toolbar, click the down arrow next to the name of the current viewto open a dropdown list of the sheet views created for the spreadsheet. Click the name of a sheet view to switch to it. Whenever you’re looking at a sheet view, the row numbers and column letters framing your spreadsheet remain black to indicate that you’re in a sheet view, not the original spreadsheet.

    Create dynamic arrays and charts

    Dynamic arrays let you write formulas that return multiple values based on your data. When data on the spreadsheet is updated, the dynamic arrays automatically update and resize themselves.

    To create a dynamic array, first create a table as outlined in the previous tip. Make sure to include a column that lists categories. Also put in at least one column to its right that lists corresponding values. Put a header at the top of each column.

    So, for example, if you’re creating a spreadsheet for a business trip budget, Column A might list expenses, such as plane tickets, meals, hotel, etc., and Column B could list each item’s cost on the same row.

    Once you’ve set up the table, use a dynamic array function on it, such as FILTER, SORT, or UNIQUE to create a dynamic array next to the table. Here’s an example of a formula for using the FILTER function:

    =FILTERThis tells Excel to show only the items that cost less than in the array.

    The FILTER function created a data array showing only the items with costs below Preston Gralla / Foundry

    Now, whenever the data in your source table changes, the dynamic array updates and resizes itself to accommodate the changes. That means the dynamic array is always up to date. So in our example, if you add new items with values under to the table, the dynamic array will enlarge itself and include those new items.

    In the same way, you can use the SORT function to sort data and the UNIQUE function to remove duplicate data.You create a dynamic chart from the dynamic array in the same way you do any other Excel chart. Select the cells from the dynamic array that you want to chart, then select the Insert tab and select the type of chart you want to add. When the source data changes in a way that affects the dynamic array that the chart is based on, both the dynamic array and the chart will be updated.

    Use AutoSave to provide a safety net as you work

    If you’re worried that you’ll lose your work on a worksheet because you don’t constantly save it, you’ll welcome the AutoSave feature. It automatically saves your files for you, so you won’t have to worry about system crashes, power outages, Excel crashes and similar problems. It only works only on documents stored in OneDrive, OneDrive for Business, or SharePoint Online. It won’t work with files saved in the older .xls format or files you save to your hard drive.

    AutoSave is a vast improvement over the previous AutoRecover feature built into Excel. AutoRecover doesn’t save your files in real time; instead, every several minutes it saves an AutoRecover file that you can try to recover after a crash. It doesn’t always work, though — for example, if you don’t properly open Excel after the crash, or if the crash doesn’t meet Microsoft’s definition of a crash. In addition, Microsoft notes, “AutoRecover is only effective for unplanned disruptions, such as a power outage or a crash. AutoRecover files are not designed to be saved when a logoff is scheduled or an orderly shutdown occurs.” And the files aren’t saved in real time, so you’ll likely lose several minutes of work even if all goes as planned.

    AutoSave is turned on by default in Excel for Microsoft 365 .xlsx workbooks stored in OneDrive, OneDrive for Business, or SharePoint Online. To turn it offfor a workbook, use the AutoSave slider on the top left of the screen. If you want AutoSave to be off for all files by default, select File > Options > and uncheck the box marked AutoSave files stored in the Cloud by default on Excel.

    Using AutoSave may require some rethinking of your workflow. Many people are used to creating new worksheets based on existing ones by opening the existing file, making changes to it, and then using As to save the new version under a different name, leaving the original file intact. Be warned that doing this with AutoSave enabled will save your changes in the original file. Instead, Microsoft suggests opening the original file and immediately selecting File > a Copyto create a new version.

    If AutoSave does save unwanted changes to a file, you can always use the Version History feature described below to roll back to an earlier version.

    Review or restore earlier versions of a spreadsheet

    There’s an extremely useful feature hiding in the title bar in Excel for Microsoft 365: You can use Version History to go back to previous versions of a file, review them, compare them side-by-side with your existing version, and copy and paste from an older file to your existing one. You can also restore an entire old version.

    To do it, click the file name at the top of the screen in an open file. A drop-down menu appears. Click Version History, and the Version History pane appears on the right side of the screen with a list of the previous versions of the file, including the time and date they were saved.Use Version History to see all previous versions of a spreadsheet, copy and paste from an older file to your existing one, or restore an entire old version.
    Preston Gralla / Foundry

    In the Version History pane, click Open version under any older version, and that version appears as a read-only version in a new window. Scroll through the version and copy any content you want, then paste it into the latest version of the file. To restore the old version, overwriting the current one, click the Restore button.

    Try out Microsoft 365 Copilot in Excel — but don’t expect too much

    For an additional subscription fee, business users of Excel can use Microsoft’s genAI add-in, Microsoft 365 Copilot. You can have Copilot suggest and create charts, create formulas, mine spreadsheets for data insights you might have missed, and more. If you have a Microsoft 365 Personal or Family subscription, many of those features are now bundled with your core subscription.

    To start using Copilot in Excel, open a spreadsheet and click the Copilot button at the right of the Ribbon’s Home tab. The Copilot panel will appear on the right, offering suggestions for actions it can perform, such as summarizing your data with a chart, adding formulas to the spreadsheet, or applying conditional formatting to the sheet. You can also chat with Copilot in the panel, asking questions about your data or how to perform an action yourself.

    Note that these suggestions are generic and won’t always make sense. For example, when you start with a blank worksheet and click the Copilot button, its suggestions include summarizing data using pivot tables or charts, even though there’s no data to chart or put into a table.

    Microsoft 365 Copilot can help you in multiple ways in Excel, including creating formulas and charts, mining spreadsheets for insights, and more.
    Preston Gralla / Foundry

    In my testing, I found that Copilot wasn’t particularly helpful. For example, when I asked it to summarize data using a PivotTable or chart, several times it responded, “Something went wrong. Please try again in a moment.” Then it said that I first needed to reformat parts of my spreadsheet by using the Transformfunction, and gave confusing advice on how I could do it — it wouldn’t do the task itself.When I asked it to suggest conditional formatting for my spreadsheet, which would highlight important data, it told me which data I should highlight but didn’t explain why the data was important. It also didn’t do the highlighting for me or tell me how to do it.

    I gave it one more try and asked it to perform an advanced analysis, which it would use Python to do. It certainly did something, although it was unclear what it was. It overwrote my original spreadsheet and added a section that claimed to show annual growth rates for revenue streams. But the data seemed to be incorrect.

    Perhaps advanced spreadsheet jockeys might be able to make sense of what Copilot is up to whenever they ask it for help. But mere mortal businesspeople may find it of no help at all.

    In my testing, I found Copilot not at all helpful, although spreadsheet jockeys may be able to make some sense of what it does.
    Preston Gralla / Foundry

    What’s more, Microsoft’s focus on Copilot in M365 has reduced the usefulness of Excel in some ways. For example, there used to be a handy feature called Smart Lookup that let you conduct targeted web searches from inside Excel. But at the beginning of 2025, Microsoft removed Smart Lookup from Excel, saying that the feature has been deprecated.

    Now the only way to search the web from inside Excel is via Copilot, which lacks some features of Smart Lookup — notably the ability to highlight words or phrases in a document and trigger an automatic web search. And M365 Copilot isn’t available to business customers unless they pay the additional subscription fee.

    Other features to check out

    Spreadsheet pros will be pleased with several other features and tools that have been added to Excel for Microsoft 365 over the past few years, from a quick data analysis tool to an advanced 3D mapping platform.

    Get an instant data analysis

    If you’re looking to analyze data in a spreadsheet, the Quick Analysis tool will help. Highlight the cells you want to analyze, then move your cursor to the lower right-hand corner of what you’ve highlighted. A small icon of a spreadsheet with a lightning bolt on it appears. Click it and you’ll get a variety of tools for performing instant analysis of your data. For example, you can use the tool to highlight the cells with a value greater than a specific number, get the numerical average for the selected cells, or create a chart on the fly.

    The Quick Analysis feature gives you a variety of tools for analyzing your data instantly.
    Preston Gralla / Foundry

    Translate text

    You can translate text from right within Excel. Highlight the cell whose text you want translated, then select Review > Translate. A Translator pane opens on the right. Excel will detect the words’ language at the top of the pane; you then select the language you want it translated to below. If Excel can’t detect the language of the text you chose or detects it incorrectly, you can override it.

    Easily find worksheets that have been shared with you

    It’s easy to forget which worksheets others have shared with you. In Excel for Microsoft 365 there’s an easy way to find them: Select File > Open > Shared with Me to see a list of them all. Note that this only works with OneDriveand SharePoint Online. You’ll also need to be signed into you Microsoft or work or school account.

    Predict the future with Forecast Sheet

    Using the Forecast Sheet function, you can generate forecasts built on historical data. If, for example, you have a worksheet showing past book sales by date, Forecast Sheet can predict future sales based on past ones.

    To use the feature, you must be working in a worksheet that has time-based historical data. Put your cursor in one of the data cells, go to the Data tab on the Ribbon and select Forecast Sheet from the Forecast group toward the right. On the screen that appears, you can select various options such as whether to create a line or bar chart and what date the forecast should end. Click the Create button, and a new worksheet will appear showing your historical and predicted data and the forecast chart.The Forecast Sheet feature can predict future results based on historical data.
    Preston Gralla / Foundry

    Manage data for analysis with Get & Transform

    This feature is not entirely new to Excel. Formerly known as Power Query, it was made available as a free add-in to Excel 2013 and worked only with the PowerPivot features in Excel Professional Plus. Microsoft’s Power BI business intelligence software offers similar functionality.

    Now called Get & Transform, it’s a business intelligence tool that lets you pull in, combine, and shape data from wide variety of local and cloud sources. These include Excel workbooks, CSV files, SQL Server and other databases, Azure, Active Directory, and many others. You can also use data from public sources including Wikipedia.

    Get & Transform helps you pull in and shape data from a wide variety of sources.
    Preston Gralla / Foundry

    You’ll find the Get & Transform tools together in a group on the Data tab in the Ribbon. For more about using these tools, see Microsoft’s “Getting Started with Get & Transform in Excel.”

    Make a 3D map

    Before Excel 2016, Power Map was a popular free 3D geospatial visualization add-in for Excel. Now it’s free, built into Excel for Microsoft 365, and has been renamed 3D Maps. With it, you can plot geographic and other information on a 3D globe or map. You’ll need to first have data suitable for mapping, and then prepare that data for 3D Maps.

    Those steps are beyond the scope of this article, but here’s advice from Microsoft about how to get and prepare data for 3D Maps. Once you have properly prepared data, open the spreadsheet and select Insert > 3D Map > Open 3D Maps. Then click Enable from the box that appears. That turns on the 3D Maps feature. For details on how to work with your data and customize your map, head to the Microsoft tutorial “Get started with 3D Maps.”

    If you don’t have data for mapping but just want to see firsthand what a 3D map is like, you can download sample data created by Microsoft. The screenshot shown here is from Microsoft’s Dallas Utilities Seasonal Electricity Consumption Simulation demo. When you’ve downloaded the workbook, open it up, select Insert > 3D Map > Open 3D Maps and click the map to launch it.

    With 3D Maps you can plot geospatial data in an interactive 3D map.
    Preston Gralla / Foundry

    Automate tasks

    If you have OneDrive for Business and use Excel with a commercial or educational Microsoft 365 license, you can automate tasks with the Automate tab. You’ll be able to create and edit scripts with the Code Editor, run automated tasks with a button click, and share the script with co-workers. See Microsoft’s “Office Scripts in Excel” documentation for details.

    Insert data from a picture into Excel

    There are times you may find data inside an image file that you’d like to get into Excel. Typically, you’ll have to input the data from it manually. There’s now a way to have Excel convert the information on the image into data for a worksheet.

    In the Get & Transform Data group on the Data tab, click the From Picture dropdown and select Picture From File to choose the image you want to grab data from, or Picture from Clipboard to take a screenshot of an image on your PC and then import the data. For more details, see Microsoft’s “Insert data from picture” support page.  

    Use keyboard shortcuts

    Here’s one last productivity tip: If you memorize a handful of keyboard shortcuts for common tasks in Excel, you can save a great deal of time over hunting for the right command to click on. See “Handy Excel keyboard shortcuts for Windows and Mac” for our favorites.

    This article was originally published in August 2019 and most recently updated in May 2025.

    More Excel tutorials:

    Excel basics: Get started with tables

    Excel basics: Get started with charts and sparklines

    How to use PivotTables and PivotCharts in Excel

    How to use slicers in Excel

    How to use Excel formulas and functions

    Howto use conditional formatting in Excel

    How to use Excel macros to save time and automate your work
    #excel #microsoft #cheat #sheet
    Excel for Microsoft 365 cheat sheet
    Windows may get all the attention, but when you want to get real work done, you turn to the applications that run on it. And if you use spreadsheets, that generally means Excel. Excel is, of course, part of Microsoft’s Office suite of productivity tools. Microsoft sells Office under two models: Individuals and businesses can pay for the software license up front and own it forever, or they can purchase a Microsoft 365 subscription, which means they have access to the software for only as long as they keep paying the subscription fee. When you purchase a perpetual version of the suite — say, Office 2021 or Office 2024 — its applications will never get new features, whereas Microsoft 365 apps are continually updated with new features. For more details, see our in-depth comparison of the two Office models. This cheat sheet gets you up to speed on the features that have been introduced or changed in Microsoft 365’s Excel for Windows desktop client over the past few years.We’ll periodically update this story as new features roll out. In this article Use the Ribbon Search to get tasks done quickly Explore Excel’s advanced chart types Collaborate in real time Take advantage of linked data Make your own custom views of a worksheet Create dynamic arrays and charts Use AutoSave to provide a safety net as you work Review or restore earlier versions of a spreadsheet Try out Microsoft 365 Copilot in Excel — but don’t expect too much Other new features to check out Use keyboard shortcuts Use the Ribbon The Ribbon interface, which puts commonly used commands in a tabbed toolbar running across the top of the application window, is alive and well in the current version of Excel. Microsoft has tweaked the Ribbon’s looks numerous times over the years, but it still works the same way it always has: just click one of the Ribbon’s tabs to see related commands on the toolbar. For example, click Insert to find buttons for inserting tables, PivotTables, charts, and more. Through the years, Excel’s Ribbon has gotten a variety of cosmetic changes, but it still works largely the way it always has. Preston Gralla / Foundry Just as in previous versions of Excel, if you want the Ribbon commands to go away, press Ctrl-F1 or click the name of the tab you’re currently on.To make the commands reappear, press Ctrl-F1 again or click any tab name. You’ve got other options for displaying the Ribbon as well. To get to them, click the Ribbon display options iconon the bottom of the Ribbon at the far right, just below the Share button. A drop-down menu appears with these four options: Full-screen mode: This makes Excel take up your entire screen and hides the Ribbon. To get out of full-screen mode, click the three-dot icon at the upper right of the screen. Show tabs only: This shows the tabs but hides the commands underneath them. It’s the same as pressing Ctrl-F1. To display the commands underneath the tabs when they’re hidden, press Ctrl-F1, click a tab, or click the Ribbon display options down arrow and select Always show Ribbon. Always show Ribbon: This displays the entire Ribbon, both the tabs and commands underneath them. Show/Hide Quick Access toolbar: This displays or hides the Quick Access toolbar, which gives you fast access to Excel commands you want to have available no matter which tab you’re on. When you enable the toolbar, it starts off empty. To populate it, click a small down arrow that appears at the right of the toolbar and from the drop-down menu that appears, choose which features to put on it. If you don’t see a command you want, click More Commands. Find the command you want on the left and click Add. You can have the toolbar appear either at the top of the screen, just to the right of the AutoSave button, or just underneath the Ribbon. To move it from one place to another, click a small down arrow that appears at the right of the toolbar and from the drop-down menu that appears, select either Show below the Ribbon or Show above the Ribbon.  Microsoft has for many years teased a simplified version of the Ribbon that hides most of the commands to reduce clutter. That simplified Ribbon is available in the Excel web app, but there’s currently no sign that it will appear in the Excel desktop app. There’s a useful feature in what Microsoft calls the backstage area that appears when you click the File tab on the Ribbon. If you click Open or a Copy from the menu on the left, you can see the cloud-based services you’ve connected to your Office account, such as SharePoint and OneDrive. Each location displays its associated email address underneath it. This is quite helpful if you use a cloud service with more than one account, such as if you have one OneDrive account for personal use and another one for business. You’ll be able to see at a glance which is which. Click the Add a service dropdown to add another cloud storage account. Preston Gralla / Foundry Search to get tasks done quickly Excel has never been the most user-friendly of applications, and it has so many powerful features it can be tough to keep track of them all. That’s where the handy Search feature comes in. To use it, click in the Search box — it’s above the Ribbon in the green title area.Then type in a task you want to do. If you want to summarize your spreadsheet data using a PivotTable, for example, type in something like summarize with pivot table. You’ll get a menu showing potential matches for the task. In this instance, the top result is a direct link to the form for summarizing with a PivotTable — select it and you’ll start your task right away, without having to go to the Ribbon’s Insert tab first. The search box makes it easy to perform just about any task in Excel. Preston Gralla / Foundry If you’d like more information about your task, the final items that appear in the menu let you select from related Help topics. Even if you consider yourself a spreadsheet jockey, it’s worth your while to try out the enhanced search function. It’s a big time-saver, and far more efficient than hunting through the Ribbon to find a command. Also useful is that it remembers the features you’ve previously clicked on in the box, so when you click in it, you first see a list of previous tasks you’ve searched for. That makes sure that tasks that you frequently perform are always within easy reach. And it puts tasks you rarely do within easy reach as well. Users of enterprise and education editions of Microsoft 365 can also use the Search box to find people in their organization, SharePoint resources, and other personalized results from within Excel.Explore Excel’s advanced chart types Charts are great for visualizing and presenting spreadsheet data, and for gaining insights from it. To that end, Microsoft has introduced a number of advanced chart types over the past several years, including most notably a histogram, a “waterfall” that’s effective at showing running financial totals, and a hierarchical treemap that helps you find patterns in data. Note that the new charts are available only if you’re working in an .xlsx document. If you use the older .xls format, you won’t find them. To see all the charts, put your cursor in a cell or group of cells that contains data, select Insert > Recommended Charts and click the All Charts tab. You’ll find the newer charts, mixed in with the older ones. Select any to create the chart.Excel includes several advanced chart types, including waterfall. Preston Gralla / Foundry These are the new chart types: Treemap. This chart type creates a hierarchical view of your data, with top-level categoriesshown as rectangles, and with subcategoriesshown as smaller rectangles grouped inside the larger ones. Thus, you can easily compare the sizes of top-level categories and subcategories in a single view. For instance, a bookstore can see at a glance that it brings in more revenue from 1st Readers, a subcategory of Children’s Books, than for the entire Non-fiction top-level category. srcset=" 830w, 300w, 768w, 264w, 132w, 753w, 565w, 392w" width="830" height="529" sizes="100vw, 830px">A treemap chart lets you easily compare top-level categories and subcategories in a single view. Preston Gralla / Foundry Sunburst. This chart type also displays hierarchical data, but in a multi-level pie chart. Each level of the hierarchy is represented by a circle. The innermost circle contains the top-level categories, the next circle out shows subcategories, the circle after that subsubcategories and so on. Sunbursts are best for showing the relationships among categories and subcategories, while treemaps are better at showing the relative sizes of categories and subcategories. A sunburst chart shows hierarchical data such as book categories and subcategories as a multi-level pie chart. Preston Gralla / Foundry Waterfall. This chart type is well-suited for visualizing financial statements. It displays a running total of the positive and negative contributions toward a final net value. A waterfall chart shows a running total of positive and negative contributions, such as revenue and expenses, toward a final net value. Preston Gralla / Foundry Histogram. This kind of chart shows frequencies within a data set. It could, for example, show the number of books sold in specific price ranges in a bookstore. Histograms are good for showing frequencies, such as number of books sold at various price points. Preston Gralla / Foundry Pareto. This chart, also known as a sorted histogram, contains bars as well as a line graph. Values are represented in descending order by bars. The cumulative total percentage of each bar is represented by a rising line. In the bookstore example, each bar could show a reason for a book being returned. The chart would show, at a glance, the primary reasons for returns, so a bookstore owner could focus on those issues. Note that the Pareto chart does not show up when you select Insert > Recommended Charts > All Charts. To use it, first select the data you want to chart, then select Insert > Insert Statistic Chart, and under Histogram, choose Pareto. In a Pareto chart, or sorted histogram, a rising line represents the cumulative total percentage of the items being measured. In this example, it’s easy to see that more than 80% of a bookstore’s returns are attributable to three problems. Preston Gralla / Foundry Box & Whisker. This chart, like a histogram, shows frequencies within a data set but provides for a deeper analysis than a histogram. For example, in a bookstore it could show the distribution of prices of different genres of books. In the example shown here, each “box” represents the first to third quartile of prices for books in that genre, while the “whiskers”show the upper and lower range of prices. Outliers that are priced outside the whiskers are shown as dots, the median price for each genre is shown with a horizontal line across the box, and the mean price is shown with an x. Box & Whisker charts can show details about data ranges such as the first to third quartile in the “boxes,” median and mean inside the boxes, upper and lower range with the “whiskers,” and outliers with dots.Preston Gralla / Foundry Funnel. This chart type is useful when you want to display values at multiple stages in a process. A funnel chart can show the number of sales prospects at every stage of a sales process, for example, with prospects at the top for the first stage, qualified prospects underneath it for the second stage, and so on, until you get to the final stage, closed sales. Generally, the values in funnel charts decrease with each stage, so the bars in the chart look like a funnel. Funnel charts let you display values at multiple stages in a process. Preston Gralla / Foundry When creating the data for a funnel chart, use one column for the stages in the process you’re charting, and a second column for the values for each stage. Once you’ve done that, to create the chart, select the data, then select Insert > Recommended Charts > All Charts > Funnel. Map. Map charts do exactly what you think they should: They let you compare data across different geographical regions, such as countries, regions, states, counties, or postal codes. Excel will automatically recognize the regions and create a map that visualizes the data. You can compare data across different locations with a map chart. Preston Gralla / Foundry To create a map chart, select the data you want to chart, then select Insert > Maps, then select the map chart. Note that in some instances, Excel might have a problem creating the map — for example, if there are multiple locations with the same name as one that you’re mapping. If that occurs, you’ll have to add one or more columns with details about the locations. If, say, you’re charting towns in the United Kingdom, you would have to include columns for the county and country each town is located in. Collaborate in real time For those who frequently collaborate with others, a welcome feature in Excel for Microsoft 365 is real-time collaboration that lets people work on spreadsheets together from anywhere in the world with an internet connection. Microsoft calls this “co-authoring.” Note that in order to use co-authoring, the spreadsheet must be stored in OneDrive, OneDrive for Business, or SharePoint Online, and you must be logged into your Microsoft 365 account. Also, co-authoring works in Excel only if you have AutoSave turned on. To do it, choose the On option on the AutoSave slider at the top left of the screen. To share a spreadsheet so you can collaborate on it with others: first open it, then click the Share button on the upper-right of the Excel screen. The “Send link” window pops up. Here you can send an email with a link where others can access the spreadsheet. Use the “Send link” pane to share a document and the “Link settings” pane to fine-tune its access permissions. Preston Gralla / Foundry Enter the email address of the person with whom you want to share in the text box. Enter multiple addresses, separated by commas, if you want to share the workbook with multiple people. One feature I found particularly useful when adding email addresses: As you type, Excel looks through your corporate or personal address book and lists the names and addresses of contacts who match the text you’ve input. Click the address you want to add. This not only saves you a bit of time but helps make sure you don’t incorrectly type in addresses. Next, decide whether anyone with the link can access the file, or only those whose email addresses you enter. If you see the text “Anyone with the link can edit” near the top of the pane, you can change that by clicking it, then choosing Specific people on the screen that appears. Similarly, if “Specific people” appears above the email addresses, you can change that by clicking it, then choosing Anyone with the link can edit from the screen that appears.On this second screen you can also set the document to read-only for everybody, or allow everybody to edit it. In the “Other settings” section, click the down arrow and choose either Can edit, which allows full editing, or Can view, which is read-only. If you want to give certain people editing privileges and others view-only privileges, you can send two separate invitations with different rights selected. On this screen you can also set an expiration date after which people won’t be able to access the file, and you can set a password so that only people who have the password can access it. When you’ve made your selections, click Apply. Back in the main “Send link” screen, you can send a message along with the link by typing it into the Message box. Then click Send. An email is sent to all the recipients with a link they can click to open the document. Your collaborators will get an email like this when you share a spreadsheet. Preston Gralla / FoundryThere’s another way to share a file stored in a personal OneDrive for collaboration: In the “Copy link” area at the bottom of the “Send link” pane, click Copy. When you do that, you can copy the link and send it to someone yourself via email. Note that you have the same options for setting access and editing permissions as you do if you have Excel send the link directly for you. Just click Anyone with the link can edit or Specific people below “Copy link,” and follow the instructions above. To begin collaborating: When your recipients receive the email and click to open the spreadsheet, they’ll open it in the web version of Excel in a browser, not in the desktop version of Excel. If you’ve granted them edit permissions, they can begin editing immediately in the browser or else click Editing > Open in Desktop App on the upper right of the screen to work in the Excel desktop client. Excel for the web is less powerful and polished than the desktop client, but it works well enough for real-time collaboration. As soon as any collaborators open the file, you’ll see a colored cursor that indicates their presence in the file. Each person collaborating gets a different color. Hover your cursor over a colored cell that indicates someone’s presence, and you’ll see their name. Once they begin editing the workbook, such as entering data or a formula into a cell, creating a chart, and so on, you see the changes they make in real time. Your cursor also shows up on their screen as a color, and they see the changes you make. You can easily see where collaborators are working in a shared worksheet. Preston Gralla / Foundry Collaboration includes the ability to make comments in a file, inside individual cells, without actually changing the contents of the cell. To do it, right-click a cell, select New Comment and type in your comment. Everyone collaborating can see that a cell has a comment in it — it’s indicated by a small colored notch appearing in the upper right of the cell. The color matches the person’s collaboration color. To see someone’s comment in a cell, hover your cursor over the cell or put your cursor in the cell and you’ll see the comment, the name of the person who made the comment, and a Reply box you can use to send a reply. You can also click the Comments button on the upper right of the screen to open the Comments pane, which lists every comment by every person. Click any comment to jump to the cell it’s in. You can also reply when you click a comment in the pane. You can make see comments that other people make, and make comments yourself. Preston Gralla / Foundry Take advantage of linked data Excel for Microsoft 365 has a feature that Microsoft calls “linked data types.” Essentially, they’re cells that are connected to an online sourcethat automatically updates their information — for example, a company’s current stock price. As I write this, there are nearly approximately 100 linked data types, including not just obvious data types such as stocks, geography, and currencies, but many others, including chemistry, cities, anatomy, food, yoga, and more. To use them, type the items you want to track into cells in a single column. For stocks, for example, you can type in a series of stock ticker symbols, company names, fund names, etc. After that, select the cells, then on the Ribbon’s Data tab, select Stocks in the Data Types section in the middle.Excel automatically converts the text in each cell into the matching data source — in our example, into the company name and stock ticker. Excel also adds a small icon to the left edge of each cell identifying it as a linked cell. Click any icon and a data card will pop up showing all sorts of information about the kind of information you’ve typed in.  For instance, a stock data card shows stock-related information such as current price, today’s high and low, and 52-week high and low, as well as general company information including industry and number of employees. A location card shows the location’s population, capital, GDP, and so on. You can build out a table using data from the data card. To do so, select the cells again, and an Insert Data button appears. Click the button, then select the information you want to appear, such as Price for the current stock price, or Population for the population of a geographic region. srcset=" 620w, 300w, 172w, 86w, 491w, 368w, 256w" width="620" height="606" sizes="100vw, 620px">Linked data types let you insert information, such as a company’s high and low stock prices, that is continually updated. Preston Gralla / Foundry Excel will automatically add a column to the right populated with the latest information for each item you’re tracking, and will keep it updated. You can click the Insert Data button multiple times to keep adding columns to the right for different types of data from the item’s data card.  It’s helpful to add column headers so you know what each column is showing. Make your own custom views of a worksheet Sheet Views let you make a copy of a sheet and then apply filtered or sorted views of the data to the new sheet. It’s useful when you’re working with other people on a spreadsheet, and someone wants to create a customized view without altering the original sheet. You can all create multiple custom-filtered/sorted views for a sheet. Once you’ve saved a sheet view, anyone with access to the spreadsheet can see it. Note: To use this feature, your spreadsheet must be stored in OneDrive. Sheet views work best when your data is in table format. Select the data, then go to the Ribbon toolbar and click the Insert tab. Near the left end of the Insert toolbar, click the Table button and then OK. To create a new sheet view, click the Ribbon’s View tab, then click the New button in the Sheet View area at the far left. The row numbers and column letters at the left and top of your spreadsheet turn black to let you know you’re in a new sheet view. In the Sheet View area of the Ribbon, it says Temporary View, the default name given to a new sheet view before you’ve saved it. Here’s a sheet view with data sorted from highest to lowest costs. Preston Gralla / Foundry Now apply whatever sorting and filtering you like to the data.To save this view, click the Keep button in the Sheet View area of the Ribbon. When you do that, it is saved as “View1” by default. You can click View1 and type in a more meaningful name for the view. When you click Exit on this toolbar, you return to your spreadsheet, and the row numbers and columns on the left and top of the spreadsheet are no longer black. To switch from one sheet view to another, click the View tab. At the left of the Ribbon toolbar, click the down arrow next to the name of the current viewto open a dropdown list of the sheet views created for the spreadsheet. Click the name of a sheet view to switch to it. Whenever you’re looking at a sheet view, the row numbers and column letters framing your spreadsheet remain black to indicate that you’re in a sheet view, not the original spreadsheet. Create dynamic arrays and charts Dynamic arrays let you write formulas that return multiple values based on your data. When data on the spreadsheet is updated, the dynamic arrays automatically update and resize themselves. To create a dynamic array, first create a table as outlined in the previous tip. Make sure to include a column that lists categories. Also put in at least one column to its right that lists corresponding values. Put a header at the top of each column. So, for example, if you’re creating a spreadsheet for a business trip budget, Column A might list expenses, such as plane tickets, meals, hotel, etc., and Column B could list each item’s cost on the same row. Once you’ve set up the table, use a dynamic array function on it, such as FILTER, SORT, or UNIQUE to create a dynamic array next to the table. Here’s an example of a formula for using the FILTER function: =FILTERThis tells Excel to show only the items that cost less than in the array. The FILTER function created a data array showing only the items with costs below Preston Gralla / Foundry Now, whenever the data in your source table changes, the dynamic array updates and resizes itself to accommodate the changes. That means the dynamic array is always up to date. So in our example, if you add new items with values under to the table, the dynamic array will enlarge itself and include those new items. In the same way, you can use the SORT function to sort data and the UNIQUE function to remove duplicate data.You create a dynamic chart from the dynamic array in the same way you do any other Excel chart. Select the cells from the dynamic array that you want to chart, then select the Insert tab and select the type of chart you want to add. When the source data changes in a way that affects the dynamic array that the chart is based on, both the dynamic array and the chart will be updated. Use AutoSave to provide a safety net as you work If you’re worried that you’ll lose your work on a worksheet because you don’t constantly save it, you’ll welcome the AutoSave feature. It automatically saves your files for you, so you won’t have to worry about system crashes, power outages, Excel crashes and similar problems. It only works only on documents stored in OneDrive, OneDrive for Business, or SharePoint Online. It won’t work with files saved in the older .xls format or files you save to your hard drive. AutoSave is a vast improvement over the previous AutoRecover feature built into Excel. AutoRecover doesn’t save your files in real time; instead, every several minutes it saves an AutoRecover file that you can try to recover after a crash. It doesn’t always work, though — for example, if you don’t properly open Excel after the crash, or if the crash doesn’t meet Microsoft’s definition of a crash. In addition, Microsoft notes, “AutoRecover is only effective for unplanned disruptions, such as a power outage or a crash. AutoRecover files are not designed to be saved when a logoff is scheduled or an orderly shutdown occurs.” And the files aren’t saved in real time, so you’ll likely lose several minutes of work even if all goes as planned. AutoSave is turned on by default in Excel for Microsoft 365 .xlsx workbooks stored in OneDrive, OneDrive for Business, or SharePoint Online. To turn it offfor a workbook, use the AutoSave slider on the top left of the screen. If you want AutoSave to be off for all files by default, select File > Options > and uncheck the box marked AutoSave files stored in the Cloud by default on Excel. Using AutoSave may require some rethinking of your workflow. Many people are used to creating new worksheets based on existing ones by opening the existing file, making changes to it, and then using As to save the new version under a different name, leaving the original file intact. Be warned that doing this with AutoSave enabled will save your changes in the original file. Instead, Microsoft suggests opening the original file and immediately selecting File > a Copyto create a new version. If AutoSave does save unwanted changes to a file, you can always use the Version History feature described below to roll back to an earlier version. Review or restore earlier versions of a spreadsheet There’s an extremely useful feature hiding in the title bar in Excel for Microsoft 365: You can use Version History to go back to previous versions of a file, review them, compare them side-by-side with your existing version, and copy and paste from an older file to your existing one. You can also restore an entire old version. To do it, click the file name at the top of the screen in an open file. A drop-down menu appears. Click Version History, and the Version History pane appears on the right side of the screen with a list of the previous versions of the file, including the time and date they were saved.Use Version History to see all previous versions of a spreadsheet, copy and paste from an older file to your existing one, or restore an entire old version. Preston Gralla / Foundry In the Version History pane, click Open version under any older version, and that version appears as a read-only version in a new window. Scroll through the version and copy any content you want, then paste it into the latest version of the file. To restore the old version, overwriting the current one, click the Restore button. Try out Microsoft 365 Copilot in Excel — but don’t expect too much For an additional subscription fee, business users of Excel can use Microsoft’s genAI add-in, Microsoft 365 Copilot. You can have Copilot suggest and create charts, create formulas, mine spreadsheets for data insights you might have missed, and more. If you have a Microsoft 365 Personal or Family subscription, many of those features are now bundled with your core subscription. To start using Copilot in Excel, open a spreadsheet and click the Copilot button at the right of the Ribbon’s Home tab. The Copilot panel will appear on the right, offering suggestions for actions it can perform, such as summarizing your data with a chart, adding formulas to the spreadsheet, or applying conditional formatting to the sheet. You can also chat with Copilot in the panel, asking questions about your data or how to perform an action yourself. Note that these suggestions are generic and won’t always make sense. For example, when you start with a blank worksheet and click the Copilot button, its suggestions include summarizing data using pivot tables or charts, even though there’s no data to chart or put into a table. Microsoft 365 Copilot can help you in multiple ways in Excel, including creating formulas and charts, mining spreadsheets for insights, and more. Preston Gralla / Foundry In my testing, I found that Copilot wasn’t particularly helpful. For example, when I asked it to summarize data using a PivotTable or chart, several times it responded, “Something went wrong. Please try again in a moment.” Then it said that I first needed to reformat parts of my spreadsheet by using the Transformfunction, and gave confusing advice on how I could do it — it wouldn’t do the task itself.When I asked it to suggest conditional formatting for my spreadsheet, which would highlight important data, it told me which data I should highlight but didn’t explain why the data was important. It also didn’t do the highlighting for me or tell me how to do it. I gave it one more try and asked it to perform an advanced analysis, which it would use Python to do. It certainly did something, although it was unclear what it was. It overwrote my original spreadsheet and added a section that claimed to show annual growth rates for revenue streams. But the data seemed to be incorrect. Perhaps advanced spreadsheet jockeys might be able to make sense of what Copilot is up to whenever they ask it for help. But mere mortal businesspeople may find it of no help at all. In my testing, I found Copilot not at all helpful, although spreadsheet jockeys may be able to make some sense of what it does. Preston Gralla / Foundry What’s more, Microsoft’s focus on Copilot in M365 has reduced the usefulness of Excel in some ways. For example, there used to be a handy feature called Smart Lookup that let you conduct targeted web searches from inside Excel. But at the beginning of 2025, Microsoft removed Smart Lookup from Excel, saying that the feature has been deprecated. Now the only way to search the web from inside Excel is via Copilot, which lacks some features of Smart Lookup — notably the ability to highlight words or phrases in a document and trigger an automatic web search. And M365 Copilot isn’t available to business customers unless they pay the additional subscription fee. Other features to check out Spreadsheet pros will be pleased with several other features and tools that have been added to Excel for Microsoft 365 over the past few years, from a quick data analysis tool to an advanced 3D mapping platform. Get an instant data analysis If you’re looking to analyze data in a spreadsheet, the Quick Analysis tool will help. Highlight the cells you want to analyze, then move your cursor to the lower right-hand corner of what you’ve highlighted. A small icon of a spreadsheet with a lightning bolt on it appears. Click it and you’ll get a variety of tools for performing instant analysis of your data. For example, you can use the tool to highlight the cells with a value greater than a specific number, get the numerical average for the selected cells, or create a chart on the fly. The Quick Analysis feature gives you a variety of tools for analyzing your data instantly. Preston Gralla / Foundry Translate text You can translate text from right within Excel. Highlight the cell whose text you want translated, then select Review > Translate. A Translator pane opens on the right. Excel will detect the words’ language at the top of the pane; you then select the language you want it translated to below. If Excel can’t detect the language of the text you chose or detects it incorrectly, you can override it. Easily find worksheets that have been shared with you It’s easy to forget which worksheets others have shared with you. In Excel for Microsoft 365 there’s an easy way to find them: Select File > Open > Shared with Me to see a list of them all. Note that this only works with OneDriveand SharePoint Online. You’ll also need to be signed into you Microsoft or work or school account. Predict the future with Forecast Sheet Using the Forecast Sheet function, you can generate forecasts built on historical data. If, for example, you have a worksheet showing past book sales by date, Forecast Sheet can predict future sales based on past ones. To use the feature, you must be working in a worksheet that has time-based historical data. Put your cursor in one of the data cells, go to the Data tab on the Ribbon and select Forecast Sheet from the Forecast group toward the right. On the screen that appears, you can select various options such as whether to create a line or bar chart and what date the forecast should end. Click the Create button, and a new worksheet will appear showing your historical and predicted data and the forecast chart.The Forecast Sheet feature can predict future results based on historical data. Preston Gralla / Foundry Manage data for analysis with Get & Transform This feature is not entirely new to Excel. Formerly known as Power Query, it was made available as a free add-in to Excel 2013 and worked only with the PowerPivot features in Excel Professional Plus. Microsoft’s Power BI business intelligence software offers similar functionality. Now called Get & Transform, it’s a business intelligence tool that lets you pull in, combine, and shape data from wide variety of local and cloud sources. These include Excel workbooks, CSV files, SQL Server and other databases, Azure, Active Directory, and many others. You can also use data from public sources including Wikipedia. Get & Transform helps you pull in and shape data from a wide variety of sources. Preston Gralla / Foundry You’ll find the Get & Transform tools together in a group on the Data tab in the Ribbon. For more about using these tools, see Microsoft’s “Getting Started with Get & Transform in Excel.” Make a 3D map Before Excel 2016, Power Map was a popular free 3D geospatial visualization add-in for Excel. Now it’s free, built into Excel for Microsoft 365, and has been renamed 3D Maps. With it, you can plot geographic and other information on a 3D globe or map. You’ll need to first have data suitable for mapping, and then prepare that data for 3D Maps. Those steps are beyond the scope of this article, but here’s advice from Microsoft about how to get and prepare data for 3D Maps. Once you have properly prepared data, open the spreadsheet and select Insert > 3D Map > Open 3D Maps. Then click Enable from the box that appears. That turns on the 3D Maps feature. For details on how to work with your data and customize your map, head to the Microsoft tutorial “Get started with 3D Maps.” If you don’t have data for mapping but just want to see firsthand what a 3D map is like, you can download sample data created by Microsoft. The screenshot shown here is from Microsoft’s Dallas Utilities Seasonal Electricity Consumption Simulation demo. When you’ve downloaded the workbook, open it up, select Insert > 3D Map > Open 3D Maps and click the map to launch it. With 3D Maps you can plot geospatial data in an interactive 3D map. Preston Gralla / Foundry Automate tasks If you have OneDrive for Business and use Excel with a commercial or educational Microsoft 365 license, you can automate tasks with the Automate tab. You’ll be able to create and edit scripts with the Code Editor, run automated tasks with a button click, and share the script with co-workers. See Microsoft’s “Office Scripts in Excel” documentation for details. Insert data from a picture into Excel There are times you may find data inside an image file that you’d like to get into Excel. Typically, you’ll have to input the data from it manually. There’s now a way to have Excel convert the information on the image into data for a worksheet. In the Get & Transform Data group on the Data tab, click the From Picture dropdown and select Picture From File to choose the image you want to grab data from, or Picture from Clipboard to take a screenshot of an image on your PC and then import the data. For more details, see Microsoft’s “Insert data from picture” support page.   Use keyboard shortcuts Here’s one last productivity tip: If you memorize a handful of keyboard shortcuts for common tasks in Excel, you can save a great deal of time over hunting for the right command to click on. See “Handy Excel keyboard shortcuts for Windows and Mac” for our favorites. This article was originally published in August 2019 and most recently updated in May 2025. More Excel tutorials: Excel basics: Get started with tables Excel basics: Get started with charts and sparklines How to use PivotTables and PivotCharts in Excel How to use slicers in Excel How to use Excel formulas and functions Howto use conditional formatting in Excel How to use Excel macros to save time and automate your work #excel #microsoft #cheat #sheet
    WWW.COMPUTERWORLD.COM
    Excel for Microsoft 365 cheat sheet
    Windows may get all the attention, but when you want to get real work done, you turn to the applications that run on it. And if you use spreadsheets, that generally means Excel. Excel is, of course, part of Microsoft’s Office suite of productivity tools. Microsoft sells Office under two models: Individuals and businesses can pay for the software license up front and own it forever (what the company calls the “perpetual” version of the suite), or they can purchase a Microsoft 365 subscription, which means they have access to the software for only as long as they keep paying the subscription fee. When you purchase a perpetual version of the suite — say, Office 2021 or Office 2024 — its applications will never get new features, whereas Microsoft 365 apps are continually updated with new features. For more details, see our in-depth comparison of the two Office models. This cheat sheet gets you up to speed on the features that have been introduced or changed in Microsoft 365’s Excel for Windows desktop client over the past few years. (If you’re looking for Excel tips for the perpetual-license Office suite, see our Office 2021 and 2024 cheat sheet.) We’ll periodically update this story as new features roll out. In this article Use the Ribbon Search to get tasks done quickly Explore Excel’s advanced chart types Collaborate in real time Take advantage of linked data Make your own custom views of a worksheet Create dynamic arrays and charts Use AutoSave to provide a safety net as you work Review or restore earlier versions of a spreadsheet Try out Microsoft 365 Copilot in Excel — but don’t expect too much Other new features to check out Use keyboard shortcuts Use the Ribbon The Ribbon interface, which puts commonly used commands in a tabbed toolbar running across the top of the application window, is alive and well in the current version of Excel. Microsoft has tweaked the Ribbon’s looks numerous times over the years, but it still works the same way it always has: just click one of the Ribbon’s tabs to see related commands on the toolbar. For example, click Insert to find buttons for inserting tables, PivotTables, charts, and more. Through the years, Excel’s Ribbon has gotten a variety of cosmetic changes, but it still works largely the way it always has. Preston Gralla / Foundry Just as in previous versions of Excel, if you want the Ribbon commands to go away, press Ctrl-F1 or click the name of the tab you’re currently on. (The tabs above the Ribbon — File, Home, Insert, and so on — stay visible.) To make the commands reappear, press Ctrl-F1 again or click any tab name. You’ve got other options for displaying the Ribbon as well. To get to them, click the Ribbon display options icon (a down arrow) on the bottom of the Ribbon at the far right, just below the Share button. A drop-down menu appears with these four options: Full-screen mode: This makes Excel take up your entire screen and hides the Ribbon. To get out of full-screen mode, click the three-dot icon at the upper right of the screen. Show tabs only: This shows the tabs but hides the commands underneath them. It’s the same as pressing Ctrl-F1. To display the commands underneath the tabs when they’re hidden, press Ctrl-F1, click a tab, or click the Ribbon display options down arrow and select Always show Ribbon. Always show Ribbon: This displays the entire Ribbon, both the tabs and commands underneath them. Show/Hide Quick Access toolbar: This displays or hides the Quick Access toolbar, which gives you fast access to Excel commands you want to have available no matter which tab you’re on. When you enable the toolbar, it starts off empty. To populate it, click a small down arrow that appears at the right of the toolbar and from the drop-down menu that appears, choose which features to put on it. If you don’t see a command you want, click More Commands. Find the command you want on the left and click Add. You can have the toolbar appear either at the top of the screen, just to the right of the AutoSave button, or just underneath the Ribbon. To move it from one place to another, click a small down arrow that appears at the right of the toolbar and from the drop-down menu that appears, select either Show below the Ribbon or Show above the Ribbon.  Microsoft has for many years teased a simplified version of the Ribbon that hides most of the commands to reduce clutter. That simplified Ribbon is available in the Excel web app, but there’s currently no sign that it will appear in the Excel desktop app. There’s a useful feature in what Microsoft calls the backstage area that appears when you click the File tab on the Ribbon. If you click Open or Save a Copy from the menu on the left, you can see the cloud-based services you’ve connected to your Office account, such as SharePoint and OneDrive. Each location displays its associated email address underneath it. This is quite helpful if you use a cloud service with more than one account, such as if you have one OneDrive account for personal use and another one for business. You’ll be able to see at a glance which is which. Click the Add a service dropdown to add another cloud storage account. Preston Gralla / Foundry Search to get tasks done quickly Excel has never been the most user-friendly of applications, and it has so many powerful features it can be tough to keep track of them all. That’s where the handy Search feature comes in. To use it, click in the Search box — it’s above the Ribbon in the green title area. (Keyboard fans can instead press Alt-Q.) Then type in a task you want to do. If you want to summarize your spreadsheet data using a PivotTable, for example, type in something like summarize with pivot table. You’ll get a menu showing potential matches for the task. In this instance, the top result is a direct link to the form for summarizing with a PivotTable — select it and you’ll start your task right away, without having to go to the Ribbon’s Insert tab first. The search box makes it easy to perform just about any task in Excel. Preston Gralla / Foundry If you’d like more information about your task, the final items that appear in the menu let you select from related Help topics. Even if you consider yourself a spreadsheet jockey, it’s worth your while to try out the enhanced search function. It’s a big time-saver, and far more efficient than hunting through the Ribbon to find a command. Also useful is that it remembers the features you’ve previously clicked on in the box, so when you click in it, you first see a list of previous tasks you’ve searched for. That makes sure that tasks that you frequently perform are always within easy reach. And it puts tasks you rarely do within easy reach as well. Users of enterprise and education editions of Microsoft 365 can also use the Search box to find people in their organization, SharePoint resources, and other personalized results from within Excel. (See the Microsoft Search support page for more details about all it can do.) Explore Excel’s advanced chart types Charts are great for visualizing and presenting spreadsheet data, and for gaining insights from it. To that end, Microsoft has introduced a number of advanced chart types over the past several years, including most notably a histogram (frequently used in statistics), a “waterfall” that’s effective at showing running financial totals, and a hierarchical treemap that helps you find patterns in data. Note that the new charts are available only if you’re working in an .xlsx document. If you use the older .xls format, you won’t find them. To see all the charts, put your cursor in a cell or group of cells that contains data, select Insert > Recommended Charts and click the All Charts tab. You’ll find the newer charts, mixed in with the older ones. Select any to create the chart. (For help using charts, see our guide to charts and sparklines in Excel.) Excel includes several advanced chart types, including waterfall. Preston Gralla / Foundry These are the new chart types: Treemap. This chart type creates a hierarchical view of your data, with top-level categories (or tree branches) shown as rectangles, and with subcategories (or sub-branches) shown as smaller rectangles grouped inside the larger ones. Thus, you can easily compare the sizes of top-level categories and subcategories in a single view. For instance, a bookstore can see at a glance that it brings in more revenue from 1st Readers, a subcategory of Children’s Books, than for the entire Non-fiction top-level category. srcset="https://b2b-contenthub.com/wp-content/uploads/2025/05/excel2016_chart_treemap.jpg?quality=50&strip=all 830w, https://b2b-contenthub.com/wp-content/uploads/2025/05/excel2016_chart_treemap.jpg?resize=300%2C191&quality=50&strip=all 300w, https://b2b-contenthub.com/wp-content/uploads/2025/05/excel2016_chart_treemap.jpg?resize=768%2C489&quality=50&strip=all 768w, https://b2b-contenthub.com/wp-content/uploads/2025/05/excel2016_chart_treemap.jpg?resize=264%2C168&quality=50&strip=all 264w, https://b2b-contenthub.com/wp-content/uploads/2025/05/excel2016_chart_treemap.jpg?resize=132%2C84&quality=50&strip=all 132w, https://b2b-contenthub.com/wp-content/uploads/2025/05/excel2016_chart_treemap.jpg?resize=753%2C480&quality=50&strip=all 753w, https://b2b-contenthub.com/wp-content/uploads/2025/05/excel2016_chart_treemap.jpg?resize=565%2C360&quality=50&strip=all 565w, https://b2b-contenthub.com/wp-content/uploads/2025/05/excel2016_chart_treemap.jpg?resize=392%2C250&quality=50&strip=all 392w" width="830" height="529" sizes="(max-width: 830px) 100vw, 830px">A treemap chart lets you easily compare top-level categories and subcategories in a single view. Preston Gralla / Foundry Sunburst. This chart type also displays hierarchical data, but in a multi-level pie chart. Each level of the hierarchy is represented by a circle. The innermost circle contains the top-level categories, the next circle out shows subcategories, the circle after that subsubcategories and so on. Sunbursts are best for showing the relationships among categories and subcategories, while treemaps are better at showing the relative sizes of categories and subcategories. A sunburst chart shows hierarchical data such as book categories and subcategories as a multi-level pie chart. Preston Gralla / Foundry Waterfall. This chart type is well-suited for visualizing financial statements. It displays a running total of the positive and negative contributions toward a final net value. A waterfall chart shows a running total of positive and negative contributions, such as revenue and expenses, toward a final net value. Preston Gralla / Foundry Histogram. This kind of chart shows frequencies within a data set. It could, for example, show the number of books sold in specific price ranges in a bookstore. Histograms are good for showing frequencies, such as number of books sold at various price points. Preston Gralla / Foundry Pareto. This chart, also known as a sorted histogram, contains bars as well as a line graph. Values are represented in descending order by bars. The cumulative total percentage of each bar is represented by a rising line. In the bookstore example, each bar could show a reason for a book being returned (defective, priced incorrectly, and so on). The chart would show, at a glance, the primary reasons for returns, so a bookstore owner could focus on those issues. Note that the Pareto chart does not show up when you select Insert > Recommended Charts > All Charts. To use it, first select the data you want to chart, then select Insert > Insert Statistic Chart, and under Histogram, choose Pareto. In a Pareto chart, or sorted histogram, a rising line represents the cumulative total percentage of the items being measured. In this example, it’s easy to see that more than 80% of a bookstore’s returns are attributable to three problems. Preston Gralla / Foundry Box & Whisker. This chart, like a histogram, shows frequencies within a data set but provides for a deeper analysis than a histogram. For example, in a bookstore it could show the distribution of prices of different genres of books. In the example shown here, each “box” represents the first to third quartile of prices for books in that genre, while the “whiskers” (the lines extending up and down from the box) show the upper and lower range of prices. Outliers that are priced outside the whiskers are shown as dots, the median price for each genre is shown with a horizontal line across the box, and the mean price is shown with an x. Box & Whisker charts can show details about data ranges such as the first to third quartile in the “boxes,” median and mean inside the boxes, upper and lower range with the “whiskers,” and outliers with dots.Preston Gralla / Foundry Funnel. This chart type is useful when you want to display values at multiple stages in a process. A funnel chart can show the number of sales prospects at every stage of a sales process, for example, with prospects at the top for the first stage, qualified prospects underneath it for the second stage, and so on, until you get to the final stage, closed sales. Generally, the values in funnel charts decrease with each stage, so the bars in the chart look like a funnel. Funnel charts let you display values at multiple stages in a process. Preston Gralla / Foundry When creating the data for a funnel chart, use one column for the stages in the process you’re charting, and a second column for the values for each stage. Once you’ve done that, to create the chart, select the data, then select Insert > Recommended Charts > All Charts > Funnel. Map. Map charts do exactly what you think they should: They let you compare data across different geographical regions, such as countries, regions, states, counties, or postal codes. Excel will automatically recognize the regions and create a map that visualizes the data. You can compare data across different locations with a map chart. Preston Gralla / Foundry To create a map chart, select the data you want to chart, then select Insert > Maps, then select the map chart. Note that in some instances, Excel might have a problem creating the map — for example, if there are multiple locations with the same name as one that you’re mapping. If that occurs, you’ll have to add one or more columns with details about the locations. If, say, you’re charting towns in the United Kingdom, you would have to include columns for the county and country each town is located in. Collaborate in real time For those who frequently collaborate with others, a welcome feature in Excel for Microsoft 365 is real-time collaboration that lets people work on spreadsheets together from anywhere in the world with an internet connection. Microsoft calls this “co-authoring.” Note that in order to use co-authoring, the spreadsheet must be stored in OneDrive, OneDrive for Business, or SharePoint Online, and you must be logged into your Microsoft 365 account. Also, co-authoring works in Excel only if you have AutoSave turned on. To do it, choose the On option on the AutoSave slider at the top left of the screen. To share a spreadsheet so you can collaborate on it with others: first open it, then click the Share button on the upper-right of the Excel screen. The “Send link” window pops up. Here you can send an email with a link where others can access the spreadsheet. Use the “Send link” pane to share a document and the “Link settings” pane to fine-tune its access permissions. Preston Gralla / Foundry Enter the email address of the person with whom you want to share in the text box. Enter multiple addresses, separated by commas, if you want to share the workbook with multiple people. One feature I found particularly useful when adding email addresses: As you type, Excel looks through your corporate or personal address book and lists the names and addresses of contacts who match the text you’ve input. Click the address you want to add. This not only saves you a bit of time but helps make sure you don’t incorrectly type in addresses. Next, decide whether anyone with the link can access the file, or only those whose email addresses you enter. If you see the text “Anyone with the link can edit” near the top of the pane, you can change that by clicking it, then choosing Specific people on the screen that appears. Similarly, if “Specific people” appears above the email addresses, you can change that by clicking it, then choosing Anyone with the link can edit from the screen that appears. (If you use a business, enterprise, or education edition of Office, your IT department may have set up different sharing permissions on these two screens, such as an option to allow anyone within your organization to edit the document. You may also need to click a Link settings button — a gear icon — to access the “Link settings” pane.) On this second screen you can also set the document to read-only for everybody, or allow everybody to edit it. In the “Other settings” section, click the down arrow and choose either Can edit, which allows full editing, or Can view, which is read-only. If you want to give certain people editing privileges and others view-only privileges, you can send two separate invitations with different rights selected. On this screen you can also set an expiration date after which people won’t be able to access the file, and you can set a password so that only people who have the password can access it. When you’ve made your selections, click Apply. Back in the main “Send link” screen, you can send a message along with the link by typing it into the Message box. Then click Send. An email is sent to all the recipients with a link they can click to open the document. Your collaborators will get an email like this when you share a spreadsheet. Preston Gralla / Foundry (If you’d rather send recipients a copy of the file as an Excel file instead of a link, and thus not allow real-time collaboration, click Send a copy at the bottom of the “Send link” screen.) There’s another way to share a file stored in a personal OneDrive for collaboration: In the “Copy link” area at the bottom of the “Send link” pane, click Copy. When you do that, you can copy the link and send it to someone yourself via email. Note that you have the same options for setting access and editing permissions as you do if you have Excel send the link directly for you. Just click Anyone with the link can edit or Specific people below “Copy link,” and follow the instructions above. To begin collaborating: When your recipients receive the email and click to open the spreadsheet, they’ll open it in the web version of Excel in a browser, not in the desktop version of Excel. If you’ve granted them edit permissions, they can begin editing immediately in the browser or else click Editing > Open in Desktop App on the upper right of the screen to work in the Excel desktop client. Excel for the web is less powerful and polished than the desktop client, but it works well enough for real-time collaboration. As soon as any collaborators open the file, you’ll see a colored cursor that indicates their presence in the file. Each person collaborating gets a different color. Hover your cursor over a colored cell that indicates someone’s presence, and you’ll see their name. Once they begin editing the workbook, such as entering data or a formula into a cell, creating a chart, and so on, you see the changes they make in real time. Your cursor also shows up on their screen as a color, and they see the changes you make. You can easily see where collaborators are working in a shared worksheet. Preston Gralla / Foundry Collaboration includes the ability to make comments in a file, inside individual cells, without actually changing the contents of the cell. To do it, right-click a cell, select New Comment and type in your comment. Everyone collaborating can see that a cell has a comment in it — it’s indicated by a small colored notch appearing in the upper right of the cell. The color matches the person’s collaboration color. To see someone’s comment in a cell, hover your cursor over the cell or put your cursor in the cell and you’ll see the comment, the name of the person who made the comment, and a Reply box you can use to send a reply. You can also click the Comments button on the upper right of the screen to open the Comments pane, which lists every comment by every person. Click any comment to jump to the cell it’s in. You can also reply when you click a comment in the pane. You can make see comments that other people make, and make comments yourself. Preston Gralla / Foundry Take advantage of linked data Excel for Microsoft 365 has a feature that Microsoft calls “linked data types.” Essentially, they’re cells that are connected to an online source (Bing) that automatically updates their information — for example, a company’s current stock price. As I write this, there are nearly approximately 100 linked data types, including not just obvious data types such as stocks, geography, and currencies, but many others, including chemistry, cities, anatomy, food, yoga, and more. To use them, type the items you want to track into cells in a single column. For stocks, for example, you can type in a series of stock ticker symbols, company names, fund names, etc. After that, select the cells, then on the Ribbon’s Data tab, select Stocks in the Data Types section in the middle. (If you had typed in geographic names such as countries, states, or cities, you would instead select Geography.) Excel automatically converts the text in each cell into the matching data source — in our example, into the company name and stock ticker. Excel also adds a small icon to the left edge of each cell identifying it as a linked cell. Click any icon and a data card will pop up showing all sorts of information about the kind of information you’ve typed in.  For instance, a stock data card shows stock-related information such as current price, today’s high and low, and 52-week high and low, as well as general company information including industry and number of employees. A location card shows the location’s population, capital, GDP, and so on. You can build out a table using data from the data card. To do so, select the cells again, and an Insert Data button appears. Click the button, then select the information you want to appear, such as Price for the current stock price, or Population for the population of a geographic region. srcset="https://b2b-contenthub.com/wp-content/uploads/2025/05/excel-microsoft365-15-linked-data-2023.jpg?quality=50&strip=all 620w, https://b2b-contenthub.com/wp-content/uploads/2025/05/excel-microsoft365-15-linked-data-2023.jpg?resize=300%2C293&quality=50&strip=all 300w, https://b2b-contenthub.com/wp-content/uploads/2025/05/excel-microsoft365-15-linked-data-2023.jpg?resize=172%2C168&quality=50&strip=all 172w, https://b2b-contenthub.com/wp-content/uploads/2025/05/excel-microsoft365-15-linked-data-2023.jpg?resize=86%2C84&quality=50&strip=all 86w, https://b2b-contenthub.com/wp-content/uploads/2025/05/excel-microsoft365-15-linked-data-2023.jpg?resize=491%2C480&quality=50&strip=all 491w, https://b2b-contenthub.com/wp-content/uploads/2025/05/excel-microsoft365-15-linked-data-2023.jpg?resize=368%2C360&quality=50&strip=all 368w, https://b2b-contenthub.com/wp-content/uploads/2025/05/excel-microsoft365-15-linked-data-2023.jpg?resize=256%2C250&quality=50&strip=all 256w" width="620" height="606" sizes="(max-width: 620px) 100vw, 620px">Linked data types let you insert information, such as a company’s high and low stock prices, that is continually updated. Preston Gralla / Foundry Excel will automatically add a column to the right populated with the latest information for each item you’re tracking, and will keep it updated. You can click the Insert Data button multiple times to keep adding columns to the right for different types of data from the item’s data card.  It’s helpful to add column headers so you know what each column is showing. Make your own custom views of a worksheet Sheet Views let you make a copy of a sheet and then apply filtered or sorted views of the data to the new sheet. It’s useful when you’re working with other people on a spreadsheet, and someone wants to create a customized view without altering the original sheet. You can all create multiple custom-filtered/sorted views for a sheet. Once you’ve saved a sheet view, anyone with access to the spreadsheet can see it. Note: To use this feature, your spreadsheet must be stored in OneDrive. Sheet views work best when your data is in table format. Select the data, then go to the Ribbon toolbar and click the Insert tab. Near the left end of the Insert toolbar, click the Table button and then OK. To create a new sheet view, click the Ribbon’s View tab, then click the New button in the Sheet View area at the far left. The row numbers and column letters at the left and top of your spreadsheet turn black to let you know you’re in a new sheet view. In the Sheet View area of the Ribbon, it says Temporary View, the default name given to a new sheet view before you’ve saved it. Here’s a sheet view with data sorted from highest to lowest costs. Preston Gralla / Foundry Now apply whatever sorting and filtering you like to the data. (If you need help, see the “How to sort and filter data” section of our Excel tables guide.) To save this view, click the Keep button in the Sheet View area of the Ribbon. When you do that, it is saved as “View1” by default. You can click View1 and type in a more meaningful name for the view. When you click Exit on this toolbar, you return to your spreadsheet, and the row numbers and columns on the left and top of the spreadsheet are no longer black. To switch from one sheet view to another, click the View tab. At the left of the Ribbon toolbar, click the down arrow next to the name of the current view (it will say Default if you’re viewing the spreadsheet without a sheet view applied) to open a dropdown list of the sheet views created for the spreadsheet. Click the name of a sheet view to switch to it. Whenever you’re looking at a sheet view, the row numbers and column letters framing your spreadsheet remain black to indicate that you’re in a sheet view, not the original spreadsheet. Create dynamic arrays and charts Dynamic arrays let you write formulas that return multiple values based on your data. When data on the spreadsheet is updated, the dynamic arrays automatically update and resize themselves. To create a dynamic array, first create a table as outlined in the previous tip. Make sure to include a column that lists categories. Also put in at least one column to its right that lists corresponding values. Put a header at the top of each column. So, for example, if you’re creating a spreadsheet for a business trip budget, Column A might list expenses, such as plane tickets, meals, hotel, etc., and Column B could list each item’s cost on the same row. Once you’ve set up the table, use a dynamic array function on it, such as FILTER, SORT, or UNIQUE to create a dynamic array next to the table. Here’s an example of a formula for using the FILTER function: =FILTER(A2:B9, B2:B9 < 2000) This tells Excel to show only the items that cost less than $2,000 in the array. The FILTER function created a data array showing only the items with costs below $2,000. Preston Gralla / Foundry Now, whenever the data in your source table changes, the dynamic array updates and resizes itself to accommodate the changes. That means the dynamic array is always up to date. So in our example, if you add new items with values under $2,000 to the table, the dynamic array will enlarge itself and include those new items. In the same way, you can use the SORT function to sort data and the UNIQUE function to remove duplicate data. (Read about more ways to use the FILTER, SORT, and UNIQUE functions from Microsoft support.) You create a dynamic chart from the dynamic array in the same way you do any other Excel chart. Select the cells from the dynamic array that you want to chart, then select the Insert tab and select the type of chart you want to add. When the source data changes in a way that affects the dynamic array that the chart is based on, both the dynamic array and the chart will be updated. Use AutoSave to provide a safety net as you work If you’re worried that you’ll lose your work on a worksheet because you don’t constantly save it, you’ll welcome the AutoSave feature. It automatically saves your files for you, so you won’t have to worry about system crashes, power outages, Excel crashes and similar problems. It only works only on documents stored in OneDrive, OneDrive for Business, or SharePoint Online. It won’t work with files saved in the older .xls format or files you save to your hard drive. AutoSave is a vast improvement over the previous AutoRecover feature built into Excel. AutoRecover doesn’t save your files in real time; instead, every several minutes it saves an AutoRecover file that you can try to recover after a crash. It doesn’t always work, though — for example, if you don’t properly open Excel after the crash, or if the crash doesn’t meet Microsoft’s definition of a crash. In addition, Microsoft notes, “AutoRecover is only effective for unplanned disruptions, such as a power outage or a crash. AutoRecover files are not designed to be saved when a logoff is scheduled or an orderly shutdown occurs.” And the files aren’t saved in real time, so you’ll likely lose several minutes of work even if all goes as planned. AutoSave is turned on by default in Excel for Microsoft 365 .xlsx workbooks stored in OneDrive, OneDrive for Business, or SharePoint Online. To turn it off (or back on again) for a workbook, use the AutoSave slider on the top left of the screen. If you want AutoSave to be off for all files by default, select File > Options > Save and uncheck the box marked AutoSave files stored in the Cloud by default on Excel. Using AutoSave may require some rethinking of your workflow. Many people are used to creating new worksheets based on existing ones by opening the existing file, making changes to it, and then using Save As to save the new version under a different name, leaving the original file intact. Be warned that doing this with AutoSave enabled will save your changes in the original file. Instead, Microsoft suggests opening the original file and immediately selecting File > Save a Copy (which replaces Save As when AutoSave is enabled) to create a new version. If AutoSave does save unwanted changes to a file, you can always use the Version History feature described below to roll back to an earlier version. Review or restore earlier versions of a spreadsheet There’s an extremely useful feature hiding in the title bar in Excel for Microsoft 365: You can use Version History to go back to previous versions of a file, review them, compare them side-by-side with your existing version, and copy and paste from an older file to your existing one. You can also restore an entire old version. To do it, click the file name at the top of the screen in an open file. A drop-down menu appears. Click Version History, and the Version History pane appears on the right side of the screen with a list of the previous versions of the file, including the time and date they were saved. (Alternatively, you can select the File tab on the Ribbon, click Info from the menu on the left, and then click the Version History button.) Use Version History to see all previous versions of a spreadsheet, copy and paste from an older file to your existing one, or restore an entire old version. Preston Gralla / Foundry In the Version History pane, click Open version under any older version, and that version appears as a read-only version in a new window. Scroll through the version and copy any content you want, then paste it into the latest version of the file. To restore the old version, overwriting the current one, click the Restore button. Try out Microsoft 365 Copilot in Excel — but don’t expect too much For an additional subscription fee, business users of Excel can use Microsoft’s genAI add-in, Microsoft 365 Copilot. You can have Copilot suggest and create charts, create formulas, mine spreadsheets for data insights you might have missed, and more. If you have a Microsoft 365 Personal or Family subscription, many of those features are now bundled with your core subscription. To start using Copilot in Excel, open a spreadsheet and click the Copilot button at the right of the Ribbon’s Home tab. The Copilot panel will appear on the right, offering suggestions for actions it can perform, such as summarizing your data with a chart, adding formulas to the spreadsheet, or applying conditional formatting to the sheet. You can also chat with Copilot in the panel, asking questions about your data or how to perform an action yourself. Note that these suggestions are generic and won’t always make sense. For example, when you start with a blank worksheet and click the Copilot button, its suggestions include summarizing data using pivot tables or charts, even though there’s no data to chart or put into a table. Microsoft 365 Copilot can help you in multiple ways in Excel, including creating formulas and charts, mining spreadsheets for insights, and more. Preston Gralla / Foundry In my testing, I found that Copilot wasn’t particularly helpful. For example, when I asked it to summarize data using a PivotTable or chart, several times it responded, “Something went wrong. Please try again in a moment.” Then it said that I first needed to reformat parts of my spreadsheet by using the Transform() function, and gave confusing advice on how I could do it — it wouldn’t do the task itself. (Eventually, I gave up.) When I asked it to suggest conditional formatting for my spreadsheet, which would highlight important data, it told me which data I should highlight but didn’t explain why the data was important. It also didn’t do the highlighting for me or tell me how to do it. I gave it one more try and asked it to perform an advanced analysis, which it would use Python to do. It certainly did something, although it was unclear what it was. It overwrote my original spreadsheet and added a section that claimed to show annual growth rates for revenue streams. But the data seemed to be incorrect. Perhaps advanced spreadsheet jockeys might be able to make sense of what Copilot is up to whenever they ask it for help. But mere mortal businesspeople may find it of no help at all. In my testing, I found Copilot not at all helpful, although spreadsheet jockeys may be able to make some sense of what it does. Preston Gralla / Foundry What’s more, Microsoft’s focus on Copilot in M365 has reduced the usefulness of Excel in some ways. For example, there used to be a handy feature called Smart Lookup that let you conduct targeted web searches from inside Excel. But at the beginning of 2025, Microsoft removed Smart Lookup from Excel, saying that the feature has been deprecated. Now the only way to search the web from inside Excel is via Copilot, which lacks some features of Smart Lookup — notably the ability to highlight words or phrases in a document and trigger an automatic web search. And M365 Copilot isn’t available to business customers unless they pay the additional subscription fee. Other features to check out Spreadsheet pros will be pleased with several other features and tools that have been added to Excel for Microsoft 365 over the past few years, from a quick data analysis tool to an advanced 3D mapping platform. Get an instant data analysis If you’re looking to analyze data in a spreadsheet, the Quick Analysis tool will help. Highlight the cells you want to analyze, then move your cursor to the lower right-hand corner of what you’ve highlighted. A small icon of a spreadsheet with a lightning bolt on it appears. Click it and you’ll get a variety of tools for performing instant analysis of your data. For example, you can use the tool to highlight the cells with a value greater than a specific number, get the numerical average for the selected cells, or create a chart on the fly. The Quick Analysis feature gives you a variety of tools for analyzing your data instantly. Preston Gralla / Foundry Translate text You can translate text from right within Excel. Highlight the cell whose text you want translated, then select Review > Translate. A Translator pane opens on the right. Excel will detect the words’ language at the top of the pane; you then select the language you want it translated to below. If Excel can’t detect the language of the text you chose or detects it incorrectly, you can override it. Easily find worksheets that have been shared with you It’s easy to forget which worksheets others have shared with you. In Excel for Microsoft 365 there’s an easy way to find them: Select File > Open > Shared with Me to see a list of them all. Note that this only works with OneDrive (both Personal and Business) and SharePoint Online. You’ll also need to be signed into you Microsoft or work or school account. Predict the future with Forecast Sheet Using the Forecast Sheet function, you can generate forecasts built on historical data. If, for example, you have a worksheet showing past book sales by date, Forecast Sheet can predict future sales based on past ones. To use the feature, you must be working in a worksheet that has time-based historical data. Put your cursor in one of the data cells, go to the Data tab on the Ribbon and select Forecast Sheet from the Forecast group toward the right. On the screen that appears, you can select various options such as whether to create a line or bar chart and what date the forecast should end. Click the Create button, and a new worksheet will appear showing your historical and predicted data and the forecast chart. (Your original worksheet will be unchanged.) The Forecast Sheet feature can predict future results based on historical data. Preston Gralla / Foundry Manage data for analysis with Get & Transform This feature is not entirely new to Excel. Formerly known as Power Query, it was made available as a free add-in to Excel 2013 and worked only with the PowerPivot features in Excel Professional Plus. Microsoft’s Power BI business intelligence software offers similar functionality. Now called Get & Transform, it’s a business intelligence tool that lets you pull in, combine, and shape data from wide variety of local and cloud sources. These include Excel workbooks, CSV files, SQL Server and other databases, Azure, Active Directory, and many others. You can also use data from public sources including Wikipedia. Get & Transform helps you pull in and shape data from a wide variety of sources. Preston Gralla / Foundry You’ll find the Get & Transform tools together in a group on the Data tab in the Ribbon. For more about using these tools, see Microsoft’s “Getting Started with Get & Transform in Excel.” Make a 3D map Before Excel 2016, Power Map was a popular free 3D geospatial visualization add-in for Excel. Now it’s free, built into Excel for Microsoft 365, and has been renamed 3D Maps. With it, you can plot geographic and other information on a 3D globe or map. You’ll need to first have data suitable for mapping, and then prepare that data for 3D Maps. Those steps are beyond the scope of this article, but here’s advice from Microsoft about how to get and prepare data for 3D Maps. Once you have properly prepared data, open the spreadsheet and select Insert > 3D Map > Open 3D Maps. Then click Enable from the box that appears. That turns on the 3D Maps feature. For details on how to work with your data and customize your map, head to the Microsoft tutorial “Get started with 3D Maps.” If you don’t have data for mapping but just want to see firsthand what a 3D map is like, you can download sample data created by Microsoft. The screenshot shown here is from Microsoft’s Dallas Utilities Seasonal Electricity Consumption Simulation demo. When you’ve downloaded the workbook, open it up, select Insert > 3D Map > Open 3D Maps and click the map to launch it. With 3D Maps you can plot geospatial data in an interactive 3D map. Preston Gralla / Foundry Automate tasks If you have OneDrive for Business and use Excel with a commercial or educational Microsoft 365 license, you can automate tasks with the Automate tab. You’ll be able to create and edit scripts with the Code Editor, run automated tasks with a button click, and share the script with co-workers. See Microsoft’s “Office Scripts in Excel” documentation for details. Insert data from a picture into Excel There are times you may find data inside an image file that you’d like to get into Excel. Typically, you’ll have to input the data from it manually. There’s now a way to have Excel convert the information on the image into data for a worksheet. In the Get & Transform Data group on the Data tab, click the From Picture dropdown and select Picture From File to choose the image you want to grab data from, or Picture from Clipboard to take a screenshot of an image on your PC and then import the data. For more details, see Microsoft’s “Insert data from picture” support page.   Use keyboard shortcuts Here’s one last productivity tip: If you memorize a handful of keyboard shortcuts for common tasks in Excel, you can save a great deal of time over hunting for the right command to click on. See “Handy Excel keyboard shortcuts for Windows and Mac” for our favorites. This article was originally published in August 2019 and most recently updated in May 2025. More Excel tutorials: Excel basics: Get started with tables Excel basics: Get started with charts and sparklines How to use PivotTables and PivotCharts in Excel How to use slicers in Excel How to use Excel formulas and functions How (and why) to use conditional formatting in Excel How to use Excel macros to save time and automate your work
    0 Σχόλια 0 Μοιράστηκε
  • ADEPT selected to transform former Karstadt warehouse into a cultural hub in Braunschweig

    Submitted by WA Contents
    ADEPT selected to transform former Karstadt warehouse into a cultural hub in Braunschweig

    Germany Architecture News - May 22, 2025 - 14:57  

    html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" ";
    Copenhagen and Hamburg-based architecture office ADEPT has won an international competition to transform a former Karstadt warehouse in a historic area of Braunschweiginto the "Haus der Musik". Called Haus der Musik, the 18,000-square-metre cultural hub will house a new concert hall, a public music school, and other community-oriented programs.The winning project is founded on adaptive reuse principles rather than demolishing the current structure. The old building's architectural rhythm and load-bearing structure are preserved and reactivated. On top of the existing volume is a brand-new, precisely calibrated performance hall, and street level provides direct access to music school activities. From a commercial hub to a cultural hub, the design embodies a daring urban metamorphosis grounded on continuity."The Haus der Musik is a dream project – not just because of its scale, but because it allows us to bring together everything we care about: transformation, sustainability, as well as social and urban social value," said Martin Krogh, founding partner at ADEPT. "This is the largest project in our studio’s history, and undoubtedly one of the most meaningful," Krogh added.The "Third Place"—that vague, largely unplanned area between activities that creates a vast possibility for a new identity emerging from the neighborhood—is the focal point of the transformation. Arrival, music school, and concert hall are all connected by this multi-layered social landscape of performance, instruction, and gathering. Because the music school is integrated into the existing framework, it fosters a vibrant, all-day rhythm of instruction, practice, and casual conversation. Below it, the Klangkeller provides an unpolished and adaptable platform for underground scenes and experimental music. With meticulous consideration for acoustic clarity and spatial intimacy, the new music hall is built as a traditional shoebox typology and is situated in the top levels of the building to retain as much of the old structure as possible. Adjustable ceiling components enable custom tuning based on the performance situation, including organ music and amplified events, while sound-reflective wall and ceiling panels distribute sound uniformly around the room. Both the main floor and the upper balconies provide direct sightlines and engulfing sound to the audience. Rehearsal rooms and backstage areas flank the hall, facilitating a smooth transition between rehearsal and performance. The music hall is further reinforced as a municipal venue by the 270-degree panoramic terrace that encircles the foyer and provides public views of the city."With equal measures of caution and courage, the winning proposal transforms the existing building through adaptive reuse into an important component for Braunschweig's city centre, as well as for the city’s musical landscape," the jury stated."The difficult balancing act between preservation, transformation and innovation has been convincingly achieved." "Even if the interpretation and conceptual reuse may seem surprising at first glance, the contextual integration is comprehensible, sensitive and convincing," the jury added.Site planUrban Presence: A Cultural Link Within the Historic CityThe Haus der Musik acts as an essential urban connection between Altstadtmarkt and Kohlmarkt, two important public squares in Braunschweig, and is located along one of the city's main pedestrian thoroughfares. The project creates a new cultural hub in the urban fabric by reactivating the ground floor with a completely transparent façade and opening up to the city through spacious patios and foyers. It extends beyond its plot to create sightlines, pathways, and gathering spots across the city. The design adds a new public vitality that enhances the old town's civic life while honoring the scale and rhythm of its historic surroundings. Ground floor planThe building's articulated façade and stepped form blend in with Braunschweig's urban profile while quietly indicating its new function as a gathering place for people to enjoy music and social interaction.First floor planUsing a Modern Language to Interpret the PastRedesigning the facade as a reinterpretation of the current building while honoring the historic setting and its distinctive buildings to create a new identity is a crucial architectural gesture. The new facade reworks the original's modular rhythm to create a tactile, sculptured enclosure. Views into the activity within the building are made possible by the dynamic interplay of light and shadow created by the cascading pieces. The ground floor's transparency invites the public in by blurring the lines between the interior and the city.Second floor planWarm timber interiors frame the building's social center, while the structured facade echoes Braunschweig's medieval roofscapes. Materiality is crucial in defining atmosphere and character. In order to preserve important sightlines and blend in with the surrounding urban fabric, the new volume gently recedes from the original cornice lines.Third floor planBuilding on What Already ExistsIn this initiative, sustainability starts with what currently exists. By preserving and reusing the Karstadt building's structural grid and core, demolition and the resulting carbon effect are avoided. With little alteration to the existing foundations, a lightweight music hall made of steel and wood is constructed above. Cross-laminated woodcomponents that are prefabricated enable low-emission and rapid installation.Fifth floor planBy incorporating rooftop photovoltaics and utilizing Braunschweig's low-emission district heating network, the building runs with exceptional energy efficiency. Comfort is maintained while energy consumption is reduced through the use of passive cooling techniques and intelligent ventilation. Demand is further decreased by localized heating systems and water-saving devices.Basement floor planThe result is not merely a monument for music and culture – but a showcase of how architecture can be both ambitious and responsible, rooted in the past and ready for the future.Elevation BrabandstraßeElevation JakobstraßeElevation PoststraßeFacade section existingFacade section ADEPTSection AASection BBSection CCAxonometric drawingConcept, existing as starting pointConcept, community functions as connectorsConcept, concert hall in new constructionConcept, concert hall constructionADEPT and LYTT Architecture completed visitor points reframing largest landscape park in Copenhagen, Denmark. In  addition, ADEPT and Karres en Brands revealed plans for a new masterplan, called WoodHood – Garden City 2.0, in Köln, Germany. Project factsProject name: Haus der MusikArchitect: ADEPTClient: Friedrich Georg Knapp w. Stadt BraunschweigEngineers: Assmann Beraten und Planen, Corall Ingenieure, AvissplanAddress: Poststraße Braunschweig, DESize: 15,000m2 + 3,000 under groundAll images © Aesthetica Studio.All drawings © ADEPT. > via ADEPT
    #adept #selected #transform #former #karstadt
    ADEPT selected to transform former Karstadt warehouse into a cultural hub in Braunschweig
    Submitted by WA Contents ADEPT selected to transform former Karstadt warehouse into a cultural hub in Braunschweig Germany Architecture News - May 22, 2025 - 14:57   html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "; Copenhagen and Hamburg-based architecture office ADEPT has won an international competition to transform a former Karstadt warehouse in a historic area of Braunschweiginto the "Haus der Musik". Called Haus der Musik, the 18,000-square-metre cultural hub will house a new concert hall, a public music school, and other community-oriented programs.The winning project is founded on adaptive reuse principles rather than demolishing the current structure. The old building's architectural rhythm and load-bearing structure are preserved and reactivated. On top of the existing volume is a brand-new, precisely calibrated performance hall, and street level provides direct access to music school activities. From a commercial hub to a cultural hub, the design embodies a daring urban metamorphosis grounded on continuity."The Haus der Musik is a dream project – not just because of its scale, but because it allows us to bring together everything we care about: transformation, sustainability, as well as social and urban social value," said Martin Krogh, founding partner at ADEPT. "This is the largest project in our studio’s history, and undoubtedly one of the most meaningful," Krogh added.The "Third Place"—that vague, largely unplanned area between activities that creates a vast possibility for a new identity emerging from the neighborhood—is the focal point of the transformation. Arrival, music school, and concert hall are all connected by this multi-layered social landscape of performance, instruction, and gathering. Because the music school is integrated into the existing framework, it fosters a vibrant, all-day rhythm of instruction, practice, and casual conversation. Below it, the Klangkeller provides an unpolished and adaptable platform for underground scenes and experimental music. With meticulous consideration for acoustic clarity and spatial intimacy, the new music hall is built as a traditional shoebox typology and is situated in the top levels of the building to retain as much of the old structure as possible. Adjustable ceiling components enable custom tuning based on the performance situation, including organ music and amplified events, while sound-reflective wall and ceiling panels distribute sound uniformly around the room. Both the main floor and the upper balconies provide direct sightlines and engulfing sound to the audience. Rehearsal rooms and backstage areas flank the hall, facilitating a smooth transition between rehearsal and performance. The music hall is further reinforced as a municipal venue by the 270-degree panoramic terrace that encircles the foyer and provides public views of the city."With equal measures of caution and courage, the winning proposal transforms the existing building through adaptive reuse into an important component for Braunschweig's city centre, as well as for the city’s musical landscape," the jury stated."The difficult balancing act between preservation, transformation and innovation has been convincingly achieved." "Even if the interpretation and conceptual reuse may seem surprising at first glance, the contextual integration is comprehensible, sensitive and convincing," the jury added.Site planUrban Presence: A Cultural Link Within the Historic CityThe Haus der Musik acts as an essential urban connection between Altstadtmarkt and Kohlmarkt, two important public squares in Braunschweig, and is located along one of the city's main pedestrian thoroughfares. The project creates a new cultural hub in the urban fabric by reactivating the ground floor with a completely transparent façade and opening up to the city through spacious patios and foyers. It extends beyond its plot to create sightlines, pathways, and gathering spots across the city. The design adds a new public vitality that enhances the old town's civic life while honoring the scale and rhythm of its historic surroundings. Ground floor planThe building's articulated façade and stepped form blend in with Braunschweig's urban profile while quietly indicating its new function as a gathering place for people to enjoy music and social interaction.First floor planUsing a Modern Language to Interpret the PastRedesigning the facade as a reinterpretation of the current building while honoring the historic setting and its distinctive buildings to create a new identity is a crucial architectural gesture. The new facade reworks the original's modular rhythm to create a tactile, sculptured enclosure. Views into the activity within the building are made possible by the dynamic interplay of light and shadow created by the cascading pieces. The ground floor's transparency invites the public in by blurring the lines between the interior and the city.Second floor planWarm timber interiors frame the building's social center, while the structured facade echoes Braunschweig's medieval roofscapes. Materiality is crucial in defining atmosphere and character. In order to preserve important sightlines and blend in with the surrounding urban fabric, the new volume gently recedes from the original cornice lines.Third floor planBuilding on What Already ExistsIn this initiative, sustainability starts with what currently exists. By preserving and reusing the Karstadt building's structural grid and core, demolition and the resulting carbon effect are avoided. With little alteration to the existing foundations, a lightweight music hall made of steel and wood is constructed above. Cross-laminated woodcomponents that are prefabricated enable low-emission and rapid installation.Fifth floor planBy incorporating rooftop photovoltaics and utilizing Braunschweig's low-emission district heating network, the building runs with exceptional energy efficiency. Comfort is maintained while energy consumption is reduced through the use of passive cooling techniques and intelligent ventilation. Demand is further decreased by localized heating systems and water-saving devices.Basement floor planThe result is not merely a monument for music and culture – but a showcase of how architecture can be both ambitious and responsible, rooted in the past and ready for the future.Elevation BrabandstraßeElevation JakobstraßeElevation PoststraßeFacade section existingFacade section ADEPTSection AASection BBSection CCAxonometric drawingConcept, existing as starting pointConcept, community functions as connectorsConcept, concert hall in new constructionConcept, concert hall constructionADEPT and LYTT Architecture completed visitor points reframing largest landscape park in Copenhagen, Denmark. In  addition, ADEPT and Karres en Brands revealed plans for a new masterplan, called WoodHood – Garden City 2.0, in Köln, Germany. Project factsProject name: Haus der MusikArchitect: ADEPTClient: Friedrich Georg Knapp w. Stadt BraunschweigEngineers: Assmann Beraten und Planen, Corall Ingenieure, AvissplanAddress: Poststraße Braunschweig, DESize: 15,000m2 + 3,000 under groundAll images © Aesthetica Studio.All drawings © ADEPT. > via ADEPT #adept #selected #transform #former #karstadt
    WORLDARCHITECTURE.ORG
    ADEPT selected to transform former Karstadt warehouse into a cultural hub in Braunschweig
    Submitted by WA Contents ADEPT selected to transform former Karstadt warehouse into a cultural hub in Braunschweig Germany Architecture News - May 22, 2025 - 14:57   html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd" Copenhagen and Hamburg-based architecture office ADEPT has won an international competition to transform a former Karstadt warehouse in a historic area of Braunschweig (DE) into the "Haus der Musik". Called Haus der Musik, the 18,000-square-metre cultural hub will house a new concert hall, a public music school, and other community-oriented programs.The winning project is founded on adaptive reuse principles rather than demolishing the current structure. The old building's architectural rhythm and load-bearing structure are preserved and reactivated. On top of the existing volume is a brand-new, precisely calibrated performance hall, and street level provides direct access to music school activities. From a commercial hub to a cultural hub, the design embodies a daring urban metamorphosis grounded on continuity."The Haus der Musik is a dream project – not just because of its scale, but because it allows us to bring together everything we care about: transformation, sustainability, as well as social and urban social value," said Martin Krogh, founding partner at ADEPT. "This is the largest project in our studio’s history, and undoubtedly one of the most meaningful," Krogh added.The "Third Place"—that vague, largely unplanned area between activities that creates a vast possibility for a new identity emerging from the neighborhood—is the focal point of the transformation. Arrival, music school, and concert hall are all connected by this multi-layered social landscape of performance, instruction, and gathering. Because the music school is integrated into the existing framework, it fosters a vibrant, all-day rhythm of instruction, practice, and casual conversation. Below it, the Klangkeller provides an unpolished and adaptable platform for underground scenes and experimental music. With meticulous consideration for acoustic clarity and spatial intimacy, the new music hall is built as a traditional shoebox typology and is situated in the top levels of the building to retain as much of the old structure as possible. Adjustable ceiling components enable custom tuning based on the performance situation, including organ music and amplified events, while sound-reflective wall and ceiling panels distribute sound uniformly around the room. Both the main floor and the upper balconies provide direct sightlines and engulfing sound to the audience. Rehearsal rooms and backstage areas flank the hall, facilitating a smooth transition between rehearsal and performance. The music hall is further reinforced as a municipal venue by the 270-degree panoramic terrace that encircles the foyer and provides public views of the city."With equal measures of caution and courage, the winning proposal transforms the existing building through adaptive reuse into an important component for Braunschweig's city centre, as well as for the city’s musical landscape," the jury stated."The difficult balancing act between preservation, transformation and innovation has been convincingly achieved." "Even if the interpretation and conceptual reuse may seem surprising at first glance, the contextual integration is comprehensible, sensitive and convincing," the jury added.Site planUrban Presence: A Cultural Link Within the Historic CityThe Haus der Musik acts as an essential urban connection between Altstadtmarkt and Kohlmarkt, two important public squares in Braunschweig, and is located along one of the city's main pedestrian thoroughfares. The project creates a new cultural hub in the urban fabric by reactivating the ground floor with a completely transparent façade and opening up to the city through spacious patios and foyers. It extends beyond its plot to create sightlines, pathways, and gathering spots across the city. The design adds a new public vitality that enhances the old town's civic life while honoring the scale and rhythm of its historic surroundings. Ground floor planThe building's articulated façade and stepped form blend in with Braunschweig's urban profile while quietly indicating its new function as a gathering place for people to enjoy music and social interaction.First floor planUsing a Modern Language to Interpret the PastRedesigning the facade as a reinterpretation of the current building while honoring the historic setting and its distinctive buildings to create a new identity is a crucial architectural gesture. The new facade reworks the original's modular rhythm to create a tactile, sculptured enclosure. Views into the activity within the building are made possible by the dynamic interplay of light and shadow created by the cascading pieces. The ground floor's transparency invites the public in by blurring the lines between the interior and the city.Second floor planWarm timber interiors frame the building's social center, while the structured facade echoes Braunschweig's medieval roofscapes. Materiality is crucial in defining atmosphere and character. In order to preserve important sightlines and blend in with the surrounding urban fabric, the new volume gently recedes from the original cornice lines.Third floor planBuilding on What Already ExistsIn this initiative, sustainability starts with what currently exists. By preserving and reusing the Karstadt building's structural grid and core, demolition and the resulting carbon effect are avoided. With little alteration to the existing foundations, a lightweight music hall made of steel and wood is constructed above. Cross-laminated wood (CLT) components that are prefabricated enable low-emission and rapid installation.Fifth floor planBy incorporating rooftop photovoltaics and utilizing Braunschweig's low-emission district heating network, the building runs with exceptional energy efficiency. Comfort is maintained while energy consumption is reduced through the use of passive cooling techniques and intelligent ventilation. Demand is further decreased by localized heating systems and water-saving devices.Basement floor planThe result is not merely a monument for music and culture – but a showcase of how architecture can be both ambitious and responsible, rooted in the past and ready for the future.Elevation BrabandstraßeElevation JakobstraßeElevation PoststraßeFacade section existingFacade section ADEPTSection AASection BBSection CCAxonometric drawingConcept, existing as starting pointConcept, community functions as connectorsConcept, concert hall in new constructionConcept, concert hall constructionADEPT and LYTT Architecture completed visitor points reframing largest landscape park in Copenhagen, Denmark. In  addition, ADEPT and Karres en Brands revealed plans for a new masterplan, called WoodHood – Garden City 2.0, in Köln, Germany. Project factsProject name: Haus der MusikArchitect: ADEPTClient: Friedrich Georg Knapp w. Stadt BraunschweigEngineers: Assmann Beraten und Planen, Corall Ingenieure, AvissplanAddress: Poststraße Braunschweig, DESize: 15,000m2 + 3,000 under groundAll images © Aesthetica Studio.All drawings © ADEPT. > via ADEPT
    0 Σχόλια 0 Μοιράστηκε
  • Why we should reconsider the meaning of open spaces 

    Most people think of urban open spaces in terms of grand parks—Chicago’s Millennium Park or New York’s Central Park or San Francisco’s Golden Gate Park. These are our iconic parks—our sublime spaces. They serve as the “lungs” of our cities, and they certainly steal our hearts. These spaces are not locked behind gates but are stages where our own lives play out and memories are created, full of movement and reflection and joy. 

    There are more modest spaces in our cities, though, that are just as important to our lives—the thresholds and courtyards and pocket parks. They’re the places where we bump into our neighbors to walk our dogs or read on a bench in an environment where nature takes over. They are often unheralded like a great Olmsted Park, but always full of potential for true placemaking to begin.  

    My father, Edwin Smith was director of parks and recreation for the City of Eugene, Oregon and he knew this. He served for more than 30 years and was responsible for the design and development of 41 parks and greenways in and around the city. His work had a profound impact on me as a future architect. More to the point, his work and vision quietly enhanced the lives of so many people in the community as their access to parks was interwoven into their lives.  

    Westmoreland Park is one of Eugene’s centerpiece parks and is a great example. Its gentle slopes and lush lawns support stands of mature cedars and redwoods, not to mention Douglas firs, hemlocks, spruces, and the Oregon white oak. Even if you don’t know all those trees by sight, you know Westmoreland Park if you live in Eugene, and you know that it offers something for almost every active resident. I think that’s the importance of a well-designed space—it invites and it responds.  

    Living ribbon of connection 

    Responsiveness is a word worth pausing on for a moment. It’s the entire reason for design—architectural, urban, or otherwise—and it’s one of the hallmarks of placemaking. 

    My firm, MG2, recently envisioned design for an attainable housing project in Irvine, California, that was meant to respond to a specific housing challenge in a rapidly changing part of the state. It isn’t a monolith. It is, instead, what we think of as a living ribbon of connection—a continuous path that links breezeways, community gardens, play areas, and shared courtyards woven throughout the residential units. It is not simply a circulation route. It is a spine, and just like our spines, everything it touches depends upon it for structure. But more importantly, this isn’t just a collection of amenities. It is a social ecosystem. The layout fosters degrees of interaction—private balconies that open into semi-private courtyards, which in turn flow into cooperative gardens and fully public gathering spaces. Residents can choose solitude, casual interaction, or spirited communal activity—each space encouraging a different rhythm of human engagement. Children play while parents share meals. Strangers become neighbors over garden beds. This is architecture as social infrastructure. 

    To reimagine open space is not to think bigger—it is to think deeper. To look between, beneath, beyond. It is to ask: How do we shape space to be responsive? How do we design for encounter, for joy, for the unplanned but meaningful moments of connection? 

    Let us not treat the spaces between buildings as voids. Let us see them as vessels—of life, of community, of possibility. Let us design not just for shelter, but for spirit. Let us reimagine open spaces. 

    Mitch Smith AIA, LEED AP is the CEO and chairman of MG2, an affiliate of Colliers Engineering & Design. 
    #why #should #reconsider #meaning #open
    Why we should reconsider the meaning of open spaces 
    Most people think of urban open spaces in terms of grand parks—Chicago’s Millennium Park or New York’s Central Park or San Francisco’s Golden Gate Park. These are our iconic parks—our sublime spaces. They serve as the “lungs” of our cities, and they certainly steal our hearts. These spaces are not locked behind gates but are stages where our own lives play out and memories are created, full of movement and reflection and joy.  There are more modest spaces in our cities, though, that are just as important to our lives—the thresholds and courtyards and pocket parks. They’re the places where we bump into our neighbors to walk our dogs or read on a bench in an environment where nature takes over. They are often unheralded like a great Olmsted Park, but always full of potential for true placemaking to begin.   My father, Edwin Smith was director of parks and recreation for the City of Eugene, Oregon and he knew this. He served for more than 30 years and was responsible for the design and development of 41 parks and greenways in and around the city. His work had a profound impact on me as a future architect. More to the point, his work and vision quietly enhanced the lives of so many people in the community as their access to parks was interwoven into their lives.   Westmoreland Park is one of Eugene’s centerpiece parks and is a great example. Its gentle slopes and lush lawns support stands of mature cedars and redwoods, not to mention Douglas firs, hemlocks, spruces, and the Oregon white oak. Even if you don’t know all those trees by sight, you know Westmoreland Park if you live in Eugene, and you know that it offers something for almost every active resident. I think that’s the importance of a well-designed space—it invites and it responds.   Living ribbon of connection  Responsiveness is a word worth pausing on for a moment. It’s the entire reason for design—architectural, urban, or otherwise—and it’s one of the hallmarks of placemaking.  My firm, MG2, recently envisioned design for an attainable housing project in Irvine, California, that was meant to respond to a specific housing challenge in a rapidly changing part of the state. It isn’t a monolith. It is, instead, what we think of as a living ribbon of connection—a continuous path that links breezeways, community gardens, play areas, and shared courtyards woven throughout the residential units. It is not simply a circulation route. It is a spine, and just like our spines, everything it touches depends upon it for structure. But more importantly, this isn’t just a collection of amenities. It is a social ecosystem. The layout fosters degrees of interaction—private balconies that open into semi-private courtyards, which in turn flow into cooperative gardens and fully public gathering spaces. Residents can choose solitude, casual interaction, or spirited communal activity—each space encouraging a different rhythm of human engagement. Children play while parents share meals. Strangers become neighbors over garden beds. This is architecture as social infrastructure.  To reimagine open space is not to think bigger—it is to think deeper. To look between, beneath, beyond. It is to ask: How do we shape space to be responsive? How do we design for encounter, for joy, for the unplanned but meaningful moments of connection?  Let us not treat the spaces between buildings as voids. Let us see them as vessels—of life, of community, of possibility. Let us design not just for shelter, but for spirit. Let us reimagine open spaces.  Mitch Smith AIA, LEED AP is the CEO and chairman of MG2, an affiliate of Colliers Engineering & Design.  #why #should #reconsider #meaning #open
    WWW.FASTCOMPANY.COM
    Why we should reconsider the meaning of open spaces 
    Most people think of urban open spaces in terms of grand parks—Chicago’s Millennium Park or New York’s Central Park or San Francisco’s Golden Gate Park. These are our iconic parks—our sublime spaces. They serve as the “lungs” of our cities, and they certainly steal our hearts. These spaces are not locked behind gates but are stages where our own lives play out and memories are created, full of movement and reflection and joy.  There are more modest spaces in our cities, though, that are just as important to our lives—the thresholds and courtyards and pocket parks. They’re the places where we bump into our neighbors to walk our dogs or read on a bench in an environment where nature takes over. They are often unheralded like a great Olmsted Park, but always full of potential for true placemaking to begin.   My father, Edwin Smith was director of parks and recreation for the City of Eugene, Oregon and he knew this. He served for more than 30 years and was responsible for the design and development of 41 parks and greenways in and around the city. His work had a profound impact on me as a future architect. More to the point, his work and vision quietly enhanced the lives of so many people in the community as their access to parks was interwoven into their lives.   Westmoreland Park is one of Eugene’s centerpiece parks and is a great example. Its gentle slopes and lush lawns support stands of mature cedars and redwoods, not to mention Douglas firs, hemlocks, spruces, and the Oregon white oak. Even if you don’t know all those trees by sight, you know Westmoreland Park if you live in Eugene, and you know that it offers something for almost every active resident. I think that’s the importance of a well-designed space—it invites and it responds.   Living ribbon of connection  Responsiveness is a word worth pausing on for a moment. It’s the entire reason for design—architectural, urban, or otherwise—and it’s one of the hallmarks of placemaking.  My firm, MG2, recently envisioned design for an attainable housing project in Irvine, California, that was meant to respond to a specific housing challenge in a rapidly changing part of the state. It isn’t a monolith. It is, instead, what we think of as a living ribbon of connection—a continuous path that links breezeways, community gardens, play areas, and shared courtyards woven throughout the residential units. It is not simply a circulation route. It is a spine, and just like our spines, everything it touches depends upon it for structure. But more importantly, this isn’t just a collection of amenities. It is a social ecosystem. The layout fosters degrees of interaction—private balconies that open into semi-private courtyards, which in turn flow into cooperative gardens and fully public gathering spaces. Residents can choose solitude, casual interaction, or spirited communal activity—each space encouraging a different rhythm of human engagement. Children play while parents share meals. Strangers become neighbors over garden beds. This is architecture as social infrastructure.  To reimagine open space is not to think bigger—it is to think deeper. To look between, beneath, beyond. It is to ask: How do we shape space to be responsive? How do we design for encounter, for joy, for the unplanned but meaningful moments of connection?  Let us not treat the spaces between buildings as voids. Let us see them as vessels—of life, of community, of possibility. Let us design not just for shelter, but for spirit. Let us reimagine open spaces.  Mitch Smith AIA, LEED AP is the CEO and chairman of MG2, an affiliate of Colliers Engineering & Design. 
    0 Σχόλια 0 Μοιράστηκε
  • Projects Update – Q2/2025

    Projects Update – Q2/2025

    html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" ";
    At the beginning of 2025, several projects were announced as the initial targets for the year. Now that we’re in the middle of the second quarter, let’s take a look at where each project stands.

    Complete
    Vulkan
    Vulkan is now officially supported in the upcoming 4.5 LTS release, offering feature parity and comparable performance to the OpenGL backend.
    The next step is to monitor bug reports and eventually make it the default backend for non-macOS systems.

    Almost Complete
    UV Sync
    All issues from theUV Sync design task have been addressed. Development is ongoing in the pr-uv-sync-select branch.
    The remaining work involves finalizing the port of certain selection operators and resolving minor issues. Follow the project at #136817.
    Better integration across node trees
    The compositor is moving closer to feature parity with shading and geometry nodes, thanks to the addition of new nodes such as Vector Math, Vector Rotate, Vector Mix, Value Mix, Clamp, Float Curve, and Blackbody.
    Compositor Assets Mockup.
    The next step is to expose most of the existing node options as socket inputs,. This will enable compositor node assets to be bundled with Blender.
    After that, the focus will remain on simplifying onboarding for new compositor users by making node trees reusable.

    In Progress
    Project Setup
    The first milestonewas recently merged. The next step is to handle Path Template errors in a more robust way.
    Project Setup Mockup.
    After that, work will begin on the Project Definition phase. Follow the project at #133001.
    Shape Keys Improvements
    What was originally framed as a performance problem has shifted focus toward usability and management of shape keys.
    As part of this, new operators for duplicating and updating shape keys have already been merged, with their mirrored counterparts to follow.
    Additionally, work on multi-select and editing of shape keys is gaining momentum. Follow the project at #136838.
    Remote Asset Libraries
    The project has broadened in scope to address usability improvements, including:

    Preview generation for all selected assets.
    A more compact view with a horizontal list and two-line names.
    Snapping for dragged collection assets.

    Remote Asset Library mockup.
    Meanwhile, the code for handling downloads has been submitted for review but encountered a setback.
    Development is taking place in the remote-asset-library-monolithic branch. Follow the project at #134495.
    Hair Dynamics
    The Hair Dynamics project consists of multiple deliverables:

    Embedded Linked Data— still under review
    Bundles and Closures — merged as experimental
    Declarative Systems — published as a design proposal
    Hair Solver — see below

    For the hairsolver, the plan is to use the upcoming Blender Studio project—currently unnamed but focused on a facial rig—as a use case, at least to develop it as an experimental feature.
    This will also involve addressing existing issues with animated hair and integrating animation and simulation for the same character—initially using separate hair objects.

    Design/Prototype
    Texture cache and mipmaps
    Initially unplanned due to limited resources, this project was eventually added to the agenda. A rudimentary prototype is already available in the cycles-tx branch. In the Attic and Bistro benchmark scenes, memory usage is already significantly reduced.
    These scenes were chosen because they include texture cachefiles. To learn how to test it, follow the project at #68917.
    NPR
    The NPRprototype received extensive feedback, helping to map out all planned and unsupported use cases for the project.
    More details, including the final design and development plans, will be shared soon.
    In brief:

    Some features will be implemented as EEVEE nodes.
    Others will be enabled via per-material/object compositing nodes.

    EEVEE features will be prioritized first, while the per-material/object compositing nodes require further design.
    So far, the focus has been on prototyping and finalizing the design to pull VSE strips out of the scene and create a dedicated sequence data-block.
    Story Tools Mockup.
    The next step is to finalize the design by either:

    Exploring once more the idea of keeping the sequence as part of the scene; or
    Settling on a per-camera and file settings design.

    After that, a technical breakdown will follow, then development. Follow the project at #131329.

    Not Started
    Layered Sculpting
    Layered sculpting hasn’t started yet. The original plan was to first address multi-resolution undo and rebuild issues, followed by fixing propagation spikes.
    However, in recent months the focus shifted to tackling sculpting performance issues present since the 4.3 release, mainly:

    Performance problems with smaller brush strokes.
    Local brush management.

    The performance patches are currently under review and expected in time for the upcoming 4.5 LTS release. Once completed, work on undo and multi-resolution will resume.
    Dynamic Overrides
    The team is busy with the 5.0 breaking change targets and other tasks and could not reserve the time to start the initial changes expected to simplify the overrides process.
    The team is currently focused on the 5.0 breaking change targets and other tasks, so they have not yet been able to start the initial changes aimed at simplifying the overrides process.

    And more…
    Beyond these projects, daily activity continues across various development modules. For a more frequent, day-to-day view of progress, check out the Weekly Updates and Module Meetings.
    All this progress is made possible thanks to donations and ongoing community involvement and contributions.

    Support the Future of Blender
    Donate to Blender by joining the Development Fund to support the Blender Foundation’s work on core development, maintenance, and new releases.

    ♥ Donate to Blender
    #projects #update #q22025
    Projects Update – Q2/2025
    Projects Update – Q2/2025 html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "; At the beginning of 2025, several projects were announced as the initial targets for the year. Now that we’re in the middle of the second quarter, let’s take a look at where each project stands. Complete Vulkan Vulkan is now officially supported in the upcoming 4.5 LTS release, offering feature parity and comparable performance to the OpenGL backend. The next step is to monitor bug reports and eventually make it the default backend for non-macOS systems. Almost Complete UV Sync All issues from theUV Sync design task have been addressed. Development is ongoing in the pr-uv-sync-select branch. The remaining work involves finalizing the port of certain selection operators and resolving minor issues. Follow the project at #136817. Better integration across node trees The compositor is moving closer to feature parity with shading and geometry nodes, thanks to the addition of new nodes such as Vector Math, Vector Rotate, Vector Mix, Value Mix, Clamp, Float Curve, and Blackbody. Compositor Assets Mockup. The next step is to expose most of the existing node options as socket inputs,. This will enable compositor node assets to be bundled with Blender. After that, the focus will remain on simplifying onboarding for new compositor users by making node trees reusable. In Progress Project Setup The first milestonewas recently merged. The next step is to handle Path Template errors in a more robust way. Project Setup Mockup. After that, work will begin on the Project Definition phase. Follow the project at #133001. Shape Keys Improvements What was originally framed as a performance problem has shifted focus toward usability and management of shape keys. As part of this, new operators for duplicating and updating shape keys have already been merged, with their mirrored counterparts to follow. Additionally, work on multi-select and editing of shape keys is gaining momentum. Follow the project at #136838. Remote Asset Libraries The project has broadened in scope to address usability improvements, including: Preview generation for all selected assets. A more compact view with a horizontal list and two-line names. Snapping for dragged collection assets. Remote Asset Library mockup. Meanwhile, the code for handling downloads has been submitted for review but encountered a setback. Development is taking place in the remote-asset-library-monolithic branch. Follow the project at #134495. Hair Dynamics The Hair Dynamics project consists of multiple deliverables: Embedded Linked Data— still under review Bundles and Closures — merged as experimental Declarative Systems — published as a design proposal Hair Solver — see below For the hairsolver, the plan is to use the upcoming Blender Studio project—currently unnamed but focused on a facial rig—as a use case, at least to develop it as an experimental feature. This will also involve addressing existing issues with animated hair and integrating animation and simulation for the same character—initially using separate hair objects. Design/Prototype Texture cache and mipmaps Initially unplanned due to limited resources, this project was eventually added to the agenda. A rudimentary prototype is already available in the cycles-tx branch. In the Attic and Bistro benchmark scenes, memory usage is already significantly reduced. These scenes were chosen because they include texture cachefiles. To learn how to test it, follow the project at #68917. NPR The NPRprototype received extensive feedback, helping to map out all planned and unsupported use cases for the project. More details, including the final design and development plans, will be shared soon. In brief: Some features will be implemented as EEVEE nodes. Others will be enabled via per-material/object compositing nodes. EEVEE features will be prioritized first, while the per-material/object compositing nodes require further design. So far, the focus has been on prototyping and finalizing the design to pull VSE strips out of the scene and create a dedicated sequence data-block. Story Tools Mockup. The next step is to finalize the design by either: Exploring once more the idea of keeping the sequence as part of the scene; or Settling on a per-camera and file settings design. After that, a technical breakdown will follow, then development. Follow the project at #131329. Not Started Layered Sculpting Layered sculpting hasn’t started yet. The original plan was to first address multi-resolution undo and rebuild issues, followed by fixing propagation spikes. However, in recent months the focus shifted to tackling sculpting performance issues present since the 4.3 release, mainly: Performance problems with smaller brush strokes. Local brush management. The performance patches are currently under review and expected in time for the upcoming 4.5 LTS release. Once completed, work on undo and multi-resolution will resume. Dynamic Overrides The team is busy with the 5.0 breaking change targets and other tasks and could not reserve the time to start the initial changes expected to simplify the overrides process. The team is currently focused on the 5.0 breaking change targets and other tasks, so they have not yet been able to start the initial changes aimed at simplifying the overrides process. And more… Beyond these projects, daily activity continues across various development modules. For a more frequent, day-to-day view of progress, check out the Weekly Updates and Module Meetings. All this progress is made possible thanks to donations and ongoing community involvement and contributions. Support the Future of Blender Donate to Blender by joining the Development Fund to support the Blender Foundation’s work on core development, maintenance, and new releases. ♥ Donate to Blender #projects #update #q22025
    CODE.BLENDER.ORG
    Projects Update – Q2/2025
    Projects Update – Q2/2025 html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd" At the beginning of 2025, several projects were announced as the initial targets for the year. Now that we’re in the middle of the second quarter, let’s take a look at where each project stands. Complete Vulkan Vulkan is now officially supported in the upcoming 4.5 LTS release, offering feature parity and comparable performance to the OpenGL backend. The next step is to monitor bug reports and eventually make it the default backend for non-macOS systems. Almost Complete UV Sync All issues from the (five-year-old!) UV Sync design task have been addressed. Development is ongoing in the pr-uv-sync-select branch (builds are available here). The remaining work involves finalizing the port of certain selection operators and resolving minor issues. Follow the project at #136817. Better integration across node trees The compositor is moving closer to feature parity with shading and geometry nodes, thanks to the addition of new nodes such as Vector Math, Vector Rotate, Vector Mix, Value Mix, Clamp, Float Curve, and Blackbody. Compositor Assets Mockup. The next step is to expose most of the existing node options as socket inputs, (#137223). This will enable compositor node assets to be bundled with Blender. After that, the focus will remain on simplifying onboarding for new compositor users by making node trees reusable (#135223). In Progress Project Setup The first milestone (Blender variables) was recently merged. The next step is to handle Path Template errors in a more robust way. Project Setup Mockup. After that, work will begin on the Project Definition phase. Follow the project at #133001. Shape Keys Improvements What was originally framed as a performance problem has shifted focus toward usability and management of shape keys. As part of this, new operators for duplicating and updating shape keys have already been merged, with their mirrored counterparts to follow. Additionally, work on multi-select and editing of shape keys is gaining momentum. Follow the project at #136838. Remote Asset Libraries The project has broadened in scope to address usability improvements, including: Preview generation for all selected assets. A more compact view with a horizontal list and two-line names. Snapping for dragged collection assets. Remote Asset Library mockup. Meanwhile, the code for handling downloads has been submitted for review but encountered a setback. Development is taking place in the remote-asset-library-monolithic branch. Follow the project at #134495. Hair Dynamics The Hair Dynamics project consists of multiple deliverables: Embedded Linked Data (#133801) — still under review Bundles and Closures — merged as experimental Declarative Systems — published as a design proposal Hair Solver — see below For the hair (physics) solver, the plan is to use the upcoming Blender Studio project—currently unnamed but focused on a facial rig—as a use case, at least to develop it as an experimental feature. This will also involve addressing existing issues with animated hair and integrating animation and simulation for the same character—initially using separate hair objects. Design/Prototype Texture cache and mipmaps Initially unplanned due to limited resources, this project was eventually added to the agenda. A rudimentary prototype is already available in the cycles-tx branch. In the Attic and Bistro benchmark scenes, memory usage is already significantly reduced. These scenes were chosen because they include texture cache (.tx) files. To learn how to test it, follow the project at #68917. NPR The NPR (Non-Photo Realism) prototype received extensive feedback, helping to map out all planned and unsupported use cases for the project. More details, including the final design and development plans, will be shared soon. In brief: Some features will be implemented as EEVEE nodes. Others will be enabled via per-material/object compositing nodes. EEVEE features will be prioritized first, while the per-material/object compositing nodes require further design. So far, the focus has been on prototyping and finalizing the design to pull VSE strips out of the scene and create a dedicated sequence data-block. Story Tools Mockup. The next step is to finalize the design by either: Exploring once more the idea of keeping the sequence as part of the scene; or Settling on a per-camera and file settings design. After that, a technical breakdown will follow, then development. Follow the project at #131329. Not Started Layered Sculpting Layered sculpting hasn’t started yet. The original plan was to first address multi-resolution undo and rebuild issues, followed by fixing propagation spikes. However, in recent months the focus shifted to tackling sculpting performance issues present since the 4.3 release, mainly: Performance problems with smaller brush strokes. Local brush management. The performance patches are currently under review and expected in time for the upcoming 4.5 LTS release. Once completed, work on undo and multi-resolution will resume. Dynamic Overrides The team is busy with the 5.0 breaking change targets and other tasks and could not reserve the time to start the initial changes expected to simplify the overrides process. The team is currently focused on the 5.0 breaking change targets and other tasks, so they have not yet been able to start the initial changes aimed at simplifying the overrides process. And more… Beyond these projects, daily activity continues across various development modules. For a more frequent, day-to-day view of progress, check out the Weekly Updates and Module Meetings. All this progress is made possible thanks to donations and ongoing community involvement and contributions. Support the Future of Blender Donate to Blender by joining the Development Fund to support the Blender Foundation’s work on core development, maintenance, and new releases. ♥ Donate to Blender
    0 Σχόλια 0 Μοιράστηκε
  • #333;">I took my 81-year-old grandma on an international trip. It was great, but I wish I'd known more about traveling with an older relative.


    Looking back, there are a few mistakes I made while traveling internationally with my grandma.
    Emily Schlorf

    2025-05-13T14:12:01Z


    Save
    Saved

    Read in app


    This story is available exclusively to Business Insider
    subscribers.
    Become an Insider
    and start reading now.
    Have an account?
    In summer 2024, I traveled with my grandma, mom, and sister to Montreal.
    I wish I'd thought more about my grandma's physical needs when planning the itinerary.
    It would've been nice to have more downtime in our schedule, too.
    Despite living 1,800 miles apart, my 81-year-old grandma and I have always been close.
    We share a love for "Downton Abbey," cross-stitch, and strong coffee, and I couldn't imagine spending weeks in the summer anywhere but her sunny kitchen table in central Minnesota.Of course, I'd be naive to assume my time with her is unlimited.
    That's one reason my grandma, mom, sister, and I decided to embark on a trip to Montreal together last summer.Although I'm grateful we were able to take this trip, it could have gone a lot smoother had I known these three things about traveling with an older relative.
    The itinerary should have reflected everyone's physical needs, not just my own
    I should've considered how long it would take my grandma to get to excursions like our afternoon tea.



    Emily Schlorf


    I'm the most frequent traveler in my family, so I took on all the planning myself and approached the task the same way I do for solo travel: leaving no stone unturned.I thought my grandma would be well-prepared for the long days, given that she walks 3 miles a day and eats a far more balanced diet than I do.What I failed to consider, though, was how difficult it would be for her to walk on the uneven cobblestone streets.
    On our first day in the city, we nearly missed an afternoon tea reservation since I didn't factor in the slower pace we'd have to take to accommodate my grandma's careful steps.I also didn't realize just how exhausting a full-day Three Pines tour would be.
    Although fantastic — with stops at a monastery, local museum, and five-star resort for lunch — our visit to the villages that inspired the fictional location of my grandma's favorite mystery series was nine hours long.
    My family and I went on a nine-hour tour of Three Pines.



    Emily Schlorf


    As the day progressed, we took turns snoozing in the back seat of our tour guide's van.
    Upon arriving back at the bed and breakfast, my grandma exclaimed how long of a day it was; and I didn't disagree.Similarly, I didn't consider my grandma's physical limitations when choosing restaurants.
    Although they weren't lacking in ambiance — picture patios swallowed in bougainvillea and cool, brutalist interiors overlooking Lake Saint Louis — the dim lighting and small font sizes made it challenging for her to read the menu.My mom, sister, and I mitigated my grandma's vision issues by taking turns reading the menu aloud, line by line, but that got old fast.In retrospect, I wish I'd shown up equipped with solutions, such as finding the menu online so she could zoom in on my phone or reminding her to bring her readers, to improve everyone's dining experience.
    A long trip means extended time away from routinesEveryone gets to a point on vacation when they're ready to return home, but I would argue that the feeling is stronger for older adults like my grandma, who travel once or twice a year and may be used to a strict daily routine.Although my grandma never expressed this feeling to me outright, I noticed as the days went on, she became less game for her granddaughters' plans.For example, on our last evening, my sister and I wanted to check out the shops lining Saint-Laurent Boulevard, but my grandma preferred to have takeout in the hotel.We compromised, and my sister and I walked to the boulevard to pick up dinner, but we ditched our shopping plan since we felt bad keeping my mom and grandma waiting.I wish we'd had more downtime together
    One of my favorite memories from the trip was when we spontaneously visited a speakeasy.



    Emily Schlorf


    Instead of jam-packing every day with new experiences, I wish I'd taken my foot off the gas as the trip progressed — for my grandma's sake as well as my own.As we reached days five and six of the trip, my excitement for the activities I planned dwindled, and I found myself wishing I hadn't planned them at all.Besides, the memories I cherish most from the trip weren't the museums or guided tours, they were the unplanned ones: a shared bottle of wine with our bed and breakfast hosts, a visit to an outdoor antique market, and a nightcap at a speakeasy.Despite the challenges, I'd love to travel with my grandma again
    I would love to go on another trip with my grandma.



    Emily Schlorf


    To anyone contemplating a multigenerational trip, I say do it, but be more considerate than I was.
    Take time to plan the trip together, think of everyone's needs, and be content with slowing down.Strolling through the city hand-in-hand with my grandma, I learned that it's OK to leave some stones unturned, because the real joy comes from who you're turning them with.
    Recommended video

    #666;">المصدر: https://www.businessinsider.com/first-time-international-travel-older-family-member-mistakes-lessons-2025-5" style="color: #0066cc; text-decoration: none;">www.businessinsider.com
    #0066cc;">#took #81yearold #grandma #international #trip #was #great #but #wish #i039d #known #more #about #traveling #with #older #relative #looking #back #there #are #few #mistakes #made #while #internationally #emily #schlorf #20250513t141201z #savesaved #read #app #this #story #available #exclusively #business #insider #subscribersbecome #and #start #reading #nowhave #account #summer #traveled #mom #sister #montreali #thought #grandma039s #physical #needs #when #planning #the #itineraryit #would039ve #been #nice #have #downtime #our #schedule #toodespite #living #miles #apart #always #closewe #share #love #for #quotdownton #abbeyquot #crossstitch #strong #coffee #couldn039t #imagine #spending #weeks #anywhere #her #sunny #kitchen #table #central #minnesotaof #course #naive #assume #time #unlimitedthat039s #one #reason #decided #embark #montreal #together #last #summeralthough #i039m #grateful #were #able #take #could #gone #lot #smoother #had #these #three #things #relativethe #itinerary #should #reflected #everyone039s #not #just #own #should039ve #considered #how #long #would #get #excursions #like #afternoon #tea #most #frequent #traveler #family #all #myself #approached #task #same #way #solo #travel #leaving #stone #unturnedi #wellprepared #days #given #that #she #walks #day #eats #far #balanced #diet #than #dowhat #failed #consider #though #difficult #walk #uneven #cobblestone #streetson #first #city #nearly #missed #reservation #since #didn039t #factor #slower #pace #we039d #accommodate #careful #stepsi #also #realize #exhausting #fullday #pines #tour #bealthough #fantastic #stops #monastery #local #museum #fivestar #resort #lunch #visit #villages #inspired #fictional #location #favorite #mystery #series #nine #hours #went #ninehour #progressed #turns #snoozing #seat #guide039s #vanupon #arriving #bed #breakfast #exclaimed #disagreesimilarly #limitations #choosing #restaurantsalthough #they #weren039t #lacking #ambiance #picture #patios #swallowed #bougainvillea #cool #brutalist #interiors #overlooking #lake #saint #louis #dim #lighting #small #font #sizes #challenging #menumy #mitigated #vision #issues #taking #menu #aloud #line #got #old #fastin #retrospect #shown #equipped #solutions #such #finding #online #zoom #phone #reminding #bring #readers #improve #dining #experiencea #means #extended #away #from #routineseveryone #gets #point #vacation #they039re #ready #return #home #argue #feeling #stronger #adults #who #once #twice #year #may #used #strict #daily #routinealthough #never #expressed #outright #noticed #became #less #game #granddaughters039 #plansfor #example #evening #wanted #check #out #shops #lining #saintlaurent #boulevard #preferred #takeout #hotelwe #compromised #walked #pick #dinner #ditched #shopping #plan #felt #bad #keeping #waitingi #memories #spontaneously #visited #speakeasy #instead #jampacking #every #new #experiences #taken #foot #off #gas #sake #well #ownas #reached #five #six #excitement #activities #planned #dwindled #found #wishing #hadn039t #them #allbesides #cherish #museums #guided #tours #unplanned #ones #shared #bottle #wine #hosts #outdoor #antique #market #nightcap #speakeasydespite #challenges #again #another #anyone #contemplating #multigenerational #say #considerate #wastake #think #content #slowing #downstrolling #through #handinhand #learned #it039s #leave #some #stones #unturned #because #real #joy #comes #you039re #turning #withrecommended #video
    I took my 81-year-old grandma on an international trip. It was great, but I wish I'd known more about traveling with an older relative.
    Looking back, there are a few mistakes I made while traveling internationally with my grandma. Emily Schlorf 2025-05-13T14:12:01Z Save Saved Read in app This story is available exclusively to Business Insider subscribers. Become an Insider and start reading now. Have an account? In summer 2024, I traveled with my grandma, mom, and sister to Montreal. I wish I'd thought more about my grandma's physical needs when planning the itinerary. It would've been nice to have more downtime in our schedule, too. Despite living 1,800 miles apart, my 81-year-old grandma and I have always been close. We share a love for "Downton Abbey," cross-stitch, and strong coffee, and I couldn't imagine spending weeks in the summer anywhere but her sunny kitchen table in central Minnesota.Of course, I'd be naive to assume my time with her is unlimited. That's one reason my grandma, mom, sister, and I decided to embark on a trip to Montreal together last summer.Although I'm grateful we were able to take this trip, it could have gone a lot smoother had I known these three things about traveling with an older relative. The itinerary should have reflected everyone's physical needs, not just my own I should've considered how long it would take my grandma to get to excursions like our afternoon tea. Emily Schlorf I'm the most frequent traveler in my family, so I took on all the planning myself and approached the task the same way I do for solo travel: leaving no stone unturned.I thought my grandma would be well-prepared for the long days, given that she walks 3 miles a day and eats a far more balanced diet than I do.What I failed to consider, though, was how difficult it would be for her to walk on the uneven cobblestone streets. On our first day in the city, we nearly missed an afternoon tea reservation since I didn't factor in the slower pace we'd have to take to accommodate my grandma's careful steps.I also didn't realize just how exhausting a full-day Three Pines tour would be. Although fantastic — with stops at a monastery, local museum, and five-star resort for lunch — our visit to the villages that inspired the fictional location of my grandma's favorite mystery series was nine hours long. My family and I went on a nine-hour tour of Three Pines. Emily Schlorf As the day progressed, we took turns snoozing in the back seat of our tour guide's van. Upon arriving back at the bed and breakfast, my grandma exclaimed how long of a day it was; and I didn't disagree.Similarly, I didn't consider my grandma's physical limitations when choosing restaurants. Although they weren't lacking in ambiance — picture patios swallowed in bougainvillea and cool, brutalist interiors overlooking Lake Saint Louis — the dim lighting and small font sizes made it challenging for her to read the menu.My mom, sister, and I mitigated my grandma's vision issues by taking turns reading the menu aloud, line by line, but that got old fast.In retrospect, I wish I'd shown up equipped with solutions, such as finding the menu online so she could zoom in on my phone or reminding her to bring her readers, to improve everyone's dining experience. A long trip means extended time away from routinesEveryone gets to a point on vacation when they're ready to return home, but I would argue that the feeling is stronger for older adults like my grandma, who travel once or twice a year and may be used to a strict daily routine.Although my grandma never expressed this feeling to me outright, I noticed as the days went on, she became less game for her granddaughters' plans.For example, on our last evening, my sister and I wanted to check out the shops lining Saint-Laurent Boulevard, but my grandma preferred to have takeout in the hotel.We compromised, and my sister and I walked to the boulevard to pick up dinner, but we ditched our shopping plan since we felt bad keeping my mom and grandma waiting.I wish we'd had more downtime together One of my favorite memories from the trip was when we spontaneously visited a speakeasy. Emily Schlorf Instead of jam-packing every day with new experiences, I wish I'd taken my foot off the gas as the trip progressed — for my grandma's sake as well as my own.As we reached days five and six of the trip, my excitement for the activities I planned dwindled, and I found myself wishing I hadn't planned them at all.Besides, the memories I cherish most from the trip weren't the museums or guided tours, they were the unplanned ones: a shared bottle of wine with our bed and breakfast hosts, a visit to an outdoor antique market, and a nightcap at a speakeasy.Despite the challenges, I'd love to travel with my grandma again I would love to go on another trip with my grandma. Emily Schlorf To anyone contemplating a multigenerational trip, I say do it, but be more considerate than I was. Take time to plan the trip together, think of everyone's needs, and be content with slowing down.Strolling through the city hand-in-hand with my grandma, I learned that it's OK to leave some stones unturned, because the real joy comes from who you're turning them with. Recommended video
    #took #81yearold #grandma #international #trip #was #great #but #wish #i039d #known #more #about #traveling #with #older #relative #looking #back #there #are #few #mistakes #made #while #internationally #emily #schlorf #20250513t141201z #savesaved #read #app #this #story #available #exclusively #business #insider #subscribersbecome #and #start #reading #nowhave #account #summer #traveled #mom #sister #montreali #thought #grandma039s #physical #needs #when #planning #the #itineraryit #would039ve #been #nice #have #downtime #our #schedule #toodespite #living #miles #apart #always #closewe #share #love #for #quotdownton #abbeyquot #crossstitch #strong #coffee #couldn039t #imagine #spending #weeks #anywhere #her #sunny #kitchen #table #central #minnesotaof #course #naive #assume #time #unlimitedthat039s #one #reason #decided #embark #montreal #together #last #summeralthough #i039m #grateful #were #able #take #could #gone #lot #smoother #had #these #three #things #relativethe #itinerary #should #reflected #everyone039s #not #just #own #should039ve #considered #how #long #would #get #excursions #like #afternoon #tea #most #frequent #traveler #family #all #myself #approached #task #same #way #solo #travel #leaving #stone #unturnedi #wellprepared #days #given #that #she #walks #day #eats #far #balanced #diet #than #dowhat #failed #consider #though #difficult #walk #uneven #cobblestone #streetson #first #city #nearly #missed #reservation #since #didn039t #factor #slower #pace #we039d #accommodate #careful #stepsi #also #realize #exhausting #fullday #pines #tour #bealthough #fantastic #stops #monastery #local #museum #fivestar #resort #lunch #visit #villages #inspired #fictional #location #favorite #mystery #series #nine #hours #went #ninehour #progressed #turns #snoozing #seat #guide039s #vanupon #arriving #bed #breakfast #exclaimed #disagreesimilarly #limitations #choosing #restaurantsalthough #they #weren039t #lacking #ambiance #picture #patios #swallowed #bougainvillea #cool #brutalist #interiors #overlooking #lake #saint #louis #dim #lighting #small #font #sizes #challenging #menumy #mitigated #vision #issues #taking #menu #aloud #line #got #old #fastin #retrospect #shown #equipped #solutions #such #finding #online #zoom #phone #reminding #bring #readers #improve #dining #experiencea #means #extended #away #from #routineseveryone #gets #point #vacation #they039re #ready #return #home #argue #feeling #stronger #adults #who #once #twice #year #may #used #strict #daily #routinealthough #never #expressed #outright #noticed #became #less #game #granddaughters039 #plansfor #example #evening #wanted #check #out #shops #lining #saintlaurent #boulevard #preferred #takeout #hotelwe #compromised #walked #pick #dinner #ditched #shopping #plan #felt #bad #keeping #waitingi #memories #spontaneously #visited #speakeasy #instead #jampacking #every #new #experiences #taken #foot #off #gas #sake #well #ownas #reached #five #six #excitement #activities #planned #dwindled #found #wishing #hadn039t #them #allbesides #cherish #museums #guided #tours #unplanned #ones #shared #bottle #wine #hosts #outdoor #antique #market #nightcap #speakeasydespite #challenges #again #another #anyone #contemplating #multigenerational #say #considerate #wastake #think #content #slowing #downstrolling #through #handinhand #learned #it039s #leave #some #stones #unturned #because #real #joy #comes #you039re #turning #withrecommended #video
    WWW.BUSINESSINSIDER.COM
    I took my 81-year-old grandma on an international trip. It was great, but I wish I'd known more about traveling with an older relative.
    Looking back, there are a few mistakes I made while traveling internationally with my grandma. Emily Schlorf 2025-05-13T14:12:01Z Save Saved Read in app This story is available exclusively to Business Insider subscribers. Become an Insider and start reading now. Have an account? In summer 2024, I traveled with my grandma, mom, and sister to Montreal. I wish I'd thought more about my grandma's physical needs when planning the itinerary. It would've been nice to have more downtime in our schedule, too. Despite living 1,800 miles apart, my 81-year-old grandma and I have always been close. We share a love for "Downton Abbey," cross-stitch, and strong coffee, and I couldn't imagine spending weeks in the summer anywhere but her sunny kitchen table in central Minnesota.Of course, I'd be naive to assume my time with her is unlimited. That's one reason my grandma, mom, sister, and I decided to embark on a trip to Montreal together last summer.Although I'm grateful we were able to take this trip, it could have gone a lot smoother had I known these three things about traveling with an older relative. The itinerary should have reflected everyone's physical needs, not just my own I should've considered how long it would take my grandma to get to excursions like our afternoon tea. Emily Schlorf I'm the most frequent traveler in my family, so I took on all the planning myself and approached the task the same way I do for solo travel: leaving no stone unturned.I thought my grandma would be well-prepared for the long days, given that she walks 3 miles a day and eats a far more balanced diet than I do.What I failed to consider, though, was how difficult it would be for her to walk on the uneven cobblestone streets. On our first day in the city, we nearly missed an afternoon tea reservation since I didn't factor in the slower pace we'd have to take to accommodate my grandma's careful steps.I also didn't realize just how exhausting a full-day Three Pines tour would be. Although fantastic — with stops at a monastery, local museum, and five-star resort for lunch — our visit to the villages that inspired the fictional location of my grandma's favorite mystery series was nine hours long. My family and I went on a nine-hour tour of Three Pines. Emily Schlorf As the day progressed, we took turns snoozing in the back seat of our tour guide's van. Upon arriving back at the bed and breakfast, my grandma exclaimed how long of a day it was; and I didn't disagree.Similarly, I didn't consider my grandma's physical limitations when choosing restaurants. Although they weren't lacking in ambiance — picture patios swallowed in bougainvillea and cool, brutalist interiors overlooking Lake Saint Louis — the dim lighting and small font sizes made it challenging for her to read the menu.My mom, sister, and I mitigated my grandma's vision issues by taking turns reading the menu aloud, line by line, but that got old fast.In retrospect, I wish I'd shown up equipped with solutions, such as finding the menu online so she could zoom in on my phone or reminding her to bring her readers, to improve everyone's dining experience. A long trip means extended time away from routinesEveryone gets to a point on vacation when they're ready to return home, but I would argue that the feeling is stronger for older adults like my grandma, who travel once or twice a year and may be used to a strict daily routine.Although my grandma never expressed this feeling to me outright, I noticed as the days went on, she became less game for her granddaughters' plans.For example, on our last evening, my sister and I wanted to check out the shops lining Saint-Laurent Boulevard, but my grandma preferred to have takeout in the hotel.We compromised, and my sister and I walked to the boulevard to pick up dinner, but we ditched our shopping plan since we felt bad keeping my mom and grandma waiting.I wish we'd had more downtime together One of my favorite memories from the trip was when we spontaneously visited a speakeasy. Emily Schlorf Instead of jam-packing every day with new experiences, I wish I'd taken my foot off the gas as the trip progressed — for my grandma's sake as well as my own.As we reached days five and six of the trip, my excitement for the activities I planned dwindled, and I found myself wishing I hadn't planned them at all.Besides, the memories I cherish most from the trip weren't the museums or guided tours, they were the unplanned ones: a shared bottle of wine with our bed and breakfast hosts, a visit to an outdoor antique market, and a nightcap at a speakeasy.Despite the challenges, I'd love to travel with my grandma again I would love to go on another trip with my grandma. Emily Schlorf To anyone contemplating a multigenerational trip, I say do it, but be more considerate than I was. Take time to plan the trip together, think of everyone's needs, and be content with slowing down.Strolling through the city hand-in-hand with my grandma, I learned that it's OK to leave some stones unturned, because the real joy comes from who you're turning them with. Recommended video
    0 Σχόλια 0 Μοιράστηκε
  • GPU Architecture & Working intuitively explained


    Author(s): Allohvk

    Originally published on Towards AI.

    GPU Origins
    The image displayed on a computer screen is made up of millions of tiny pixels. In early days, “graphics controllers” were given instructions by the CPU on how to calculate the individual pixel values so that the appropriate image could be displayed. These were ok for conventional displays but for a really good gaming experience, images need to be built dozens of times per second. The CPU was not really designed to handle these kind of loads.
    The whole process of creating the image could be parallelized big-time simply by (a) dividing the image into smaller blocks (b) carrying out computations for each block in parallel & (c) grouping them back again. The results of one block don’t influence the results of the other blocks. CPU’s multi-threading capabilities was not really conceived for such massive parallelization. Enter the GPU! Sony first used the term GPU in 1994, in its PlayStation consoles. The technology was perfected by NVIDIA which soon became a leader.
    GPUs have numerous computation cores (much more than a CPU) and gaming programmers could write Shaders — programs to run graphics computations on the GPU in a massively parallelized way to create the screen images in super-fast time. The GPU is inspired by the CPU but was specifically designed to enable massive multi-threaded operations on its numerous computation cores seamlessly. Creating threads, switching between threads etc is much faster on a GPU. Some smart developers also realized that these parallel processing capabilities could be used for other computationally intensive tasks as well!

    2005: Steinkrau implements a simple 2-layer Neural Net on a GPU
    2006: Kumar et. al. trains a CNN model for document processing
    2007: NVIDIA released Compute Unified Device Architecture (CUDA) — a custom language extending C to exploit data parallelism on GPUs. Now developers had much more granular control over the image rendering.
    2008 a landmark paper by Raina et al was released. This paper pretty much showed everyone how to train deep layers on a GPU
    2014: NVIDIA released CuDNN — a dedicated CUDA library for Deep Learning. Very soon PyTorch, TensorFlow etc incorporated CuDNN, setting the stage for modern GPU usage for AI!

    A GPU is an ASIC or Application-Specific Integrated Circuit having a processor (hosting numerous computational cores), a memory soldered onto it (we want to avoid going to the CPU RAM for everything), a cooling system (well, they heat up pretty fast) and a BIOS chip (same role as a CPU — to store settings, run startup diagnostics etc). This card is then plugged into the motherboard slot using the PCI Express interface. The terms GPU and graphics card are often used interchangeably. Some GPUs like the one in Apple M3 do not have a dedicated memory but instead use the system RAM itself which is possible due to its unique design. Google has the TPU (Tensor Processing Unit) which is its own ASIC. We discuss the GPU memory, the processing cores, the LLM workflows happening inside them & common topologies for clustering.
    Photo by Thomas Foster on Unsplash
    1. GPU Memory module — The VRAM
    Instead of having the GPU talk to the regular RAM, it made sense to create another RAM physically closer to the GPU die so that data retrieval is faster. So a graphics card has a memory called VRAM — Video Random Access Memory in addition to the computation engines . VRAM is connected to the computation engine cores via a Bus called the memory interface.
    1.1 What is DRAM?
    Let us talk first of RAM technology in general. All memory whether it is the CPU RAM or the GPU VRAM are mostly based on DRAM technology which consists of a capacitor and a transistor. The capacitor’s charge represents the data stored. Due to its very nature, this charge gradually leaks. To prevent data loss, a refresh circuit periodically rewrites the data back, restoring its charge. Hence the name — Dynamic RAM due to these preiodic refreshes.
    Most computers use Synchronous DDR5 DRAM’s as their CPU RAMs. Synchronous because it utilizes the system clock for better performance. In other words the action (of retrieving & storing data) is operationally coordinated by an external clock signal. Tying the operations to the clock makes it faster. The processor knows the exact timing & number of cycles in which the data will be available from the RAM to the bus & can plan better. We have DDR1 (1st Gen Double Data Rate Synchronous Dynamic RAM released in 2000) to DDR5 which is the choice of CPU RAM as of today.
    1.2 What is SGRAM?
    Let us now talk about the VRAMs in GPUs. The VRAM is a type of SGRAM — Synchronous Graphics RAM. The current generation of VRAMs being used is GDDR6. Yes, this is 6th generation GDDR, the G standing for “Graphics”. While DDR & GDDR share common origins and early couple of generations were similar, the branches separated after DDR3. So as of 2025, DDR5 rules in CPU RAM and GDDR6 rules for consumer-grade GPU RAMs.
    Conceptually DDRs and GDDRs are similar but note that DDRs are used by CPUs which need low latency whereas GDDRs are used by GPUs which are OK to compromise latency for extremely high throughput. Crudely, the former has more frequent smaller calculations & the latter deals with much higher volume of data & some delays are forgiven considering the vast volumes of data being processed. Even more crudely, the former is a bullet train with 6–8 coaches while the latter a 3 Kilometre long goods train.
    1.3 GDDR VRAMs explained in detail
    GDDR memory are individual chips soldered to the PCB (Printed Circuit Board) very close to the GPU die. The physical proximity improves the speed of data transfer from the VRAM to the GPU processor. There are pins in a GDDR which can be thought of as individual wires that connect it to the processor. Bus width is literally the number of such connections. GDDR6 has 32 pins spread across 2 channels with roughly 16 Gbits.p.s bandwidth per pin. Bandwidth is total amount of data being moved & if you had one single metric at your disposal to take a decision, it would be this. Before we go further, let us try to understand this metric intuitively.
    1.4 Calculating GPU Memory Bandwidth intuitively
    Memory Bandwidth is the max rate at which data can be transferred between the GPU and the VRAM. We discussed that data transmission is synchronized with the clock. The clock cycle is measured in hertz & represents the number of cycles per second. Let us say we have a clock operating at 1000 MHz. This literally means 1 billion clock ticks per second. How long does a tick last? Literally 1/(1 billion) i.e. 1 nano second. Data is sent to and fro every clock cycle. So every nano-second, a bus-full of data is sent from the VRAM to the processor & vice versa.
    How many seats on the bus? Well, we discussed this earlier… This is the memory interface or the bus width… literally the physical count of bits that fit into the bus. A 128-bit bus would ferry 128 bits every nano-second. The D in G’D’DR6 stands for Double. Basically, data is transmitted on both the rising and falling edges of the clock cycle, so 256 bits every nano-second. How many bytes in 1 sec? 256/8 i.e. 32 billion bytes per second or better still 32 GB/s as Giga is the preferred term when measuring data. The capital B denotes bytes whereas the small b denotes bits… a source of confusion.
    A more practical formula is: Bandwidth = Clock * Bus Width x Data Rate, where the Data Rate is the number of data transfers per cycle. GDDR6 is Double Data Rate (as just discussed) and Quad pumped, which quadruples the (doubled) speed. So effectively the Data Rate is 8. Sometimes, you may encounter the same information crouched in different semantics. E.g., if frequency of command clock (CK#) is N, then the write command clock (WK#) is 2N. GDDR6 rates then are QDR (quad data rate) in reference to WK# and ODR (Octal Data Rate) in reference to the CK#.
    Some OEMs multiply the clock speed & data rate & call it a clock rate or something. In that case, the bandwidth is simply that number multiplied by the bus width. In general, this raw formula can be used: num_of_transfers per second * num_of_bits per transfer / 8. “Boost clock” mechanism allows the GPU and GDDR memory to operate at even higher speeds than the default clock when conditions allow it. Boost clock metric refers to the max such operating clock speed. A 1750 MHz clock means:

    1.75GHz is the frequency of command clock(CK#).
    The frequency of the write clock (WK#) is 3.5GHz due to the G”D”DR
    The Quad pumping takes it to 3.5*4=14 G bits moved in 1 second from each pin on the bus.
    We could have bus widths of up to 384 bits! So we get a bandwidth of 14*384 Giga bits per second.
    Divide by 8 to get 672 GB/s. GDDR6 bandwidth can go upto 1 TB/s. Wow!

    1.5 What is HBM VRAM in a GPU?
    When reading or writing data, contention is created when the VRAM has occupied memory channels & is busy receiving or delivering other data. This contention creates latency & this affects bandwidth. Increasing the number of memory channels is a great option. A type of memory called HBM (High-Bandwidth Memory) has lower access latency than GDDR6, since it has 8-memory channels versus 2 channels in GDDR6. HBM also has a wider bus.
    HBM has 1024 pins spread across 8 channels of 128 pins with roughly 2 Gbits.p.s bandwidth per pin. Compare this with (an equivalent) GDDR which has 32 pins spread across 2 channels with roughly 16 Gbits. p.s bandwidth per pin. Notice how HBM keeps the Gbit/sec per pin much lower than GDDR. This saves power (which is important as we shall see). In spite of this, it has higher bandwidth than GDDR6 due to the wider bus & higher channels.
    As we discussed, a pin is literally a wire connecting the VRAM to the processor. Having 1024 wires connected from the processor to the VRAM is not possible on a standard PCB. Therefore, an “interposer” is used as an
    intermediary to connect the VRAM & the processor. Just like a regular IC, wires (connections) are etched in this silicon “interposer” in the desired quantity. After this, the HBM device(s) & the processor are mounted atop this “interposer”. The slightly twisted workaround is called a 2.5D architecture.Another difference is that while GDDR chips are soldered to the PCB surrounding the GPU die, an HBM structure is a vertical stack of DRAMs like a high rise building. The stacked memory dies are linked using microscopic wires with TSV (Through-Silicon Vias) which are vertical electrical connections giving super fast connectivity between the DRAMs. There are huge challenges to stacking items vertically especially around designing heat sinks & managing thermal safety but somehow HBM manufacturers have made this happen.
    HBM has become a gold standard today for AI data centers. It was introduced to the Market by SK Hynix in 2013. Today, we have the 3rd generation HBM3 and their main client is Nvidia. Due to investments made way back, SK Hynix is leading the pack along with Samsung and a relatively recent entrant named Micron. We hear a lot about chips and TSMC but HBM is a key technology to watch out for in the coming years. We typically have more than one HBM devices inside the GPU die.
    GDDR6 co-exists with HBM3. The markets are complementary. The former addresses PCs & other consumer GPUs whereas the latter addresses data center GPUs. Ultra large scale AI deployments like ChatGPT likely leverage the use of a cluster of NVIDIA GPUs working in tandem. Connecting such GPU’s involves the use of NVIDIA NVLink technology which requires fast GPU memory bandwidth speeds and it’s the reason why HBM is prevalent in such systems. If not for the wide bus width and fast data transfer rates offered by HBM, these kind of clusters would be very difficult to design.
    Besides the VRAM, GPUs also include high-speed memory caches that are even closer to the GPU’s processing cores. There is a physical limit to the sizes of these caches. An L1 cache is usually in KB and an L2 cache is usually a few MB. Different hardware & software strategies exist to keep the most useful, and most reused data present in caches.
    2. Cooling Mechanisms in a GPU
    Higher clock speeds generally result in increased heat generation necessitating the need for cooling solutions to maintain optimal operating temperatures. Usual cooling methods are:

    Passive Cooling: These do not have any powered moving components. They take advantage of optimized airflow to take heat away.
    Fans are used to dissipate heat by blowing cool air across the heat sinks, which are metal components designed to absorb & disperse heat
    In water cooling, water is circulated through the GPU surface using pipes & a radiator. The hot liquid running through the pipes is in turn cooled down by the radiator fan.
    Hybrid cooling — which uses a combination of the above

    3. GPU Computation cores — Processors
    Let us now talk about the processors on the GPU. Unlike CPUs which contain only a few cores, the GPU literally has 1000’s of cores & specializes in running tasks in parallel across these cores using SIMD (Single Instruction, Multiple Data) units. Let us stick to NVIDIA terminology. There are multiple processing units called Streaming Multiprocessor (SM) on a NVIDIA GPU. For e.g. an H100 has upto 144 SMs. What is inside an SM? Well there are mainly 2 type of execution units — CUDA cores & Tensor cores. There is also a small memory SRAM which is Shared between all threads running in that SM. More specifically, every SM has a few KB memory that is partitioned between L1 cache & Shared Memory usage.
    3.1 CUDA core versus Tensor core in a GPU — The difference
    Tensor cores are a pretty recent innovation (from V100 onwards) and are specifically designed for faster matrix multiplication. Let us discuss CUDA cores first. These are the computation engines for regular math operations. Each CUDA core can execute one operation per clock cycle. But their strength lies in parallel processing. Many CUDA cores working together can accelerate computation by executing processes in parallel.
    Tensor Cores are specialized hardware units designed to accelerate “mixed precision” training. The earliest version allowed 4×4 FP16 matrices to be multiplied & added to an FP32 output matrix. By using lower-precision FP16 inputs in the computations, the calculations are vastly accelarated & by retaining FP32 outputs for the rest of the procedure, accuracy is not compromised too much. Modern tensor cores use even lower precision formats in DL computations. See this for more details. There may also specialized units like the transformer engine designed to accelerate models built with the Transformer blocks. A single GPU can be partitioned into multiple fully contained and isolated instances, with their own memory, cache & cores via MIG or Multi Instance GPU technology.
    3.2 GPU operations — A FLOP show
    Let us now talk about actual operations. A FLOP (Floating Point Operation) is a single floating-point calculation like an addition. Performance of a GPU is usually measured in TeraFLOP/s. Tera is a trillion, FLOP stands for floating-point operations and the ‘s’ stands for per second.
    Most matrix ops involve a multiply and an add. It makes sense to fuse these ops together to get an Fused Multiply-Add (FMA) op. If we know the FMA speed, we can simply double it to get the FLOP counts per clock. To get the peak FLOP/s rate, we multiply this by the clock rate & the number of SMs. Note that we have FP16, FP32, FP64 & Int8 cores with varying speeds. For e.g.:

    Say there are 4 tensor cores in each SM & 114 SMs in an H100
    Say each tensor core delivers 512 FP16 FMA ops per clock. Careful here: Read the specs clearly to check whether the FMA ops per clock metric is per SM or per individual core. For e.g., this link of A100 is per coreper SM
    Let the Clock speed = 1620 MHz
    So TFLOP/s = 1620 * (2*512) * 4 * 114= 756 TFLOP/s of performance! 756 Trillion operations per second. Wow! What would Babbage say to that?

    4. Putting everything together — LLM Operations in a GPU
    Given this immense compute-power, we can now make a reasonable guess that LLM inference is memory-I0 bound, not compute bound. In other words, it takes more time to load data to the GPU’s compute cores than it does for those cores to perform LLM computations on that data. The processing itself is super-fast & there is enough & more compute power available.

    To start with, the training data needs to be downloaded from a remote source to the CPU memory
    From there, it needs to be transferred to the GPU via the system bus and PCIe bus. The host(CPU)-to-device(GPU) bandwidth is limited by the CPU frequency, PCIe bus, GPU devices & the number of PCIe lanes available.
    Once the data & weights are in the GPU VRAM, they are then ferried across to the SRAM where the processors perform operations on it.
    After the operation the data is moved back to the VRAM & from there it is moved back to the CPU RAM. This is a rather simplistic view. Inside the GPU, the tensors are repeatedly moved back and forth between VRAM & SRAM (the memory allocated to an SM). Can you guess why?

    We saw that SRAM size is in KB so large matrices are not going to fit in there … which explains why there is a constant movement between VRAM which holds all the tensors and SRAM which holds the data on which compute operations are performed. So there is typically a memory-op where tensors are moved from VRAM to SRAM, then a compute-op SRAM and memory-op to move tensors back from SRAM to VRAM. Computations like a matrix multiplication involving 2 large matrices need several such memory + compute ops before the action is completed.
    During the training of GPT-3, the tensor cores on the GPUs used were found to be idle ~50% of the time. So, to extract the best from the infrastructure, data movement needs to be fast enough to ensure the computation cores are kept reasonably occupied. Surely, there is scope for some smart person to come up with shortcuts. Enter Flash attention & other such hacks. But that is a story for another day!
    5. Linking GPUs for LLM training — Topologies
    While LLM inferencing is manegable with a readymade collection of GPUs such as a DGX server (contains 8 H100s), LLM training needs far more GPUs. Before we discuss how to connect GPUs for larger workloads, it makes sense to see how CPU servers are connected in a datacentre. I am not an expert in this area, so please feel free to point out any incorrect interpretations I may have made from the references I quote.
    5.1 Generic concepts on linking processors
    Each server has a card attached to it called the Network Interface Card (NIC). RDMA technology enables direct memory access to a remote server via the NIC hardware. RoCE (RDMA over Converged Ethernet) protocol uses the RDMA technology & adapts it to Ethernet networks. So now, a server can talk to a remote server over a network. A network switch is a device connecting multiple servers in a network, enabling them to communicate with each other. This is the basic technology. Now let us come to the topology.
    So we assemble all the servers physically in one place and pile them up vertically them in neat racks.A very basic topology is to connect each server in a rack to a switch that usually sits on Top of the Rack, aptly named the ToR switch. The ToR switches of different racks are connected to a Spine switch. This topology is a basic implementation of Clos topology — named after Charles Clos who invented this scheme to originally arrange telephone nodes in a “leaf-n-spine” arrangement. The leaf switches are nothing but the ToR switches in modern data centers.
    Source: Fig 1–1 from https://www.oreilly.com/library/view/bgp-in-the/9781491983416/ch01.html
    Fat tree is a variant of Clos. Like before, we have servers arranged into racks connecting to Top-of-the-Rack (ToR) switches. ToR switches are connected to the aggregation switches to provide connectivity across racks, forming a pod. The pods are interconnected with spine switches, allowing any-to-any communication across servers. To be noted is the fact that there are multiple paths connecting servers. So there is lot of redundancy built-in.
    In a typical App deployment running hundreds of microservices on dozens of servers, it is useful to have such fully connected, high bandwidth networks. You never know who is going to talk to whom so it never hurts to overprovision on bandwidth and connectivity. However, network loads during AI training do not follow these patterns. They are more predictable & this allows us to build optimized, cheaper & less power-hungry networks.
    5.2 Linking GPUs via proprietary technology like NVLink
    We can strap together H100’s by leveraging the proprietary NVLink & NVSwitch technologies. NVLink provides the high-speed connection between individual GPUs, while NVSwitch is a chip that enables multiple GPUs to communicate through NVLink, forming a high-bandwidth network. See this nice article for details.
    NVIDIA’s P100 GPU introduced the NVLink1. At that time there was no NVSwitch chip, and the GPUs were connected in a ring-like configuration, which resulted in a lack of direct point-to-point communication between GPUs. The NVSwitch1 chip was introduced with the V100, followed by the NVSwitch2 chip with the A100 GPU. We are in the third-generation NVSwitch3 which can support a cluster of up to 256 H100 GPUs. Each H100 GPU in such a cluster is connected to the internal NVSwitch3 chip through 18 NVLink4.0 connections. This is how trillion parameter LLMs are inferenced.
    5.3 Linking GPUs via RoCE in a rail-optimized topology
    But as they say, ye dil mange more… Meta reportedly trains its newer models on a cluster that’s over 100K H100’s. Phew! How to they manage to link it all up? The standard NVLink tricks can only scale to a limited number of GPUs. Beyond that, we have to use the network topologies discussed earlier & fall back on technologies like RoCE, which allows data to be directly transferred from one GPU’s memory to another without involving the CPU.
    So you have 8 GPUs in one DGX server. You have several such DGX servers in the data centre. Each GPU is assigned a NIC (yes!) & connected via RDMA to all other GPUs thru’ a variant of Clos network called “rail-optimized network”. The idea here is to set up dedicated connections between groups of GPUs with rail switches. If a GPU wants to communicate with a GPU which is in a different group, then it has to go thru’ the spine switch (which takes a lil more time). To implement this, each GPU in a DGX server is indexed serially. A rail is the set of GPUs with the same index on different servers & these are interconnected with a rail switch via RDMA. These rail switches are subsequently connected to spine switches forming any-to-any GPU network.
    Source: Fig 1 from https://arxiv.org/pdf/2307.12169
    This topology streamlines traffic flow. It is like having dedicated lanes for high speed vehicles instead of generally mixing all traffic together. Rail paths are direct connections between a bunch of GPUs with same index. Spine switches serve as the connecting points for differently-indexed GPUs. For e.g., communication between GPU1 of server 1 and GPU1 of server 2 happens via their dedicated rail switch 1. If GPU1 of server 1 needs to reach GPU5 of another server, it has to go thru’ a spine switch.
    The workloads are designed so as to minimize data transfers across rails (since it has to go thru’ the extra spine switch). The good news is that this can be neatly done for AI training ensuring that most of the traffic stays within the rails, and does not cut across. In fact, there is a recent paper which suggests that you can consider removing costly spine switches altogether as inter-rail communication is minimal. Can you guess how?
    5.4 Linking GPUs via RoCE in a rail-only topology
    Well, we have the superfast connectivity using NVLink to communicate between a limited set of GPUs (upto 256). So you create these High Bandwith (HB) domains which use NVLink for communication. You have several such HB domains. We then have the same indexing system and rail connections to interconnect the HB domains. But there are no spine switches! Can you guess how GPU1 of HB domain 1 can talk to GPU5 of another HB domain? Yes! Transfer data via superfast NVLink to GPU5 of HB domain 1 first. Then use the dedicated rail of GPU5 to talk to the GPU5 in another HB domain! This is a rail-only topology as oppsed to rail-optimized topology!
    Given these topologies, we can now plan the training pipeline to have pipeline parallelism, tensor parallelism &/or data parallelism but that is a story for another day. See this, this & this for more details. 100K H100’s consume a LOT of power. Tech companies are exploring nuclear power options to generate clean energy needed for long term sustenance. Else, a 100K GPU cluster may have to be broken down to smaller clusters and connected using optical transceivers across the buildings in a campus.
    This (unplanned) article is a prelude to — Optimizing LLM inference: Key Faultlines & workarounds. To deeply understand how we can optimize LLM operations, we need to understand more about the silicon on which they are executed. Though there are lots of manuals/guides on individual aspects like memory, processors, networking etc, I couldn’t find a concise and reader-friendly thread linking together these various aspects & hence took a shot. This is the 9th of a 15-series article titled My LLM diaries.

    LLM Quantization — From concepts to implementation
    LoRA & its newer variants explained like never before
    In-Context learning: The greatest magic show in the kingdom of LLMs
    RAG in plain English — Summary of 100+ papers
    HNSW — Story of the world’s most popular Vector search algorithm
    VectorDB origins, Vamana & on-disk Vector search algorithms
    Taming LLMs — A study of few popular techniques
    Understanding LLM Agents: Concepts, Patterns & Frameworks
    Anatomy of a GPU — A peek into the hardware fuelling LLM operations
    Optimizing LLM Inference — Key Faultlines & workarounds
    LLM Serving — Architecture considerations
    LLM evaluation & other odds and ends
    Look Ma, LLMs without Prompt Engineering
    LLMs on the laptop — A peek into the Silicon
    Taking a step back — On model sentience, conscientiousness & other philosophical aspects

    Join thousands of data leaders on the AI newsletter. Join over 80,000 subscribers and keep up to date with the latest developments in AI. From research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.

    Published via Towards AI



    المصدر: https://towardsai.net/p/machine-learning/gpu-architecture-working-intuitively-explained
    GPU Architecture & Working intuitively explained Author(s): Allohvk Originally published on Towards AI. GPU Origins The image displayed on a computer screen is made up of millions of tiny pixels. In early days, “graphics controllers” were given instructions by the CPU on how to calculate the individual pixel values so that the appropriate image could be displayed. These were ok for conventional displays but for a really good gaming experience, images need to be built dozens of times per second. The CPU was not really designed to handle these kind of loads. The whole process of creating the image could be parallelized big-time simply by (a) dividing the image into smaller blocks (b) carrying out computations for each block in parallel & (c) grouping them back again. The results of one block don’t influence the results of the other blocks. CPU’s multi-threading capabilities was not really conceived for such massive parallelization. Enter the GPU! Sony first used the term GPU in 1994, in its PlayStation consoles. The technology was perfected by NVIDIA which soon became a leader. GPUs have numerous computation cores (much more than a CPU) and gaming programmers could write Shaders — programs to run graphics computations on the GPU in a massively parallelized way to create the screen images in super-fast time. The GPU is inspired by the CPU but was specifically designed to enable massive multi-threaded operations on its numerous computation cores seamlessly. Creating threads, switching between threads etc is much faster on a GPU. Some smart developers also realized that these parallel processing capabilities could be used for other computationally intensive tasks as well! 2005: Steinkrau implements a simple 2-layer Neural Net on a GPU 2006: Kumar et. al. trains a CNN model for document processing 2007: NVIDIA released Compute Unified Device Architecture (CUDA) — a custom language extending C to exploit data parallelism on GPUs. Now developers had much more granular control over the image rendering. 2008 a landmark paper by Raina et al was released. This paper pretty much showed everyone how to train deep layers on a GPU 2014: NVIDIA released CuDNN — a dedicated CUDA library for Deep Learning. Very soon PyTorch, TensorFlow etc incorporated CuDNN, setting the stage for modern GPU usage for AI! A GPU is an ASIC or Application-Specific Integrated Circuit having a processor (hosting numerous computational cores), a memory soldered onto it (we want to avoid going to the CPU RAM for everything), a cooling system (well, they heat up pretty fast) and a BIOS chip (same role as a CPU — to store settings, run startup diagnostics etc). This card is then plugged into the motherboard slot using the PCI Express interface. The terms GPU and graphics card are often used interchangeably. Some GPUs like the one in Apple M3 do not have a dedicated memory but instead use the system RAM itself which is possible due to its unique design. Google has the TPU (Tensor Processing Unit) which is its own ASIC. We discuss the GPU memory, the processing cores, the LLM workflows happening inside them & common topologies for clustering. Photo by Thomas Foster on Unsplash 1. GPU Memory module — The VRAM Instead of having the GPU talk to the regular RAM, it made sense to create another RAM physically closer to the GPU die so that data retrieval is faster. So a graphics card has a memory called VRAM — Video Random Access Memory in addition to the computation engines . VRAM is connected to the computation engine cores via a Bus called the memory interface. 1.1 What is DRAM? Let us talk first of RAM technology in general. All memory whether it is the CPU RAM or the GPU VRAM are mostly based on DRAM technology which consists of a capacitor and a transistor. The capacitor’s charge represents the data stored. Due to its very nature, this charge gradually leaks. To prevent data loss, a refresh circuit periodically rewrites the data back, restoring its charge. Hence the name — Dynamic RAM due to these preiodic refreshes. Most computers use Synchronous DDR5 DRAM’s as their CPU RAMs. Synchronous because it utilizes the system clock for better performance. In other words the action (of retrieving & storing data) is operationally coordinated by an external clock signal. Tying the operations to the clock makes it faster. The processor knows the exact timing & number of cycles in which the data will be available from the RAM to the bus & can plan better. We have DDR1 (1st Gen Double Data Rate Synchronous Dynamic RAM released in 2000) to DDR5 which is the choice of CPU RAM as of today. 1.2 What is SGRAM? Let us now talk about the VRAMs in GPUs. The VRAM is a type of SGRAM — Synchronous Graphics RAM. The current generation of VRAMs being used is GDDR6. Yes, this is 6th generation GDDR, the G standing for “Graphics”. While DDR & GDDR share common origins and early couple of generations were similar, the branches separated after DDR3. So as of 2025, DDR5 rules in CPU RAM and GDDR6 rules for consumer-grade GPU RAMs. Conceptually DDRs and GDDRs are similar but note that DDRs are used by CPUs which need low latency whereas GDDRs are used by GPUs which are OK to compromise latency for extremely high throughput. Crudely, the former has more frequent smaller calculations & the latter deals with much higher volume of data & some delays are forgiven considering the vast volumes of data being processed. Even more crudely, the former is a bullet train with 6–8 coaches while the latter a 3 Kilometre long goods train. 1.3 GDDR VRAMs explained in detail GDDR memory are individual chips soldered to the PCB (Printed Circuit Board) very close to the GPU die. The physical proximity improves the speed of data transfer from the VRAM to the GPU processor. There are pins in a GDDR which can be thought of as individual wires that connect it to the processor. Bus width is literally the number of such connections. GDDR6 has 32 pins spread across 2 channels with roughly 16 Gbits.p.s bandwidth per pin. Bandwidth is total amount of data being moved & if you had one single metric at your disposal to take a decision, it would be this. Before we go further, let us try to understand this metric intuitively. 1.4 Calculating GPU Memory Bandwidth intuitively Memory Bandwidth is the max rate at which data can be transferred between the GPU and the VRAM. We discussed that data transmission is synchronized with the clock. The clock cycle is measured in hertz & represents the number of cycles per second. Let us say we have a clock operating at 1000 MHz. This literally means 1 billion clock ticks per second. How long does a tick last? Literally 1/(1 billion) i.e. 1 nano second. Data is sent to and fro every clock cycle. So every nano-second, a bus-full of data is sent from the VRAM to the processor & vice versa. How many seats on the bus? Well, we discussed this earlier… This is the memory interface or the bus width… literally the physical count of bits that fit into the bus. A 128-bit bus would ferry 128 bits every nano-second. The D in G’D’DR6 stands for Double. Basically, data is transmitted on both the rising and falling edges of the clock cycle, so 256 bits every nano-second. How many bytes in 1 sec? 256/8 i.e. 32 billion bytes per second or better still 32 GB/s as Giga is the preferred term when measuring data. The capital B denotes bytes whereas the small b denotes bits… a source of confusion. A more practical formula is: Bandwidth = Clock * Bus Width x Data Rate, where the Data Rate is the number of data transfers per cycle. GDDR6 is Double Data Rate (as just discussed) and Quad pumped, which quadruples the (doubled) speed. So effectively the Data Rate is 8. Sometimes, you may encounter the same information crouched in different semantics. E.g., if frequency of command clock (CK#) is N, then the write command clock (WK#) is 2N. GDDR6 rates then are QDR (quad data rate) in reference to WK# and ODR (Octal Data Rate) in reference to the CK#. Some OEMs multiply the clock speed & data rate & call it a clock rate or something. In that case, the bandwidth is simply that number multiplied by the bus width. In general, this raw formula can be used: num_of_transfers per second * num_of_bits per transfer / 8. “Boost clock” mechanism allows the GPU and GDDR memory to operate at even higher speeds than the default clock when conditions allow it. Boost clock metric refers to the max such operating clock speed. A 1750 MHz clock means: 1.75GHz is the frequency of command clock(CK#). The frequency of the write clock (WK#) is 3.5GHz due to the G”D”DR The Quad pumping takes it to 3.5*4=14 G bits moved in 1 second from each pin on the bus. We could have bus widths of up to 384 bits! So we get a bandwidth of 14*384 Giga bits per second. Divide by 8 to get 672 GB/s. GDDR6 bandwidth can go upto 1 TB/s. Wow! 1.5 What is HBM VRAM in a GPU? When reading or writing data, contention is created when the VRAM has occupied memory channels & is busy receiving or delivering other data. This contention creates latency & this affects bandwidth. Increasing the number of memory channels is a great option. A type of memory called HBM (High-Bandwidth Memory) has lower access latency than GDDR6, since it has 8-memory channels versus 2 channels in GDDR6. HBM also has a wider bus. HBM has 1024 pins spread across 8 channels of 128 pins with roughly 2 Gbits.p.s bandwidth per pin. Compare this with (an equivalent) GDDR which has 32 pins spread across 2 channels with roughly 16 Gbits. p.s bandwidth per pin. Notice how HBM keeps the Gbit/sec per pin much lower than GDDR. This saves power (which is important as we shall see). In spite of this, it has higher bandwidth than GDDR6 due to the wider bus & higher channels. As we discussed, a pin is literally a wire connecting the VRAM to the processor. Having 1024 wires connected from the processor to the VRAM is not possible on a standard PCB. Therefore, an “interposer” is used as an intermediary to connect the VRAM & the processor. Just like a regular IC, wires (connections) are etched in this silicon “interposer” in the desired quantity. After this, the HBM device(s) & the processor are mounted atop this “interposer”. The slightly twisted workaround is called a 2.5D architecture.Another difference is that while GDDR chips are soldered to the PCB surrounding the GPU die, an HBM structure is a vertical stack of DRAMs like a high rise building. The stacked memory dies are linked using microscopic wires with TSV (Through-Silicon Vias) which are vertical electrical connections giving super fast connectivity between the DRAMs. There are huge challenges to stacking items vertically especially around designing heat sinks & managing thermal safety but somehow HBM manufacturers have made this happen. HBM has become a gold standard today for AI data centers. It was introduced to the Market by SK Hynix in 2013. Today, we have the 3rd generation HBM3 and their main client is Nvidia. Due to investments made way back, SK Hynix is leading the pack along with Samsung and a relatively recent entrant named Micron. We hear a lot about chips and TSMC but HBM is a key technology to watch out for in the coming years. We typically have more than one HBM devices inside the GPU die. GDDR6 co-exists with HBM3. The markets are complementary. The former addresses PCs & other consumer GPUs whereas the latter addresses data center GPUs. Ultra large scale AI deployments like ChatGPT likely leverage the use of a cluster of NVIDIA GPUs working in tandem. Connecting such GPU’s involves the use of NVIDIA NVLink technology which requires fast GPU memory bandwidth speeds and it’s the reason why HBM is prevalent in such systems. If not for the wide bus width and fast data transfer rates offered by HBM, these kind of clusters would be very difficult to design. Besides the VRAM, GPUs also include high-speed memory caches that are even closer to the GPU’s processing cores. There is a physical limit to the sizes of these caches. An L1 cache is usually in KB and an L2 cache is usually a few MB. Different hardware & software strategies exist to keep the most useful, and most reused data present in caches. 2. Cooling Mechanisms in a GPU Higher clock speeds generally result in increased heat generation necessitating the need for cooling solutions to maintain optimal operating temperatures. Usual cooling methods are: Passive Cooling: These do not have any powered moving components. They take advantage of optimized airflow to take heat away. Fans are used to dissipate heat by blowing cool air across the heat sinks, which are metal components designed to absorb & disperse heat In water cooling, water is circulated through the GPU surface using pipes & a radiator. The hot liquid running through the pipes is in turn cooled down by the radiator fan. Hybrid cooling — which uses a combination of the above 3. GPU Computation cores — Processors Let us now talk about the processors on the GPU. Unlike CPUs which contain only a few cores, the GPU literally has 1000’s of cores & specializes in running tasks in parallel across these cores using SIMD (Single Instruction, Multiple Data) units. Let us stick to NVIDIA terminology. There are multiple processing units called Streaming Multiprocessor (SM) on a NVIDIA GPU. For e.g. an H100 has upto 144 SMs. What is inside an SM? Well there are mainly 2 type of execution units — CUDA cores & Tensor cores. There is also a small memory SRAM which is Shared between all threads running in that SM. More specifically, every SM has a few KB memory that is partitioned between L1 cache & Shared Memory usage. 3.1 CUDA core versus Tensor core in a GPU — The difference Tensor cores are a pretty recent innovation (from V100 onwards) and are specifically designed for faster matrix multiplication. Let us discuss CUDA cores first. These are the computation engines for regular math operations. Each CUDA core can execute one operation per clock cycle. But their strength lies in parallel processing. Many CUDA cores working together can accelerate computation by executing processes in parallel. Tensor Cores are specialized hardware units designed to accelerate “mixed precision” training. The earliest version allowed 4×4 FP16 matrices to be multiplied & added to an FP32 output matrix. By using lower-precision FP16 inputs in the computations, the calculations are vastly accelarated & by retaining FP32 outputs for the rest of the procedure, accuracy is not compromised too much. Modern tensor cores use even lower precision formats in DL computations. See this for more details. There may also specialized units like the transformer engine designed to accelerate models built with the Transformer blocks. A single GPU can be partitioned into multiple fully contained and isolated instances, with their own memory, cache & cores via MIG or Multi Instance GPU technology. 3.2 GPU operations — A FLOP show Let us now talk about actual operations. A FLOP (Floating Point Operation) is a single floating-point calculation like an addition. Performance of a GPU is usually measured in TeraFLOP/s. Tera is a trillion, FLOP stands for floating-point operations and the ‘s’ stands for per second. Most matrix ops involve a multiply and an add. It makes sense to fuse these ops together to get an Fused Multiply-Add (FMA) op. If we know the FMA speed, we can simply double it to get the FLOP counts per clock. To get the peak FLOP/s rate, we multiply this by the clock rate & the number of SMs. Note that we have FP16, FP32, FP64 & Int8 cores with varying speeds. For e.g.: Say there are 4 tensor cores in each SM & 114 SMs in an H100 Say each tensor core delivers 512 FP16 FMA ops per clock. Careful here: Read the specs clearly to check whether the FMA ops per clock metric is per SM or per individual core. For e.g., this link of A100 is per coreper SM Let the Clock speed = 1620 MHz So TFLOP/s = 1620 * (2*512) * 4 * 114= 756 TFLOP/s of performance! 756 Trillion operations per second. Wow! What would Babbage say to that? 4. Putting everything together — LLM Operations in a GPU Given this immense compute-power, we can now make a reasonable guess that LLM inference is memory-I0 bound, not compute bound. In other words, it takes more time to load data to the GPU’s compute cores than it does for those cores to perform LLM computations on that data. The processing itself is super-fast & there is enough & more compute power available. To start with, the training data needs to be downloaded from a remote source to the CPU memory From there, it needs to be transferred to the GPU via the system bus and PCIe bus. The host(CPU)-to-device(GPU) bandwidth is limited by the CPU frequency, PCIe bus, GPU devices & the number of PCIe lanes available. Once the data & weights are in the GPU VRAM, they are then ferried across to the SRAM where the processors perform operations on it. After the operation the data is moved back to the VRAM & from there it is moved back to the CPU RAM. This is a rather simplistic view. Inside the GPU, the tensors are repeatedly moved back and forth between VRAM & SRAM (the memory allocated to an SM). Can you guess why? We saw that SRAM size is in KB so large matrices are not going to fit in there … which explains why there is a constant movement between VRAM which holds all the tensors and SRAM which holds the data on which compute operations are performed. So there is typically a memory-op where tensors are moved from VRAM to SRAM, then a compute-op SRAM and memory-op to move tensors back from SRAM to VRAM. Computations like a matrix multiplication involving 2 large matrices need several such memory + compute ops before the action is completed. During the training of GPT-3, the tensor cores on the GPUs used were found to be idle ~50% of the time. So, to extract the best from the infrastructure, data movement needs to be fast enough to ensure the computation cores are kept reasonably occupied. Surely, there is scope for some smart person to come up with shortcuts. Enter Flash attention & other such hacks. But that is a story for another day! 5. Linking GPUs for LLM training — Topologies While LLM inferencing is manegable with a readymade collection of GPUs such as a DGX server (contains 8 H100s), LLM training needs far more GPUs. Before we discuss how to connect GPUs for larger workloads, it makes sense to see how CPU servers are connected in a datacentre. I am not an expert in this area, so please feel free to point out any incorrect interpretations I may have made from the references I quote. 5.1 Generic concepts on linking processors Each server has a card attached to it called the Network Interface Card (NIC). RDMA technology enables direct memory access to a remote server via the NIC hardware. RoCE (RDMA over Converged Ethernet) protocol uses the RDMA technology & adapts it to Ethernet networks. So now, a server can talk to a remote server over a network. A network switch is a device connecting multiple servers in a network, enabling them to communicate with each other. This is the basic technology. Now let us come to the topology. So we assemble all the servers physically in one place and pile them up vertically them in neat racks.A very basic topology is to connect each server in a rack to a switch that usually sits on Top of the Rack, aptly named the ToR switch. The ToR switches of different racks are connected to a Spine switch. This topology is a basic implementation of Clos topology — named after Charles Clos who invented this scheme to originally arrange telephone nodes in a “leaf-n-spine” arrangement. The leaf switches are nothing but the ToR switches in modern data centers. Source: Fig 1–1 from https://www.oreilly.com/library/view/bgp-in-the/9781491983416/ch01.html Fat tree is a variant of Clos. Like before, we have servers arranged into racks connecting to Top-of-the-Rack (ToR) switches. ToR switches are connected to the aggregation switches to provide connectivity across racks, forming a pod. The pods are interconnected with spine switches, allowing any-to-any communication across servers. To be noted is the fact that there are multiple paths connecting servers. So there is lot of redundancy built-in. In a typical App deployment running hundreds of microservices on dozens of servers, it is useful to have such fully connected, high bandwidth networks. You never know who is going to talk to whom so it never hurts to overprovision on bandwidth and connectivity. However, network loads during AI training do not follow these patterns. They are more predictable & this allows us to build optimized, cheaper & less power-hungry networks. 5.2 Linking GPUs via proprietary technology like NVLink We can strap together H100’s by leveraging the proprietary NVLink & NVSwitch technologies. NVLink provides the high-speed connection between individual GPUs, while NVSwitch is a chip that enables multiple GPUs to communicate through NVLink, forming a high-bandwidth network. See this nice article for details. NVIDIA’s P100 GPU introduced the NVLink1. At that time there was no NVSwitch chip, and the GPUs were connected in a ring-like configuration, which resulted in a lack of direct point-to-point communication between GPUs. The NVSwitch1 chip was introduced with the V100, followed by the NVSwitch2 chip with the A100 GPU. We are in the third-generation NVSwitch3 which can support a cluster of up to 256 H100 GPUs. Each H100 GPU in such a cluster is connected to the internal NVSwitch3 chip through 18 NVLink4.0 connections. This is how trillion parameter LLMs are inferenced. 5.3 Linking GPUs via RoCE in a rail-optimized topology But as they say, ye dil mange more… Meta reportedly trains its newer models on a cluster that’s over 100K H100’s. Phew! How to they manage to link it all up? The standard NVLink tricks can only scale to a limited number of GPUs. Beyond that, we have to use the network topologies discussed earlier & fall back on technologies like RoCE, which allows data to be directly transferred from one GPU’s memory to another without involving the CPU. So you have 8 GPUs in one DGX server. You have several such DGX servers in the data centre. Each GPU is assigned a NIC (yes!) & connected via RDMA to all other GPUs thru’ a variant of Clos network called “rail-optimized network”. The idea here is to set up dedicated connections between groups of GPUs with rail switches. If a GPU wants to communicate with a GPU which is in a different group, then it has to go thru’ the spine switch (which takes a lil more time). To implement this, each GPU in a DGX server is indexed serially. A rail is the set of GPUs with the same index on different servers & these are interconnected with a rail switch via RDMA. These rail switches are subsequently connected to spine switches forming any-to-any GPU network. Source: Fig 1 from https://arxiv.org/pdf/2307.12169 This topology streamlines traffic flow. It is like having dedicated lanes for high speed vehicles instead of generally mixing all traffic together. Rail paths are direct connections between a bunch of GPUs with same index. Spine switches serve as the connecting points for differently-indexed GPUs. For e.g., communication between GPU1 of server 1 and GPU1 of server 2 happens via their dedicated rail switch 1. If GPU1 of server 1 needs to reach GPU5 of another server, it has to go thru’ a spine switch. The workloads are designed so as to minimize data transfers across rails (since it has to go thru’ the extra spine switch). The good news is that this can be neatly done for AI training ensuring that most of the traffic stays within the rails, and does not cut across. In fact, there is a recent paper which suggests that you can consider removing costly spine switches altogether as inter-rail communication is minimal. Can you guess how? 5.4 Linking GPUs via RoCE in a rail-only topology Well, we have the superfast connectivity using NVLink to communicate between a limited set of GPUs (upto 256). So you create these High Bandwith (HB) domains which use NVLink for communication. You have several such HB domains. We then have the same indexing system and rail connections to interconnect the HB domains. But there are no spine switches! Can you guess how GPU1 of HB domain 1 can talk to GPU5 of another HB domain? Yes! Transfer data via superfast NVLink to GPU5 of HB domain 1 first. Then use the dedicated rail of GPU5 to talk to the GPU5 in another HB domain! This is a rail-only topology as oppsed to rail-optimized topology! Given these topologies, we can now plan the training pipeline to have pipeline parallelism, tensor parallelism &/or data parallelism but that is a story for another day. See this, this & this for more details. 100K H100’s consume a LOT of power. Tech companies are exploring nuclear power options to generate clean energy needed for long term sustenance. Else, a 100K GPU cluster may have to be broken down to smaller clusters and connected using optical transceivers across the buildings in a campus. This (unplanned) article is a prelude to — Optimizing LLM inference: Key Faultlines & workarounds. To deeply understand how we can optimize LLM operations, we need to understand more about the silicon on which they are executed. Though there are lots of manuals/guides on individual aspects like memory, processors, networking etc, I couldn’t find a concise and reader-friendly thread linking together these various aspects & hence took a shot. This is the 9th of a 15-series article titled My LLM diaries. LLM Quantization — From concepts to implementation LoRA & its newer variants explained like never before In-Context learning: The greatest magic show in the kingdom of LLMs RAG in plain English — Summary of 100+ papers HNSW — Story of the world’s most popular Vector search algorithm VectorDB origins, Vamana & on-disk Vector search algorithms Taming LLMs — A study of few popular techniques Understanding LLM Agents: Concepts, Patterns & Frameworks Anatomy of a GPU — A peek into the hardware fuelling LLM operations Optimizing LLM Inference — Key Faultlines & workarounds LLM Serving — Architecture considerations LLM evaluation & other odds and ends Look Ma, LLMs without Prompt Engineering LLMs on the laptop — A peek into the Silicon Taking a step back — On model sentience, conscientiousness & other philosophical aspects Join thousands of data leaders on the AI newsletter. Join over 80,000 subscribers and keep up to date with the latest developments in AI. From research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor. Published via Towards AI المصدر: https://towardsai.net/p/machine-learning/gpu-architecture-working-intuitively-explained
    TOWARDSAI.NET
    GPU Architecture & Working intuitively explained
    Author(s): Allohvk Originally published on Towards AI. GPU Origins The image displayed on a computer screen is made up of millions of tiny pixels. In early days, “graphics controllers” were given instructions by the CPU on how to calculate the individual pixel values so that the appropriate image could be displayed. These were ok for conventional displays but for a really good gaming experience, images need to be built dozens of times per second. The CPU was not really designed to handle these kind of loads. The whole process of creating the image could be parallelized big-time simply by (a) dividing the image into smaller blocks (b) carrying out computations for each block in parallel & (c) grouping them back again. The results of one block don’t influence the results of the other blocks. CPU’s multi-threading capabilities was not really conceived for such massive parallelization. Enter the GPU! Sony first used the term GPU in 1994, in its PlayStation consoles. The technology was perfected by NVIDIA which soon became a leader. GPUs have numerous computation cores (much more than a CPU) and gaming programmers could write Shaders — programs to run graphics computations on the GPU in a massively parallelized way to create the screen images in super-fast time. The GPU is inspired by the CPU but was specifically designed to enable massive multi-threaded operations on its numerous computation cores seamlessly. Creating threads, switching between threads etc is much faster on a GPU. Some smart developers also realized that these parallel processing capabilities could be used for other computationally intensive tasks as well! 2005: Steinkrau implements a simple 2-layer Neural Net on a GPU 2006: Kumar et. al. trains a CNN model for document processing 2007: NVIDIA released Compute Unified Device Architecture (CUDA) — a custom language extending C to exploit data parallelism on GPUs. Now developers had much more granular control over the image rendering. 2008 a landmark paper by Raina et al was released. This paper pretty much showed everyone how to train deep layers on a GPU 2014: NVIDIA released CuDNN — a dedicated CUDA library for Deep Learning. Very soon PyTorch, TensorFlow etc incorporated CuDNN, setting the stage for modern GPU usage for AI! A GPU is an ASIC or Application-Specific Integrated Circuit having a processor (hosting numerous computational cores), a memory soldered onto it (we want to avoid going to the CPU RAM for everything), a cooling system (well, they heat up pretty fast) and a BIOS chip (same role as a CPU — to store settings, run startup diagnostics etc). This card is then plugged into the motherboard slot using the PCI Express interface. The terms GPU and graphics card are often used interchangeably. Some GPUs like the one in Apple M3 do not have a dedicated memory but instead use the system RAM itself which is possible due to its unique design. Google has the TPU (Tensor Processing Unit) which is its own ASIC. We discuss the GPU memory, the processing cores, the LLM workflows happening inside them & common topologies for clustering. Photo by Thomas Foster on Unsplash 1. GPU Memory module — The VRAM Instead of having the GPU talk to the regular RAM, it made sense to create another RAM physically closer to the GPU die so that data retrieval is faster. So a graphics card has a memory called VRAM — Video Random Access Memory in addition to the computation engines . VRAM is connected to the computation engine cores via a Bus called the memory interface. 1.1 What is DRAM? Let us talk first of RAM technology in general. All memory whether it is the CPU RAM or the GPU VRAM are mostly based on DRAM technology which consists of a capacitor and a transistor. The capacitor’s charge represents the data stored. Due to its very nature, this charge gradually leaks. To prevent data loss, a refresh circuit periodically rewrites the data back, restoring its charge. Hence the name — Dynamic RAM due to these preiodic refreshes. Most computers use Synchronous DDR5 DRAM’s as their CPU RAMs. Synchronous because it utilizes the system clock for better performance. In other words the action (of retrieving & storing data) is operationally coordinated by an external clock signal. Tying the operations to the clock makes it faster. The processor knows the exact timing & number of cycles in which the data will be available from the RAM to the bus & can plan better. We have DDR1 (1st Gen Double Data Rate Synchronous Dynamic RAM released in 2000) to DDR5 which is the choice of CPU RAM as of today. 1.2 What is SGRAM? Let us now talk about the VRAMs in GPUs. The VRAM is a type of SGRAM — Synchronous Graphics RAM. The current generation of VRAMs being used is GDDR6. Yes, this is 6th generation GDDR, the G standing for “Graphics”. While DDR & GDDR share common origins and early couple of generations were similar, the branches separated after DDR3. So as of 2025, DDR5 rules in CPU RAM and GDDR6 rules for consumer-grade GPU RAMs. Conceptually DDRs and GDDRs are similar but note that DDRs are used by CPUs which need low latency whereas GDDRs are used by GPUs which are OK to compromise latency for extremely high throughput. Crudely, the former has more frequent smaller calculations & the latter deals with much higher volume of data & some delays are forgiven considering the vast volumes of data being processed. Even more crudely, the former is a bullet train with 6–8 coaches while the latter a 3 Kilometre long goods train. 1.3 GDDR VRAMs explained in detail GDDR memory are individual chips soldered to the PCB (Printed Circuit Board) very close to the GPU die. The physical proximity improves the speed of data transfer from the VRAM to the GPU processor. There are pins in a GDDR which can be thought of as individual wires that connect it to the processor. Bus width is literally the number of such connections. GDDR6 has 32 pins spread across 2 channels with roughly 16 Gbits.p.s bandwidth per pin. Bandwidth is total amount of data being moved & if you had one single metric at your disposal to take a decision, it would be this. Before we go further, let us try to understand this metric intuitively. 1.4 Calculating GPU Memory Bandwidth intuitively Memory Bandwidth is the max rate at which data can be transferred between the GPU and the VRAM. We discussed that data transmission is synchronized with the clock. The clock cycle is measured in hertz & represents the number of cycles per second. Let us say we have a clock operating at 1000 MHz. This literally means 1 billion clock ticks per second. How long does a tick last? Literally 1/(1 billion) i.e. 1 nano second. Data is sent to and fro every clock cycle. So every nano-second, a bus-full of data is sent from the VRAM to the processor & vice versa. How many seats on the bus? Well, we discussed this earlier… This is the memory interface or the bus width… literally the physical count of bits that fit into the bus. A 128-bit bus would ferry 128 bits every nano-second. The D in G’D’DR6 stands for Double. Basically, data is transmitted on both the rising and falling edges of the clock cycle, so 256 bits every nano-second. How many bytes in 1 sec? 256/8 i.e. 32 billion bytes per second or better still 32 GB/s as Giga is the preferred term when measuring data. The capital B denotes bytes whereas the small b denotes bits… a source of confusion. A more practical formula is: Bandwidth = Clock * Bus Width x Data Rate, where the Data Rate is the number of data transfers per cycle. GDDR6 is Double Data Rate (as just discussed) and Quad pumped, which quadruples the (doubled) speed. So effectively the Data Rate is 8. Sometimes, you may encounter the same information crouched in different semantics. E.g., if frequency of command clock (CK#) is N, then the write command clock (WK#) is 2N. GDDR6 rates then are QDR (quad data rate) in reference to WK# and ODR (Octal Data Rate) in reference to the CK#. Some OEMs multiply the clock speed & data rate & call it a clock rate or something. In that case, the bandwidth is simply that number multiplied by the bus width. In general, this raw formula can be used: num_of_transfers per second * num_of_bits per transfer / 8. “Boost clock” mechanism allows the GPU and GDDR memory to operate at even higher speeds than the default clock when conditions allow it. Boost clock metric refers to the max such operating clock speed. A 1750 MHz clock means: 1.75GHz is the frequency of command clock(CK#). The frequency of the write clock (WK#) is 3.5GHz due to the G”D”DR The Quad pumping takes it to 3.5*4=14 G bits moved in 1 second from each pin on the bus. We could have bus widths of up to 384 bits! So we get a bandwidth of 14*384 Giga bits per second. Divide by 8 to get 672 GB/s. GDDR6 bandwidth can go upto 1 TB/s. Wow! 1.5 What is HBM VRAM in a GPU? When reading or writing data, contention is created when the VRAM has occupied memory channels & is busy receiving or delivering other data. This contention creates latency & this affects bandwidth. Increasing the number of memory channels is a great option. A type of memory called HBM (High-Bandwidth Memory) has lower access latency than GDDR6, since it has 8-memory channels versus 2 channels in GDDR6. HBM also has a wider bus. HBM has 1024 pins spread across 8 channels of 128 pins with roughly 2 Gbits.p.s bandwidth per pin. Compare this with (an equivalent) GDDR which has 32 pins spread across 2 channels with roughly 16 Gbits. p.s bandwidth per pin. Notice how HBM keeps the Gbit/sec per pin much lower than GDDR. This saves power (which is important as we shall see). In spite of this, it has higher bandwidth than GDDR6 due to the wider bus & higher channels. As we discussed, a pin is literally a wire connecting the VRAM to the processor. Having 1024 wires connected from the processor to the VRAM is not possible on a standard PCB. Therefore, an “interposer” is used as an intermediary to connect the VRAM & the processor. Just like a regular IC, wires (connections) are etched in this silicon “interposer” in the desired quantity. After this, the HBM device(s) & the processor are mounted atop this “interposer”. The slightly twisted workaround is called a 2.5D architecture.Another difference is that while GDDR chips are soldered to the PCB surrounding the GPU die, an HBM structure is a vertical stack of DRAMs like a high rise building. The stacked memory dies are linked using microscopic wires with TSV (Through-Silicon Vias) which are vertical electrical connections giving super fast connectivity between the DRAMs. There are huge challenges to stacking items vertically especially around designing heat sinks & managing thermal safety but somehow HBM manufacturers have made this happen. HBM has become a gold standard today for AI data centers. It was introduced to the Market by SK Hynix in 2013. Today, we have the 3rd generation HBM3 and their main client is Nvidia. Due to investments made way back, SK Hynix is leading the pack along with Samsung and a relatively recent entrant named Micron. We hear a lot about chips and TSMC but HBM is a key technology to watch out for in the coming years. We typically have more than one HBM devices inside the GPU die. GDDR6 co-exists with HBM3. The markets are complementary. The former addresses PCs & other consumer GPUs whereas the latter addresses data center GPUs. Ultra large scale AI deployments like ChatGPT likely leverage the use of a cluster of NVIDIA GPUs working in tandem. Connecting such GPU’s involves the use of NVIDIA NVLink technology which requires fast GPU memory bandwidth speeds and it’s the reason why HBM is prevalent in such systems. If not for the wide bus width and fast data transfer rates offered by HBM, these kind of clusters would be very difficult to design. Besides the VRAM, GPUs also include high-speed memory caches that are even closer to the GPU’s processing cores. There is a physical limit to the sizes of these caches. An L1 cache is usually in KB and an L2 cache is usually a few MB. Different hardware & software strategies exist to keep the most useful, and most reused data present in caches. 2. Cooling Mechanisms in a GPU Higher clock speeds generally result in increased heat generation necessitating the need for cooling solutions to maintain optimal operating temperatures. Usual cooling methods are: Passive Cooling: These do not have any powered moving components. They take advantage of optimized airflow to take heat away. Fans are used to dissipate heat by blowing cool air across the heat sinks, which are metal components designed to absorb & disperse heat In water cooling, water is circulated through the GPU surface using pipes & a radiator. The hot liquid running through the pipes is in turn cooled down by the radiator fan. Hybrid cooling — which uses a combination of the above 3. GPU Computation cores — Processors Let us now talk about the processors on the GPU. Unlike CPUs which contain only a few cores, the GPU literally has 1000’s of cores & specializes in running tasks in parallel across these cores using SIMD (Single Instruction, Multiple Data) units. Let us stick to NVIDIA terminology. There are multiple processing units called Streaming Multiprocessor (SM) on a NVIDIA GPU. For e.g. an H100 has upto 144 SMs. What is inside an SM? Well there are mainly 2 type of execution units — CUDA cores & Tensor cores. There is also a small memory SRAM which is Shared between all threads running in that SM. More specifically, every SM has a few KB memory that is partitioned between L1 cache & Shared Memory usage. 3.1 CUDA core versus Tensor core in a GPU — The difference Tensor cores are a pretty recent innovation (from V100 onwards) and are specifically designed for faster matrix multiplication. Let us discuss CUDA cores first. These are the computation engines for regular math operations. Each CUDA core can execute one operation per clock cycle. But their strength lies in parallel processing. Many CUDA cores working together can accelerate computation by executing processes in parallel. Tensor Cores are specialized hardware units designed to accelerate “mixed precision” training. The earliest version allowed 4×4 FP16 matrices to be multiplied & added to an FP32 output matrix. By using lower-precision FP16 inputs in the computations, the calculations are vastly accelarated & by retaining FP32 outputs for the rest of the procedure, accuracy is not compromised too much. Modern tensor cores use even lower precision formats in DL computations. See this for more details. There may also specialized units like the transformer engine designed to accelerate models built with the Transformer blocks. A single GPU can be partitioned into multiple fully contained and isolated instances, with their own memory, cache & cores via MIG or Multi Instance GPU technology. 3.2 GPU operations — A FLOP show Let us now talk about actual operations. A FLOP (Floating Point Operation) is a single floating-point calculation like an addition. Performance of a GPU is usually measured in TeraFLOP/s. Tera is a trillion, FLOP stands for floating-point operations and the ‘s’ stands for per second. Most matrix ops involve a multiply and an add. It makes sense to fuse these ops together to get an Fused Multiply-Add (FMA) op. If we know the FMA speed, we can simply double it to get the FLOP counts per clock. To get the peak FLOP/s rate, we multiply this by the clock rate & the number of SMs. Note that we have FP16, FP32, FP64 & Int8 cores with varying speeds. For e.g.: Say there are 4 tensor cores in each SM & 114 SMs in an H100 Say each tensor core delivers 512 FP16 FMA ops per clock. Careful here: Read the specs clearly to check whether the FMA ops per clock metric is per SM or per individual core. For e.g., this link of A100 is per coreper SM Let the Clock speed = 1620 MHz So TFLOP/s = 1620 * (2*512) * 4 * 114= 756 TFLOP/s of performance! 756 Trillion operations per second. Wow! What would Babbage say to that? 4. Putting everything together — LLM Operations in a GPU Given this immense compute-power, we can now make a reasonable guess that LLM inference is memory-I0 bound, not compute bound. In other words, it takes more time to load data to the GPU’s compute cores than it does for those cores to perform LLM computations on that data. The processing itself is super-fast & there is enough & more compute power available. To start with, the training data needs to be downloaded from a remote source to the CPU memory From there, it needs to be transferred to the GPU via the system bus and PCIe bus. The host(CPU)-to-device(GPU) bandwidth is limited by the CPU frequency, PCIe bus, GPU devices & the number of PCIe lanes available. Once the data & weights are in the GPU VRAM, they are then ferried across to the SRAM where the processors perform operations on it. After the operation the data is moved back to the VRAM & from there it is moved back to the CPU RAM. This is a rather simplistic view. Inside the GPU, the tensors are repeatedly moved back and forth between VRAM & SRAM (the memory allocated to an SM). Can you guess why? We saw that SRAM size is in KB so large matrices are not going to fit in there … which explains why there is a constant movement between VRAM which holds all the tensors and SRAM which holds the data on which compute operations are performed. So there is typically a memory-op where tensors are moved from VRAM to SRAM, then a compute-op SRAM and memory-op to move tensors back from SRAM to VRAM. Computations like a matrix multiplication involving 2 large matrices need several such memory + compute ops before the action is completed. During the training of GPT-3, the tensor cores on the GPUs used were found to be idle ~50% of the time. So, to extract the best from the infrastructure, data movement needs to be fast enough to ensure the computation cores are kept reasonably occupied. Surely, there is scope for some smart person to come up with shortcuts. Enter Flash attention & other such hacks. But that is a story for another day! 5. Linking GPUs for LLM training — Topologies While LLM inferencing is manegable with a readymade collection of GPUs such as a DGX server (contains 8 H100s), LLM training needs far more GPUs. Before we discuss how to connect GPUs for larger workloads, it makes sense to see how CPU servers are connected in a datacentre. I am not an expert in this area, so please feel free to point out any incorrect interpretations I may have made from the references I quote. 5.1 Generic concepts on linking processors Each server has a card attached to it called the Network Interface Card (NIC). RDMA technology enables direct memory access to a remote server via the NIC hardware. RoCE (RDMA over Converged Ethernet) protocol uses the RDMA technology & adapts it to Ethernet networks. So now, a server can talk to a remote server over a network. A network switch is a device connecting multiple servers in a network, enabling them to communicate with each other. This is the basic technology. Now let us come to the topology. So we assemble all the servers physically in one place and pile them up vertically them in neat racks.A very basic topology is to connect each server in a rack to a switch that usually sits on Top of the Rack, aptly named the ToR switch. The ToR switches of different racks are connected to a Spine switch. This topology is a basic implementation of Clos topology — named after Charles Clos who invented this scheme to originally arrange telephone nodes in a “leaf-n-spine” arrangement. The leaf switches are nothing but the ToR switches in modern data centers. Source: Fig 1–1 from https://www.oreilly.com/library/view/bgp-in-the/9781491983416/ch01.html Fat tree is a variant of Clos. Like before, we have servers arranged into racks connecting to Top-of-the-Rack (ToR) switches. ToR switches are connected to the aggregation switches to provide connectivity across racks, forming a pod. The pods are interconnected with spine switches, allowing any-to-any communication across servers. To be noted is the fact that there are multiple paths connecting servers. So there is lot of redundancy built-in. In a typical App deployment running hundreds of microservices on dozens of servers, it is useful to have such fully connected, high bandwidth networks. You never know who is going to talk to whom so it never hurts to overprovision on bandwidth and connectivity. However, network loads during AI training do not follow these patterns. They are more predictable & this allows us to build optimized, cheaper & less power-hungry networks. 5.2 Linking GPUs via proprietary technology like NVLink We can strap together H100’s by leveraging the proprietary NVLink & NVSwitch technologies. NVLink provides the high-speed connection between individual GPUs, while NVSwitch is a chip that enables multiple GPUs to communicate through NVLink, forming a high-bandwidth network. See this nice article for details. NVIDIA’s P100 GPU introduced the NVLink1. At that time there was no NVSwitch chip, and the GPUs were connected in a ring-like configuration, which resulted in a lack of direct point-to-point communication between GPUs. The NVSwitch1 chip was introduced with the V100, followed by the NVSwitch2 chip with the A100 GPU. We are in the third-generation NVSwitch3 which can support a cluster of up to 256 H100 GPUs. Each H100 GPU in such a cluster is connected to the internal NVSwitch3 chip through 18 NVLink4.0 connections. This is how trillion parameter LLMs are inferenced. 5.3 Linking GPUs via RoCE in a rail-optimized topology But as they say, ye dil mange more… Meta reportedly trains its newer models on a cluster that’s over 100K H100’s. Phew! How to they manage to link it all up? The standard NVLink tricks can only scale to a limited number of GPUs. Beyond that, we have to use the network topologies discussed earlier & fall back on technologies like RoCE, which allows data to be directly transferred from one GPU’s memory to another without involving the CPU. So you have 8 GPUs in one DGX server. You have several such DGX servers in the data centre. Each GPU is assigned a NIC (yes!) & connected via RDMA to all other GPUs thru’ a variant of Clos network called “rail-optimized network”. The idea here is to set up dedicated connections between groups of GPUs with rail switches. If a GPU wants to communicate with a GPU which is in a different group, then it has to go thru’ the spine switch (which takes a lil more time). To implement this, each GPU in a DGX server is indexed serially. A rail is the set of GPUs with the same index on different servers & these are interconnected with a rail switch via RDMA. These rail switches are subsequently connected to spine switches forming any-to-any GPU network. Source: Fig 1 from https://arxiv.org/pdf/2307.12169 This topology streamlines traffic flow. It is like having dedicated lanes for high speed vehicles instead of generally mixing all traffic together. Rail paths are direct connections between a bunch of GPUs with same index. Spine switches serve as the connecting points for differently-indexed GPUs. For e.g., communication between GPU1 of server 1 and GPU1 of server 2 happens via their dedicated rail switch 1. If GPU1 of server 1 needs to reach GPU5 of another server, it has to go thru’ a spine switch. The workloads are designed so as to minimize data transfers across rails (since it has to go thru’ the extra spine switch). The good news is that this can be neatly done for AI training ensuring that most of the traffic stays within the rails, and does not cut across. In fact, there is a recent paper which suggests that you can consider removing costly spine switches altogether as inter-rail communication is minimal. Can you guess how? 5.4 Linking GPUs via RoCE in a rail-only topology Well, we have the superfast connectivity using NVLink to communicate between a limited set of GPUs (upto 256). So you create these High Bandwith (HB) domains which use NVLink for communication. You have several such HB domains. We then have the same indexing system and rail connections to interconnect the HB domains. But there are no spine switches! Can you guess how GPU1 of HB domain 1 can talk to GPU5 of another HB domain? Yes! Transfer data via superfast NVLink to GPU5 of HB domain 1 first. Then use the dedicated rail of GPU5 to talk to the GPU5 in another HB domain! This is a rail-only topology as oppsed to rail-optimized topology! Given these topologies, we can now plan the training pipeline to have pipeline parallelism, tensor parallelism &/or data parallelism but that is a story for another day. See this, this & this for more details. 100K H100’s consume a LOT of power. Tech companies are exploring nuclear power options to generate clean energy needed for long term sustenance. Else, a 100K GPU cluster may have to be broken down to smaller clusters and connected using optical transceivers across the buildings in a campus. This (unplanned) article is a prelude to — Optimizing LLM inference: Key Faultlines & workarounds. To deeply understand how we can optimize LLM operations, we need to understand more about the silicon on which they are executed. Though there are lots of manuals/guides on individual aspects like memory, processors, networking etc, I couldn’t find a concise and reader-friendly thread linking together these various aspects & hence took a shot. This is the 9th of a 15-series article titled My LLM diaries. LLM Quantization — From concepts to implementation LoRA & its newer variants explained like never before In-Context learning: The greatest magic show in the kingdom of LLMs RAG in plain English — Summary of 100+ papers HNSW — Story of the world’s most popular Vector search algorithm VectorDB origins, Vamana & on-disk Vector search algorithms Taming LLMs — A study of few popular techniques Understanding LLM Agents: Concepts, Patterns & Frameworks Anatomy of a GPU — A peek into the hardware fuelling LLM operations Optimizing LLM Inference — Key Faultlines & workarounds LLM Serving — Architecture considerations LLM evaluation & other odds and ends Look Ma, LLMs without Prompt Engineering LLMs on the laptop — A peek into the Silicon Taking a step back — On model sentience, conscientiousness & other philosophical aspects Join thousands of data leaders on the AI newsletter. Join over 80,000 subscribers and keep up to date with the latest developments in AI. From research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor. Published via Towards AI
    0 Σχόλια 0 Μοιράστηκε