• What is RedNote? Everything you need to know about the TikTok alternative
    www.digitaltrends.com
In the U.S., TikTok could soon be removed from the market. In its place, many TikTok users are preemptively turning to a similar app called RedNote and other alternatives. What is RedNote, and is it better than TikTok? Could it also be removed from the U.S. market? We have the answers.

TikTok is facing a potential ban in the U.S. mainly due to concerns regarding its connections to China and the associated national security risks. U.S. officials are worried that ByteDance, TikTok's Chinese parent company, could be required by the Chinese government to provide access to American user data, which might then be used for espionage or surveillance. This concern is heightened by a 2017 Chinese law that mandates companies to cooperate with national security investigations.

There are also worries about content moderation and the possibility of the Chinese government influencing TikTok's algorithm to disseminate misinformation or censor content critical of China. Although TikTok asserts that it operates independently and prioritizes protecting U.S. user data, these issues have prompted bipartisan support for banning the social network in the U.S., beginning on Sunday, January 19.

RedNote, also known as Xiaohongshu (Little Red Book) in China, is often described as a mix of Instagram, TikTok, and Pinterest. Regarding its similarities with TikTok, RedNote enables users to create and share short videos on various topics, including fashion, beauty, food, travel, and lifestyle.

The platform is also particularly well known for its in-depth product reviews, often featuring videos and photos. Users can share their experiences with various products and services, helping others make informed purchasing decisions.

Additionally, RedNote includes a built-in e-commerce platform, allowing users to purchase products directly through the app. This feature makes it easy for users to buy items they see highlighted in videos and reviews. RedNote also offers social networking capabilities, allowing users to follow others, like and comment on posts, and share content with friends.

Interestingly, RedNote is older than TikTok; it launched in 2013, three years before TikTok.

Whether RedNote is safe to use depends on what aspects you're most concerned about as a user. First, there's no denying that, like TikTok, RedNote is Chinese-based. As such, familiar concerns are being raised about how user data is handled and whether it could be shared with the Chinese government. Adding to the confusion is that the app's privacy policy is in Mandarin and was not written for the U.S. market.

Another issue is that RedNote is available everywhere in the same format with the same information, whether the user is in the U.S., China, or elsewhere. In contrast, TikTok is not accessible in China, which may come as a surprise to many.

The ability to connect with anyone globally gives RedNote the appearance of taking a "for the people" approach. However, once again, it remains unclear how involved the Chinese government is in RedNote's operations and its collection of user data.

RedNote is owned by Xingyin Information Technology Co., Ltd., based in Shanghai. Therefore, technically, it isn't owned by the Chinese government.
However, Chinese law grants the government substantial oversight over all company operations. The 2017 law mentioned above allows the government to request access to user data and requires companies to comply with strict censorship regulations. Additionally, the government can influence company policies and decisions through various means, including direct pressure and financial incentives. Therefore, while RedNote is not state-owned, it operates within the Chinese legal and regulatory framework, which gives the government significant leverage over its operations.

Whether or not to download and use RedNote depends on your risk tolerance. On the positive side, RedNote's format closely resembles TikTok's, particularly in its short-form videos and focus on product discovery. It also includes several unique features worth your time, such as its emphasis on product reviews.

However, there are notable drawbacks. In addition to data privacy concerns, RedNote currently has minimal English-language support, with much of its content available primarily in Mandarin. Given the app's origins, it's also crucial to consider that, like TikTok, RedNote could potentially face a ban in the U.S. That would undermine the long-term benefit of switching to the app in the event of a TikTok ban.

Several apps not based in China might serve as alternatives to TikTok. While none of them are exactly like TikTok, they offer similar features. Popular options include Snapchat, Instagram Reels, and YouTube Shorts, along with less familiar ones like Triller. Additionally, you may have heard of Lemon8 as a potential TikTok alternative. However, it's important to note that Lemon8 is also owned by ByteDance, which means it could also face a ban.

ByteDance has announced it will comply with U.S. law and remove TikTok from the U.S. market on Sunday. However, the specifics of how this removal will be executed remain unclear. One possibility is that ByteDance could take down the TikTok app from the U.S. App Store and Google Play while allowing existing users continued access to their accounts and content until they choose to delete the app themselves.

It is more probable, however, that starting on Sunday, ByteDance will enact a complete restriction on U.S. users' access to the service. This means that users may find themselves unable to log in or use the platform altogether, leading to a significant disruption in the app's user base and community engagement, particularly given TikTok's popularity among younger audiences.

In the meantime, some factions within the U.S. government could intervene and either block the ban, at least temporarily, or amend the law that mandates the prohibition, thereby allowing TikTok to remain operational in the country. Such intervention could center on the app's economic impact, its role in social connectivity, and the broader implications for digital privacy and user freedoms.

Additionally, ByteDance might explore the option of divesting TikTok to a buyer outside of China, possibly pursuing a sale to American or European interests before the deadline. This could involve negotiations that address the necessary regulatory concerns and ensure that user data is protected from international scrutiny, enabling the app to continue serving its users under new ownership while adhering to U.S. laws.
Such a step could reshape the social media landscape and influence competition with other platforms.

There's one final wild card: Donald J. Trump. The president-elect will be inaugurated as the 47th president of the United States just hours after TikTok is scheduled to be banned. He has previously said he would prefer the ban not take effect until after he takes office so he can more effectively assess the situation. While this is possible, it is far from certain that his position could sway the U.S. Supreme Court or Congress to reconsider and halt the ban.

Instead of finding a TikTok alternative, you could also use a VPN to continue using the app in the U.S.
  • Meta takes us a step closer to Star Trek's universal translator
    arstechnica.com
Can it handle metaphors? Meta takes us a step closer to Star Trek's universal translator. The computer science behind translating speech from 100 source languages.

Jacek Krywko, Jan 15, 2025 11:00 am

Interpreters work during the 76th Session of the United Nations General Assembly on September 21, 2021 in New York, United States. Credit: Liao Pan/China News Service via Getty Images

In 2023, AI researchers at Meta interviewed 34 native Spanish and Mandarin speakers who lived in the US but didn't speak English. The goal was to find out what people who constantly rely on translation in their day-to-day activities expect from an AI translation tool. What those participants wanted was basically a Star Trek universal translator or the Babel Fish from the Hitchhiker's Guide to the Galaxy: an AI that could not only translate speech to speech in real time across multiple languages, but also preserve their voice, tone, mannerisms, and emotions. So, Meta assembled a team of over 50 people and got busy building it.

What this team came up with was a next-gen translation system called Seamless. The first building block of this system is described in Wednesday's issue of Nature; it can translate speech among 36 different languages.

Language data problems

AI translation systems today are mostly focused on text, because huge amounts of text are available in a wide range of languages thanks to digitization and the Internet. Institutions like the United Nations or European Parliament routinely translate all their proceedings into the languages of all their member states, which means there are enormous databases comprising aligned documents prepared by professional human translators. You just needed to feed those huge, aligned text corpora into neural nets (or hidden Markov models before neural nets became all the rage) and you ended up with a reasonably good machine translation system. But there were two problems with that.

The first issue was that those databases comprised formal documents, which made the AI translators default to the same boring legalese in the target language even if you tried to translate comedy. The second problem was speech: none of this included audio data.

The problem of language formality was mostly solved by including less formal sources like books, Wikipedia, and similar material in AI training databases. The scarcity of aligned audio data, however, remained. Both issues were at least theoretically manageable in high-resource languages like English or Spanish, but they got dramatically worse in low-resource languages like Icelandic or Zulu.

As a result, the AI translators we have today support an impressive number of languages in text, but things are complicated when it comes to translating speech. There are cascading systems that simply do this trick in stages. An utterance is first converted to text just as it would be in any dictation service. Then comes text-to-text translation, and finally the resulting text in the target language is synthesized into speech.
Because errors accumulate at each of those stages, the performance you get this way is usually poor, and it doesn't work in real time.

A few systems that can translate speech to speech directly do exist, but in most cases they only translate into English and not in the opposite direction. Your foreign-language interlocutor can say something to you in one of the languages supported by tools like Google's AudioPaLM, and they will translate that into English speech, but you can't have a conversation going both ways.

So, to pull off the Star Trek universal translator thing Meta's interviewees dreamed about, the Seamless team started with sorting out the data scarcity problem. And they did it in a quite creative way.

Building a universal language

Warren Weaver, a mathematician and pioneer of machine translation, argued in 1949 that there might be a yet undiscovered universal language working as a common base of human communication. This common base of all our communication was exactly what the Seamless team went for in its search for data more than 70 years later. Weaver's universal language turned out to be math: more precisely, multidimensional vectors.

Machines do not understand words as humans do. To make sense of them, they need to first turn them into sequences of numbers that represent their meaning. Those sequences of numbers are numerical vectors that are termed word embeddings. When you vectorize tens of millions of documents this way, you'll end up with a huge multidimensional space where words with similar meaning that often go together, like "tea" and "coffee," are placed close to each other. When you vectorize aligned text in two languages like those European Parliament proceedings, you end up with two separate vector spaces, and then you can run a neural net to learn how those two spaces map onto each other.

But the Meta team didn't have those nicely aligned texts for all the languages they wanted to cover. So, they vectorized all texts in all languages as if they were just a single language and dumped them into one embedding space called SONAR (Sentence-level Multimodal and Language-Agnostic Representations). Once the text part was done, they went to speech data, which was vectorized using a popular W2v (word to vector) tool and added to the same massive multilingual, multimodal space. Of course, each embedding carried metadata identifying its source language and whether it was text or speech before vectorization.

The team just used huge amounts of raw data: no fancy human labeling, no human-aligned translations. And then, the data mining magic happened.

SONAR embeddings represented entire sentences instead of single words. Part of the reason behind that was to control for differences between morphologically rich languages, where a single word may correspond to multiple words in morphologically simple languages. But the most important thing was that it ensured that sentences with similar meaning in multiple languages ended up close to each other in the vector space.

It was the same story with speech, too: a spoken sentence in one language was close to spoken sentences in other languages with similar meaning. It even worked between text and speech. So, the team simply assumed that embeddings in two different languages or two different modalities (speech or text) that are at a sufficiently close distance to each other are equivalent to the manually aligned texts of translated documents.

This produced huge amounts of automatically aligned data.
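To make that mining step concrete, here is a minimal, illustrative sketch of distance-based alignment. It is not Meta's actual SONAR pipeline: the `embed` function is a hypothetical stand-in for a language-agnostic sentence encoder, and the similarity threshold is arbitrary.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two sentence embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def mine_aligned_pairs(source_sentences, target_sentences, embed, threshold=0.85):
    """Pair sentences from two monolingual corpora whose embeddings sit close
    together in a shared vector space, treating them as automatic translations.

    `embed` maps a sentence (text or transcribed speech) in any language to a
    fixed-length vector; it stands in for a SONAR-like encoder.
    """
    source_vecs = [embed(s) for s in source_sentences]
    target_vecs = [embed(t) for t in target_sentences]
    pairs = []
    for i, s_vec in enumerate(source_vecs):
        # Find the nearest candidate in the other corpus.
        sims = [cosine_similarity(s_vec, t_vec) for t_vec in target_vecs]
        j = int(np.argmax(sims))
        if sims[j] >= threshold:
            pairs.append((source_sentences[i], target_sentences[j], sims[j]))
    return pairs
```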
The Seamless team suddenly got access to millions of aligned texts, even in low-resource languages, along with thousands of hours of transcribed audio. And they used all this data to train their next-gen translator.

Seamless translation

The automatically generated data set was augmented with human-curated texts and speech samples where possible and used to train multiple AI translation models. The largest one was called SEAMLESSM4T v2. It could translate speech to speech from 101 source languages into any of 36 output languages, and translate text to text. It would also work as an automatic speech recognition system in 96 languages, translate speech to text from 101 into 96 languages, and translate text to speech from 96 into 36 languages, all from a single unified model. It also outperformed state-of-the-art cascading systems by 8 percent in speech-to-text and by 23 percent in speech-to-speech translation, based on scores in Bilingual Evaluation Understudy (an algorithm commonly used to evaluate the quality of machine translation).

But it can now do even more than that. The Nature paper published by Meta's Seamless team ends at the SEAMLESSM4T models, but Nature has a long editorial process to ensure scientific accuracy. The paper published on January 15, 2025, was submitted in late November 2023. But in a quick search of arXiv.org, a repository of not-yet-peer-reviewed papers, you can find the details of two other models that the Seamless team has already integrated on top of SEAMLESSM4T: SeamlessStreaming and SeamlessExpressive, which take this AI even closer to making a Star Trek universal translator a reality.

SeamlessStreaming is meant to solve the translation latency problem. The baseline SEAMLESSM4T, despite all the bells and whistles, worked as a standard AI translation tool. You had to say what you wanted to say, push translate, and it spat out the translation. SeamlessStreaming was designed to take this experience a bit closer to what human simultaneous interpreters do: it translates what you're saying as you speak, in a streaming fashion. SeamlessExpressive, on the other hand, is aimed at preserving the way you express yourself in translations. When you whisper or say something in a cheerful manner or shout out with anger, SeamlessExpressive will encode the features of your voice, like tone, prosody, volume, tempo, and so on, and transfer those into the output speech in the target language.

Sadly, it still can't do both at the same time; you can only choose to go for either streaming or expressivity, at least at the moment. Also, the expressivity variant is very limited in supported languages; it only works in English, Spanish, French, and German. But at least it's online, so you can go ahead and give it a spin.

Nature, 2025. DOI: 10.1038/s41586-024-08359-z

Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.
  • A Ukrainian F-16 pilot's unprecedented shootdown of 6 missiles in a single mission shows how its air force has evolved
    www.businessinsider.com
Ukraine said one of its F-16 pilots downed a record-breaking six cruise missiles in one mission. That shows how much Ukraine's air force has developed, a former US F-16 pilot told BI. All of its systems had to work well, and it showed how Ukraine is fighting more like the West.

A Ukrainian pilot's record-breaking shootdown of six missiles with an F-16 offers insight into how much its air force has developed as it fights back against Russia's invasion. Throughout much of the war, Ukraine's air force faced one of the world's biggest air forces with a fleet of older, Soviet-designed combat aircraft while begging the West for F-16s readily available in NATO arsenals.

The US, however, refused to allow the transfer, even as other allies pushed to give Ukraine the aircraft. Washington felt they would arrive too late, that training would take too long, and that the jets could prompt Russian escalation. But it eventually relented. Early usage of the aircraft in combat saw the loss of an airframe and the Ukrainian pilot, raising questions about how much of an impact the jets could make.

But Ukraine's assertion that one of its F-16 pilots downed six Russian cruise missiles in one mission, which it said is a record for the American-made fighter jet, shows how much Ukraine's air force has developed, a former American F-16 pilot told Business Insider. Responding to missile threats requires coordination and quick reaction. Ret. Col. John Venable, a 25-year veteran of the US Air Force and a former F-16 pilot, told BI that the pilot being alert, able to get a notification, and get out in time to intercept all of those missiles "says a lot" about what "the capabilities are of the Ukrainian Air Force."

Ukrainian President Volodymyr Zelenskyy stands against the background of Ukraine's Air Force's F-16 fighter jets in an undisclosed location in Ukraine. AP Photo/Efrem Lukatsky

The reported intercept spoke to "their ability to actually detect" cruise missiles and "then scramble fighters in order to successfully intercept them," he said. Cruise missiles do not fire back like a Russian jet would, but it was a very impressive showing by Ukraine's air force. Responding like this was "no simple task," Venable said; it required all of Ukraine's command and control systems, as well as its sensors and radars, to work together. He said that to "actually find, fix and engage threats that are inbound to your nation, that says a lot about their command and control."

Fighting like the West

Venable said the event shows how much Ukraine has been fighting like the West does. He said Russia's "command and control apparatus is basically scripted," which means they have an issue letting pilots "go out and actually do what you are required to do without someone doing a puppeteer thing over the top of you." The Ukrainian F-16 pilot pulling off what Ukraine says they did "says a lot about how far the Ukrainians have come" from their Soviet start and that "scheme of close control."

Peter Layton, a fellow at the Griffith Asia Institute and a former Royal Australian Air Force officer, told BI the intercept showed the pilot had "good training" since he was "able to react quickly to a changing situation." "Russian pilots have a reputation of needing to receive orders from their ground controllers," he said. This event demonstrates Ukrainian pilots "have adopted Western methods of operating both independently and aggressively when the situation is right."

A US Air Force F-16.
US Air Force photo/Senior Airman Rachel Pakenas

For instance, Ukraine said the pilot, who said he was out of missiles and short on fuel, made a quick decision to keep fighting, pursuing two more of the Russian missiles with guns, a riskier engagement requiring control of the plane and confidence a safe airfield was nearby. Ukraine, generally, has adopted a more Western style of fighting, with individuals and leaders making quick decisions away from the central command. But Russia, though it has been learning, has been hampered by not delegating such responsibility, making it slower to respond to battlefield developments and even losing commanders as a result.

Ukraine's F-16 pilots have received training from a coalition of countries, including the Netherlands, Canada, Denmark, the US, and Romania. The exchange is not one-sided. While many of Ukraine's soldiers have received training from Western allies, those allies say Ukraine is teaching them about tactics and how to fight Russia, too. Western officials and warfare experts say Ukraine's tactics and successes reveal lessons that the West should learn for fighting Russia. These lessons have been something of a trade-off as the West provides more gear and as Ukraine signs agreements with countries like the UK, Denmark, and France, with the war exposing vulnerabilities in systems and tactics.

The Westernization of Ukraine's army aids its ambition to join NATO, an uncertainty while the country is at war with Russia and a question for the aftermath.

A small air force

Before Russia's full-scale invasion, some expected Ukraine's air force would be immediately destroyed in a war with Russia. Russia attempted to wipe out Ukraine's air force at the start but failed, with Ukraine able to disperse many jets and keep them intact. Those surviving aircraft have played key roles in its defense, even as the skies remain heavily contested.

A Ukrainian Air Force F-16 fighter jet flies in an undisclosed location in Ukraine. AP Photo/Efrem Lukatsky

Ukraine's air force is expanding and becoming more Western with the arrival of F-16s and a pledge from France to send Mirage aircraft. Warfare experts say Ukraine has nowhere near enough F-16s to make a difference against Russia, and the few it does have are older versions, less powerful than what many allies have and than Russia's best jets. Ukraine appears to be using its few F-16s primarily to help its air defenses battle missile threats rather than sending them on risky missions against Russian jets or critical ground targets.

The Ukrainian jets, older aircraft based on Lockheed Martin's roughly 50-year-old F-16 design, typically fly with a loadout of four air-to-air missiles and are equipped with bolt-on self-defense pylons for detecting incoming missiles. Venable said the air-defense mission has met his expectations for how Ukraine would use them. Ukraine, Venable said, does not have enough F-16s, nor does it have the support systems or upgrades, to be able to use them aggressively to change the shape of the war.

Ukraine's air force is not perfect, Venable said. But the progress so far is clear. "As far as being able to intercept inbound missiles and being able to engage them, this says a lot about their capabilities."
  • "I Am Disappointed in Architects" Shigeru Ban on Socially Conscious Architecture in Louisiana Channel Interview
    www.archdaily.com
    "I Am Disappointed in Architects" Shigeru Ban on Socially Conscious Architecture in Louisiana Channel InterviewIn a recent interview with Louisiana Channel, acclaimed Japanese architect Shigeru Ban shared his perspectives on architecture, his journey in the field, and his dedication to socially responsible design. Known for his innovative use of materials such as paper and timber, Ban has spent much of his career creating solutions for disaster-stricken communities and displaced populations around the world.Save this picture!Ban's interest in architecture began at an early age, inspired by his admiration for carpenters. However, it was during his preparation for art university that he was introduced to the work of John Hejduk, which motivated him to study at the Cooper Union School of Architecture in New York. Over the years, Ban's career has evolved from designing for private clients to addressing global issues through humanitarian projects.Save this picture! I always look for the problem to solve by design. So if I'm given an unlimited budget with huge flat space or land, I don't know what to do. I always try to look for some good advantage and disadvantage of the condition to use as a reference to design. -- Shigeru Ban Related Article From Paper Tube Shelters to Timber Innovations: Shigeru Ban's Complete Works Explored by Philip Jodidio for Taschen One of Ban's most significant contributions to the field is his development of paper tube structures, which he began exploring in the 1980s. These inexpensive, lightweight, and widely available materials became a cornerstone of his work in designing temporary shelters for those affected by natural disasters. His first major project in this realm was for Rwandan refugees in 1994, where he partnered with the United Nations High Commissioner for Refugees to create shelters using paper tubes. Since then, his efforts have included projects for evacuees in Turkey and Syria, as well as affordable housing solutions for Ukrainian refugees.Save this picture! I always try to take advantage of the context, the location of the building. I like to analyze what is existing to take advantage of the context into the architecture and try to combine inside and outside. -- Shigeru Ban Ban also discusses his approach to architecture, emphasizing the importance of context and resourcefulness in design. He shared that he avoids being influenced by trends, instead focusing on solving problems specific to each project's conditions. This approach extends beyond humanitarian projects to his work on cultural and public buildings, such as the Centre Pompidou-Metz in France and the Swatch/Omega Campus in Switzerland, where he incorporates sustainable materials and adaptable designs.Save this picture!While Ban's work often aligns with environmental principles, he has stated that sustainability is not his primary motivation. Instead, he focuses on minimizing waste and developing practical, efficient solutions to architectural challenges. Ban's career and philosophy are now the subject of a comprehensive monograph, Shigeru Ban: Complete Works 1985Today, authored by Philip Jodidio and published by Taschen. 
In fact, the edition highlights Ban's diverse body of work, from disaster relief shelters to innovative public buildings, offering insights into his design process and the evolution of his practice.

Interviews offer valuable insights into the minds of architects, shedding light on their creative processes, philosophies, and the broader societal impacts of their work. In other similar news, British architect David Chipperfield discusses the challenges and inspirations of living by the sea in his Louisiana Channel interview, describing the environment as both "unforgiving and tough." Similarly, Ali Karimi of Civil Architecture reflects on "creating architecture in an uncivil time," emphasizing the role of architects in addressing sociopolitical tensions through design. In a conversation with Colectivo C733, winners of the 2024 Obel Award, the team highlights their belief that "architecture is a work of generosity," focusing on projects that foster community and inclusivity.
  • Ideas: AI for materials discovery with Tian Xie and Ziheng Lu
    www.microsoft.com
Transcript

[TEASER]

[MUSIC PLAYS UNDER DIALOGUE]

TIAN XIE: Yeah,

ZIHENG LU: Previously, a lot of people are using this atomistic simulator and this generative models alone. But if you think about it, now that we have these two foundation models together, it really can make things different, right. You have a very good idea generator. And you have a very good goalkeeper. And you put them together. They form a loop. And now you can use this loop to design materials really quickly.

[TEASER ENDS]

LINDSAY KALTER: You're listening to Ideas, a Microsoft Research Podcast that dives deep into the world of technology research and the profound questions behind the code. In this series, we'll explore the technologies that are shaping our future and the big ideas that propel them forward.

[MUSIC FADES]

I'm your guest host, Lindsay Kalter. Today I'm talking to Microsoft Principal Research Manager Tian Xie and Microsoft Principal Researcher Ziheng Lu. Tian is doing fascinating work with MatterGen, an AI tool for generating new materials guided by specific design requirements. Ziheng is one of the visionaries behind MatterSim, which puts those new materials to the test through advanced simulations. Together, they're redefining what's possible in materials science. Tian and Ziheng, welcome to the podcast.

TIAN XIE: Very excited to be here.

ZIHENG LU: Thanks, Lindsay, very excited.

KALTER: Before we dig into the specifics of MatterGen and MatterSim, let's give our audience a sense of how you, as researchers, arrived at this moment. Materials science, especially at the intersection of computer science, is such a cutting-edge and transformative field. What first drew each of you to this space? And what, if any, moment or experience made you realize this was where you wanted to innovate? Tian, do you want to start?

XIE: So I started working on AI for materials back in 2015, when I started my PhD. So I come as a chemist and materials scientist, but I was, kind of, figuring out what I want to do during my PhD. So there is actually one moment really drove me into the field. That was AlphaGo. AlphaGo was, kind of, coming out in 2016, where it was able to beat the world champion in go in 2016. I was extremely impressed by that because I, kind of, learned how to do go, like, in my childhood. I know how hard it is and how much effort those professional go players have spent, right, in learning about go. So I, kind of, have the feeling that if AI can surpass the world-leading go players, one day, it will too surpass material scientists, right, in their ability to design novel materials. So that's why I ended up deciding to

LU: That's very interesting, Tian. So, actually, I think I started, like, two years before you as a PhD student. So I, actually, I was trained as a computational materials scientist solely, not really an AI expert. But at that time, the computational materials science did not really work that well. It works but not working that well. So after, like, two or three years, I went back to experiments for, like, another two or three years because, I mean, the experiment is always the gold standard, right. And I worked on this experiments for a few years, and then about three years ago, I went back to this field of computation, especially because of AI. At that time, I think GPT and these large AI models that currently we're using is not there, but we already have their prior forms like BERT, so we see the very large potential of AI. We know that these large AIs might work.
So one idea is really to use AI to learn the entire space of materials and really grasp the physics there, and that really drove me to this field and that's why I'm here working on this field, yeah.

KALTER: We're going to get into what MatterGen and MatterSim mean for materials science, the potential, the challenges, and open questions. But first, give us an overview of what each of these tools are, how they do what they do, and, as this show is about big ideas, the idea driving the work. Ziheng, let's have you go first.

LU: So MatterSim is a tool to do in silico characterizations of materials. If you think about working on materials, you have several steps. You first need to synthesize it, and then you need to characterize this. Basically, you need to know what property, what structures, whatever stuff about these materials. So for MatterSim, what we want to do is to really move the characterization process, a lot of these processes, into using computations. So the idea behind MatterSim is to really learn the fundamentals of physics. So we learn the energies and forces and stresses from these atomic structures and the charge densities, all of these things, and then with these, we can really simulate any sort of materials using our computational machines. And then with these, we can really characterize a lot of these materials properties using our computer, that is very fast. It's much faster than we do experiments so that we can accelerate the materials design. So just in a word, basically, you input your material into your computer, a structure into your computer, and MatterSim will try to simulate these materials like what you do in a furnace or with an XRD.

KALTER: All right, thank you very much. Tian, why don't you tell us about MatterGen?

XIE: Yeah, thank you. So, actually, Ziheng, once you start with explaining MatterSim, it makes it much easier for me to explain MatterGen. So MatterGen actually represents a new way to design materials with generative AI. Material discovery is like finding needles in a haystack. You're looking for a material with a very specific property for a material application. For example, like finding a room-temperature superconductor or finding a solid that can conduct a lithium ion very well inside a battery. So it's like finding one very specific material from a million, kind of, candidates. So the conventional way of doing material discovery is via screening, where you, kind of, go over millions of candidates to find the one that you're looking for, where MatterSim is able to significantly accelerate that process by making the simulation much faster. But it's still very inefficient because you need to go through this million candidates, right. So with MatterGen, you can, kind of, directly generate materials given the prompts of the design requirements for the application. So this means that you can discover materials, discover useful materials, much more efficiently. And it also allows us to explore a much larger space beyond the set of known materials.

KALTER: Thank you, Tian. Can you tell us a little bit about how MatterGen and MatterSim work together?

XIE: So you can really think about MatterSim and MatterGen accelerating different parts of materials discovery process. MatterSim is trying to accelerate the simulation of material properties, while MatterGen is trying to accelerate the search of novel material candidates. It means that they can really work together as a flywheel and you can compound the acceleration from both models.
They are also both foundation AI models, meaning they can both be used for a broad range of materials design problems. So we're really looking forward to see how they can, kind of, work together iteratively as a tool to design novel materials for a broad range of applications.

LU: I think that's a very good, like, general introduction of how they work together. I think I can provide an example of how they really fit together. If you want a material with a specific, like, bulk modulus or lithium-ion conductivity or thermal conductivity for your CPU chips, so basically what you want to do is start with a pool of material structures, like some structures from the database, and then you compute or you characterize your wanted property from that stack of materials. And then what you do, you've got these properties and structure pairs, and you input these pairs into MatterGen. And MatterGen will be able to give you a lot more of these structures that are highly possible to be real. But the number will be very large. For example, for the bulk modulus, I don't remember the number we generated in our work, was that like thousands, tens of thousands?

XIE: Thousands, tens of thousands.

LU: Yeah, that would be a very large number pool even with MatterGen, so then the next step will be, how would you like to screen that? You cannot really just send all of those structures to a lab to synthesize. It's too much, right. That's when MatterSim again comes in. So MatterSim comes in and screens all those structures again and sees which ones are the most likely to be synthesized and which ones have the closest property you wanted. And then after screening, you probably get five, 10 top candidates and then you send to a lab. Boom, everything goes down. That's it.

KALTER: I'm wondering if there's any prior research or advancements that you drew from in creating MatterGen and MatterSim. Were there any specific breakthroughs that influenced your approaches at all?

LU: Thanks, Lindsay. I think I'll take that question first. So interestingly for MatterSim, a very fundamental idea was drawn from Chi Chen, who was a previous lab mate of mine and now also works for Microsoft at Microsoft Quantum. He made this fantastic model named M3GNet, which is a prior form of a lot of these large-scale models for atomistic simulations. That model, M3GNet, actually resolves the near ground state prediction problem. I mean, the near ground state problem sounds like a fancy but not realistic word, but what that actually means is that it can simulate materials at near-zero covalent states. So basically at very low temperatures. So at that time, we were thinking since the models are now able to simulate materials at their near ground states, it's not a very large space. But if you also look at other larger models, like GPT whatever, those models are large enough to simulate entire human language. So it's possible to really extend the capability from these such prior models to very large space. Because we believe in the capability of AI, then it really drove us to use MatterSim to learn the entire space of materials. I mean, the entire space really means the entire periodic table, all the temperatures and the pressures people can actually grasp.
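[To make the generate-then-screen loop Lu outlines above concrete, here is a rough sketch in code. It is only an illustration of the workflow as described in the conversation, not the actual MatterGen or MatterSim API; the generator, property predictor, and stability screen are passed in as hypothetical callables.]

```python
def design_loop(target_property, seed_structures, generate, predict_property,
                stability_score, n_rounds=3, top_k=10):
    """Toy flywheel: a generative model proposes candidates, a simulator screens them.

    generate(pool, target):  stand-in for MatterGen-style conditional generation
    predict_property(s):     stand-in for a MatterSim-style property prediction
    stability_score(s):      stand-in for a stability / synthesizability estimate
    """
    pool = list(seed_structures)
    for _ in range(n_rounds):
        # 1. Idea generator: propose structures conditioned on the target property.
        candidates = generate(pool, target_property)
        # 2. Goalkeeper: score each candidate for stability and property match.
        scored = [(s, predict_property(s), stability_score(s)) for s in candidates]
        # 3. Keep the most stable candidates closest to the target property.
        scored = [item for item in scored if item[2] > 0.5]
        scored.sort(key=lambda item: abs(item[1] - target_property))
        pool = [s for s, _, _ in scored[:top_k]]
    return pool  # a shortlist small enough to hand to an experimental lab
```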
XIE: Yeah, I still remember a lot of the amazing works from Chi Chen whenever we're, kind of, back working on property-prediction models. So, yeah, so the problem of generating materials from properties is actually a pretty old one. I still remember back in 2018, when I was, kind of, working on CGCNN (crystal graph convolutional neural networks) and giving a talk about property-prediction models, right, one of the first questions people asked is, OK, can you inverse this process? Instead of going from material structure to properties, can you, kind of, inversely generate the materials directly from their property conditions? So in a way, this is, kind of, like a dream for material scientists; some people even call it, like, holy grail, because, like, the end goal is really about finding materials property, right, [that] will satisfy your application. So I've been, kind of, thinking about this problem for a while, and also there has been a lot of work, right, over the past few years in the community to build a generative model for materials. A lot of people have tried before, like 2020, using ideas like VAEs or GANs. But it's hard to represent materials in this type of generative model architecture, and many of those models generated relatively poor candidates. So I thought it was a hard problem. I, kind of, know it for a while. But there is no good solutions back then. So I started to focus more on this problem during my postdoc, when I studied that in 2020 and I keep working on that in 2021. At the beginning, I wasn't really sure exactly what approach to take because it's, kind of, like open question and really tried a lot of random ideas. So one day actually in my group back then with Tommi Jaakkola and Regina Barzilay at MIT's CSAIL (Computer Science & Artificial Intelligence Laboratory), we, kind of, get to know this method called diffusion model. It was a very early stage of a diffusion model back then, but it already began to show very promising signs, kind of, achieving state of art in many problems like 3D point cloud generation and the 3D molecular conformer generation. So the work that really inspired me a lot is two works that was for molecular conformer generation. One is ConfGF, and one is GeoDiff. So they, kind of, inspired me to, kind of, focus more on diffusion models. That actually led to CDVAE (crystal diffusion variational autoencoder). So it's interesting that we, kind of, spend like a couple of weeks in trying all this diffusion idea, and without that much work, it actually worked quite out of box. And at that time, CDVAE achieves much better performance than any previous models in materials generation, and we're, kind of, super happy with that. So after CDVAE, I, kind of, joined Microsoft, now working with more people together on this problem of generative model for materials. So we, kind of, know what the limitations of CDVAE are, is that it can do unconditional material generation well, means it can generate novel material structures, but it is very hard to use CDVAE to do property-guided generations. So basically, it uses an architecture called a variational autoencoder, where you have a latent space. So the way that you do property-guided generation there was to do a, kind of, a gradient update inside the latent space. But because the latent space wasn't learned very well, so it actually you cannot do, kind of, good property-guided generation. We only managed to do energy-guided generation, but it wasn't successful in going beyond energy. So that comes us to really thinking, right, how can we make the property-guided generation much better?
So I remember like one day, actually, my colleague, Daniel Zügner, who actually really showed me this blog which basically explains this idea of classifier-free guidance, which is the powerhouse behind the text-image generative models. And so, yeah, then we began to think about, can we actually make the diffusion model work for classifier-free guidance? That led us to remove the, kind of, the variational autoencoder component from CDVAE and begin to work on a pure diffusion architecture. But then there was, kind of, a lot of development around that. But it turns out that classifier-free guidance is the key really to make property-guided generation work, and then combined with a lot more effort in, kind of, improving architecture and also generating more data and also trying out all these different downstream tasks, that ended up leading into MatterGen as we see today.
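[For readers unfamiliar with the idea Xie credits as the key ingredient, classifier-free guidance can be sketched in a few lines. This is the generic recipe used across diffusion models, shown only as an illustration under that assumption rather than MatterGen's specific implementation: the model is trained both with and without the property condition, and at sampling time the two noise predictions are blended with a guidance weight.]

```python
import numpy as np

def classifier_free_guidance(eps_conditional: np.ndarray,
                             eps_unconditional: np.ndarray,
                             guidance_scale: float = 2.0) -> np.ndarray:
    """Blend conditional and unconditional noise predictions at each denoising step.

    eps_conditional:   model output given the property prompt (e.g., a target bulk modulus)
    eps_unconditional: model output with the condition dropped
    guidance_scale:    1.0 recovers the plain conditional model; larger values push
                       samples harder toward the property condition
    """
    return eps_unconditional + guidance_scale * (eps_conditional - eps_unconditional)
```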
KALTER: Yeah, I think you've both done a really great job of explaining how MatterGen and MatterSim work together and how MatterGen can offer a lot in terms of reducing the amount of time and work that goes into finding new materials. Tian, how does the process of using MatterGen to generate materials translate into real-world applications?

XIE: Yeah, that's a fantastic question. So one way that I think about MatterGen, right, is that you can think about it as like a copilot for materials scientists, right. So they can help you to come up with, kind of, potential good hypothesis for the materials design problems that you're looking for. So say you're trying to design a battery, right. So you may have some ideas over, OK, what candidates you want to make, but this is, kind of, based on your own experience, right. Depths of experience as a researcher. But MatterGen is able to, kind of, learn from a very broad set of data, so therefore, it may be able to come up with some good suggestions, even surprising suggestions, for you so that you can, kind of, try this out, right, both with computation or even one day in wet lab and experimentally synthesize it. But I also want to note that this, in a way, this is still an early stage in generative AI for materials, means that I don't expect all the candidates MatterGen generates will be, kind of, suits your needs, right. So you still need to, kind of, look into them with expertise or with some kind of computational screening. But

KALTER: I want to pivot a little bit to the MatterSim side of things. I know identifying new combinations of compounds is key to meeting changing needs for things like sustainable materials. But testing them is equally important to developing materials that can be put to use. Ziheng, how does MatterSim handle the uncertainty of how materials behave under various conditions, and how do you ensure that the predictions remain robust despite the inherent complexity of molecular systems?

LU: Thanks. That's a very, very good question. So uncertainty quantification is a key to make sure all these predictions and simulations are trustworthy. And that's actually one of the questions we got almost every time after a presentation. So people will ask, well, especially those experimentalists, would ask, well, I've been using your model; how do I know those predictions are true under the very complex conditions I'm using in my experiments? So to understand how we deal with uncertainty, we need to know how MatterSim really functions in predicting an arbitrary property, especially under the condition you want, like the temperature and pressure. That would be quite complex, right? So in the ideal case, we would hope that by using MatterSim, you can directly simulate the properties you want using molecular dynamics combined with statistical mechanics. So if so, it would be easy to really quantify the uncertainty because there are just two parts: the error from the model and the error from the simulations, the statistical mechanics. So the error from the model will be able to be measured by, what we call, an ensemble. So basically you start with different random seeds when you train the model, and then when you predict your property, you use several models from the ensemble and then you get different numbers. If the variance from the numbers are very large, you'll say the prediction is not that trustworthy. But a lot of times, we will see the variance is very small. So basically, an ensemble of several different models will give you almost exactly the same number; you're quite sure that the number is somehow very, like, useful. So that's one level of the way we want to get our property. But sometimes, it's very hard to really directly simulate the property you want. For example, for catalytic processes, it's very hard to imagine how you really get those coefficients. It's very hard. The process is just too complicated. So for that process, what we do is to really use the, what we call, embeddings learned from the entire material space. So basically that vector we learned for any arbitrary material. And then start from that, we build a very shallow layer of a neural network to predict the property, but that also means you need to bring in some of your experimental or simulation data from your side. And for that way of predicting a property to measure the uncertainty, it's still like the two levels, right. So we don't really have the statistical error anymore, but what we have is, like, only the model error. So you can still stick to the ensemble, and then it will work, right. So to be short, so MatterSim can provide you an uncertainty to make sure the prediction tells you whether it's true or not.

KALTER: So in many ways, MatterSim is the realist in the equation, and it's there to sort of be a gatekeeper for MatterGen, which is the idea generator.

XIE: I really like the analogy.

LU: Yeah.

KALTER: As is the case with many AI models, the development of MatterGen and MatterSim relies on massive amounts of data. And here you use a simulation to create the needed training data. Can you talk about that process and why you've chosen that approach, Tian?

XIE: So one advantage here is that we can really use large-scale simulation to generate data. So we have a lot of compute here at Microsoft on our Azure platform, right. So how we generate the data is that we use a method called density functional theory, DFT, which is a quantum mechanical method. And we use a simulation workflow built on top with DFT to simulate the stability of materials. So what we do is that we curate a huge amount of material structures from multiple different sources of open data, mostly including Materials Project and Alexandria database, and in total, there are around 3 million materials candidates coming from these two databases. But not all of these structures, they are stable. So therefore, we try to use DFT to compute their stability and try to filter down the candidates such that we are making sure that our training data only have the most stable ones. This leads into around 600,000 training data, which was used to train the base model of MatterGen.
So I want to note that actually we also use MatterSim as part of the workflow because MatterSim can be used to prescreen unstable candidates so that we don't need to use DFT to compute all of them. I think at the end, we computed around 1 million DFT calculations where two-thirds of them, they are already filtered out by MatterSim, which saves us a lot of compute in generating our training data.

LU: Tian, you have a very good description of how we really get those ground state structures for the MatterGen model. Actually, we've been also using MatterGen for MatterSim to really get the training data. So if you think about the simulation space of materials, it's extremely large. So we would think it in a way that it has three axes, so basically the elements, the temperature, and the pressure. So if you think about existing databases, they have pretty good coverage of the elements space. Basically, we think about Materials Project, NOMAD, they really have this very good coverage of lithium oxide, lithium sulfide, hydrogen sulfide, whatever, those different ground-state structures. But they don't really tell you how these materials behave under certain temperature and pressure, especially under those extreme conditions like 1,600 Kelvin, which you really use to synthesize your materials. That's where we really focused on to generate the data for MatterSim. So it's really easy to think about how we generate the data, right. You put your wanted material into a pressure cooker, basically, molecular dynamics; it can simulate the materials behavior on the temperature and pressure. So that's it. Sounds easy, right? But that's not true because what we want is not one single material. What we want is the entire material space. So that will be making the effort almost impossible because the space is just so large. So that's where we really develop this active learning pipeline. So basically, what we do is, like, we generate a lot of these structures for different elements and temperatures, pressures. Really, really a lot. And then what we do is, like, we ask the active learning or the uncertainty measurements to really say whether the model knows about this structure already. So if the model thinks, well, I think I know the structure already. So then, we don't really calculate this structure using density functional theory, as Tian just said. So this will really save us like 99% of the effort in generating the data. So in the end, by combining this molecular dynamics, basically pressure cooker, together with active learning, we gathered around 17 million data for MatterSim. So that was used to train the model. And now it can cover the entire periodic table and a lot of temperature and pressures.
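[The active-learning filter Lu describes, deciding which molecular-dynamics snapshots are worth an expensive DFT calculation, might look roughly like the sketch below. This is a hedged toy version, not the actual MatterSim pipeline: uncertainty is taken as the disagreement across an ensemble of models, and only structures the ensemble is unsure about are sent to DFT.]

```python
import numpy as np

def select_structures_for_dft(structures, ensemble_predict, uncertainty_cutoff=0.05):
    """Keep only the structures the model ensemble is uncertain about.

    structures:         candidate atomic structures sampled from molecular dynamics
    ensemble_predict:   hypothetical callable returning one energy prediction per
                        ensemble member for a given structure
    uncertainty_cutoff: spread above which a structure counts as "unknown" and is
                        worth labeling with DFT; the value here is arbitrary
    """
    to_label = []
    for structure in structures:
        predictions = np.asarray(ensemble_predict(structure))
        uncertainty = float(predictions.std())  # disagreement between ensemble members
        if uncertainty > uncertainty_cutoff:
            to_label.append(structure)          # the model doesn't know this region yet
    return to_label  # run DFT only on these, skipping structures the model already knows
```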
KALTER: Thank you, Ziheng. Now, I'm sure this is not news to either one of you, given that you're both at the forefront of these efforts, but there are a growing number of tools aimed at advancing materials science. So what is it about MatterGen and MatterSim in their approach or capabilities that distinguishes them?

XIE: Yeah, I think I can start. So I think there is, in the past one year, there is a huge interest in building up generative AI tools for materials. So we have seen lots and lots of innovations from the community published in top conferences like NeurIPS, ICLR, ICML, etc. So I think what distinguishes MatterGen, in my point of view, are two things. First is that we are trained with a very big dataset that we curated very, very carefully, and we also spent quite a lot of time refining our diffusion architecture, which means that our model is capable of generating very, kind of, high-quality, highly stable and novel materials. We have some kind of bar plot in our paper showcasing the advantage of our performance. I think that's one key aspect. And I think the second aspect, which in my point of view is even more important, is that it has the ability to do property-guided generation. Many of the works that we saw in the community, they are more focused on the problem of crystal structure prediction, which MatterGen can also do, but we focus more on really property-guided generation because we think this is one of the key problems that really materials scientists care about. So the ability to do a very broad range of property-guided generation, and we have, kind of, both computational and now experimental result to validate those, I think that's the second strong point for MatterGen.

KALTER: Ziheng, do you want to add to that?

LU: Yeah, thanks, Lindsay. So on the MatterSim side, I think it's really the diverse condition it can handle that makes a difference. We've been talking about, like, the training data we collected really covers the entire periodic table and also, more importantly, the temperatures from 0 Kelvin to 5,000 Kelvin and the pressures from 0 gigapascal to 1,000 gigapascal. That really covers what humans can control nowadays. I mean, it's very hard to go beyond that. If you know anyone [who] can go beyond that, let me know. So that really makes MatterSim different. Like, it can handle the realistic conditions. I think beyond that, I would say the combo between MatterSim and MatterGen really makes these set of tools really different. So previously, a lot of people are using this atomistic simulator and this generative models alone. But if you think about it, now that we have these two foundation models together, they really can make things different, right. So we have the predictor; we have the generator; you have a very good idea generator. And you have a very good goalkeeper. And you put them together. They form a loop. And now you can use this loop to design materials really quickly. So I would say to me, now, when I think about it, it's really the combo that makes these set of tools different.

KALTER: I know that I've spoken with both of you recently about how there's so much excitement around this, and it's clear that we're on the precipice of this, as both of you have called it, a paradigm shift. And Microsoft places a very strong emphasis on ensuring that its innovations are grounded in reality and capable of addressing real-world problems. So with that in mind, how do you balance the excitement of scientific exploration with the practical challenges of implementation? Tian, do you want to take this?

XIE: Yeah, I think this is a very, very important point, because there are so many hypes around AI happening right now, right. We must be very, very careful about the claims that we are making so that people will not have unrealistic expectations, right, over what these models can do. So for MatterGen, we're pretty careful about that. We're trying to, basically, we're trying to say that this is an early stage of generative AI in materials where this model will be improved over time quite significantly, but you should not say, oh, all the materials generated by MatterGen is going to be amazing. That's not what is happening today.
KALTER: I know that I've spoken with both of you recently about how there's so much excitement around this, and it's clear that we're on the precipice of what both of you have called a paradigm shift. And Microsoft places a very strong emphasis on ensuring that its innovations are grounded in reality and capable of addressing real-world problems. So with that in mind, how do you balance the excitement of scientific exploration with the practical challenges of implementation? Tian, do you want to take this?

XIE: Yeah, I think this is a very, very important point, because there is so much hype around AI happening right now, right. We must be very, very careful about the claims that we are making so that people will not have unrealistic expectations, right, about what these models can do. So for MatterGen, we're pretty careful about that. We're trying to say that this is an early stage of generative AI in materials, where this model will be improved over time quite significantly, but you should not say, oh, all the materials generated by MatterGen are going to be amazing. That's not what is happening today. So we try to be very careful to understand how far MatterGen is already capable of designing materials with real-world impact. Therefore, we went all the way to synthesize one material that was generated by MatterGen. This material is called tantalum chromium oxide. It is a new material; it has not been discovered before. And it was generated by MatterGen by conditioning on a bulk modulus of 200 gigapascals. Bulk modulus is, like, a measure of how hard the material is to compress. We ended up measuring the synthesized material experimentally, and the measured bulk modulus is 169 gigapascals, which is within 20% of the target. So this is a very good proof of concept, in our point of view, to show that you can actually give it a prompt, right, and then MatterGen can generate a material, and the material actually has a property that is very close to your target. But it's still a proof of concept. And we're still working to see how MatterGen can design materials that are much more useful across a much broader range of applications. And I'm sure that there will be more challenges we see along the way. But we're looking forward to further working with our experimental partners to, kind of, push this further. And also working with MatterSim, right, to see how these two tools can be used to design really useful materials and bring this into real-world impact.

LU: Yeah, Tian, I think that's very well said. It's not really only for MatterGen. For MatterSim, we're also very careful, right. We really want to make sure that people understand how these models really behave under their instructions and understand, like, what they can and cannot do. So I think one thing that we really care about is that in the next few, maybe one or two, years, we want to really work with our experimental partners to make realistic materials in different areas, so that even we can better understand the limitations and at the same time explore the forefront of materials science to make this excitement become real.

KALTER: Ziheng, could you give us a concrete example of what exactly MatterSim is capable of doing?

LU: MatterSim can really do, like, whatever you have on a potential energy surface. What that means is, like, anything that can be simulated with energies, forces, and stresses alone. To give you an example, the first would be the stability of a material. Basically, you input a structure, and from the energies of the relaxed structures, you can really tell whether the material, like, the composition, is likely to be stable, right. Another example would be thermal conductivity. Thermal conductivity is a fundamental property of materials that tells you how fast heat can transfer through the material, right. So MatterSim can really simulate how fast heat can go through your diamond, your graphene, your copper, right. So basically, those are two examples, and they are based on energies and forces alone. But there are things MatterSim cannot do, at least for now. For example, you cannot really do anything related to electronic structure. So you cannot really compute the light absorption of a semitransparent material. That would be a no-no for now.
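The "relax a structure and look at its energies" workflow mentioned here can be sketched in a few lines with ASE. In the example below, ASE's built-in EMT toy potential stands in for a machine-learned force field; the specific calculator choice is an assumption made only for illustration, and in practice an ML calculator such as the one shipped with the MatterSim release would take its place.

```python
# Minimal ASE sketch of relaxing a structure and reading off its energy.
# EMT is ASE's built-in toy potential, standing in for an ML force field.

from ase.build import bulk
from ase.calculators.emt import EMT
from ase.optimize import BFGS

atoms = bulk("Cu", "fcc", a=3.6)              # toy input structure
atoms.calc = EMT()                            # any ASE calculator providing energies and forces
BFGS(atoms, logfile=None).run(fmax=0.02)      # relax until residual forces are small
energy_per_atom = atoms.get_potential_energy() / len(atoms)
print(f"Relaxed energy: {energy_per_atom:.3f} eV/atom")
# Comparing such relaxed energies against competing phases (an energy-above-hull
# style analysis) is what indicates whether a composition is likely to be stable.
```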
KALTER: It's clear from speaking with researchers, both from MatterSim and MatterGen, that despite these very rapid advancements in technology, you take very seriously the responsibility to consider the broader implications of the challenges that are still ahead. How do you think about the ethical considerations of creating entirely new materials and simulating their properties, particularly in terms of things like safety, sustainability, and societal impact?

XIE: Yeah, that's a fantastic question. So it's extremely important that we make sure these AI tools are not misused. A potential misuse, right, as you just mentioned, is that people begin to use these AI tools, MatterGen and MatterSim, to, kind of, design harmful materials. There was actually extensive discussion over how generative AI tools that were originally built for drug design could be misused to create bioweapons. So at Microsoft, we take this very seriously, because we believe that when we create new technologies, we must also ensure that the technology is used responsibly. We have an extensive process to ensure that all of our models respect those ethical considerations. In the meantime, as you mentioned, on sustainability and societal impact, right, there's a huge amount these AI tools, MatterGen and MatterSim, can do for sustainability, because a lot of the sustainability challenges are really, at the end, materials design challenges, right. So I think MatterGen and MatterSim can really help there, helping us to alleviate climate change and to have a positive impact on the broader society.

KALTER: And, Ziheng, how about from a simulation standpoint?

LU: Yeah, I think Tian gave a very good, like, description. At Microsoft, we are really careful about these ethical, like, considerations. So I would add a little bit on the more, like, bright side of things. For MatterSim, like, it really carries out these simulations at atomic scales. One thing you can think about is really the educational purpose. Back in my bachelor's and PhD period, I would sit, like, at the table and really grab a pen to deal with those very complex equations and get into the statistics using my pen. It's really painful. But now with MatterSim, these simulation tools at the atomic level, what you can do is really simulate the reactions, the movement of atoms, at atomic scale in real time. You can really see the chemical reactions and see the statistics. So you can get really the feeling, like a very direct feeling, of how the system works instead of just working on those toy systems with your pen. I think it's going to be a very good educational tool using MatterSim, yeah. Also MatterGen. MatterGen, as, like, a generative tool generating those i.i.d. (independent and identically distributed) samples, will be a perfect example to show students how the Boltzmann distribution works. I think, Tian, you will agree with that, right?

XIE: 100%. Yeah, I really, really like the example that Ziheng mentioned about the educational purposes. I still remember, like, when I was, kind of, taking a materials simulation class, right. Everything was DFT. You, kind of, need to wait for an hour, right, to get some simulation. Maybe then you'll make some animation. Now you can do this in real time. This is, like, a huge step forward, right, for our young researchers to, kind of, gain a sense, right, of how atoms interact at an atomic level.
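The real-time feel described here comes from running molecular dynamics with a fast potential. Below is a small illustrative ASE sketch, again with the EMT toy potential standing in for a machine-learned force field (an assumption for illustration only), that runs a short 300 K trajectory in seconds.

```python
# Short molecular-dynamics sketch of "watching atoms move in real time."
# EMT is ASE's toy potential, standing in for an ML force field here.

from ase import units
from ase.build import bulk
from ase.calculators.emt import EMT
from ase.md.langevin import Langevin
from ase.md.velocitydistribution import MaxwellBoltzmannDistribution

atoms = bulk("Cu", "fcc", a=3.6).repeat((3, 3, 3))       # small copper supercell
atoms.calc = EMT()
MaxwellBoltzmannDistribution(atoms, temperature_K=300)    # initialize velocities at 300 K
dyn = Langevin(atoms, timestep=2 * units.fs, temperature_K=300, friction=0.02)
dyn.run(200)  # a few hundred MD steps finish in seconds on a laptop
print(f"Final temperature: {atoms.get_temperature():.0f} K")
```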
LU: Yeah, and the results are really, I mean, real, not those toy models. I think it's going to be very exciting stuff.

KALTER: And, Tian, I'm directing this question to you, even though, Ziheng, I'm sure you can chime in as well. But, Tian, I know that you and I have previously discussed this specifically. I know that you said back in, you know, 2017, 2018, that you knew an AI-based approach to materials science was possible but that even you were surprised by how far the technology has come so fast in aiding this area. What is the status of these tools right now? Are they in use? And if so, who are they available to? And, you know, what's next for them?

XIE: Yes, this is a fantastic question, right. So I think for AI generative tools like MatterGen, as I said many times earlier, it's still in its early stages. MatterGen is the first tool where we managed to show that generative AI can enable very broad property-guided generation, and we have managed to get experimental validation to show it's possible. But it will take more work to show, OK, it can actually design batteries, can design solar cells, right. It can design really useful materials in these broader domains. So this is, kind of, exactly why we are now taking a pretty open approach with MatterGen. We make our code, our training data, and model weights available to the general public. We're really hoping the community can use our tools on the problems that they care about and even build on top of them. In terms of what's next, I always like to use what happened with generative AI for drugs, right, to, kind of, predict how generative AI will impact materials. Three years ago, there was a lot of research around generative models for drugs, first coming from the machine learning community, right. Then all the big drug companies began to take notice, and researchers in these drug companies began to use these tools in actual drug design processes. My colleague Marwin Segler, because he, kind of, works together with Novartis in the Microsoft and Novartis collaboration, has basically been telling me that at the beginning, all the chemists in the drug companies were very suspicious, right. The molecules generated by these generative models all look a bit weird, so they didn't believe this would work. But once these chemists see one or two examples that actually turn out to perform pretty well in the experimental results, then they begin to build more trust, right, in these generative AI models. And today, these generative AI tools are part of the standard drug discovery pipeline that is widely used in all the drug companies. That is today. So I think generative AI for materials is going through a very similar period. People will have doubts; people will have suspicions at the beginning. But I think in three years, right, it will become a standard tool for how people are going to design new solar cells, design new batteries, and many other different applications.

KALTER: Great. Ziheng, do you have anything to add to that?

LU: So actually, for MatterSim, we released the model, I think, back in December last year. I mean, both the weights and the model, right. So we're really grateful for how much the community has contributed to the repo.
And now, I mean, we really welcome the community to contribute more to both MatterSim and MatterGen via our open-source code bases. So, I mean, the community effort is really important, yeah.

KALTER: Well, it has been fascinating to pick your brains, and as we close, you know, I know that you're both capable of quite a bit, which you have demonstrated. I know that asking you to predict the future is a big ask, so I won't explicitly ask that. But just as a fun thought exercise, let's fast-forward 20 years and look back. How have MatterGen and MatterSim and the big ideas behind them impacted the world, and how are people better off because of how you and your teams have worked to make them a reality? Tian, you want to start?

XIE: Yeah, I think one of the biggest challenges our human society is going to face, right, in the next 20 years is going to be climate change, right, and there are so many materials design problems people need to solve in order to properly handle climate change, like finding new materials that can absorb CO2 from the atmosphere to create a carbon capture industry, or battery materials that are able to do large-scale energy grid storage so that we can fully utilize all the wind and solar power, etc., right. So if you want me to make one prediction, I really believe that these AI tools, like MatterGen and MatterSim, are going to play a central role in our ability to design these new materials for climate problems. So in 20 years, I would like to see that we have already solved climate change, right. We have large-scale energy storage systems designed by AI, so that we have basically removed all the fossil fuels, right, from our energy production, and for the rest of the carbon emissions that are very hard to remove, we will have a carbon capture industry with materials designed by AI that absorb the CO2 from the atmosphere. It's hard to predict exactly what will happen, but I think AI will play a key role, right, in defining what our society will look like in 20 years.

LU: Tian, very well said. So I think instead of really describing the future, I would quote a science fiction scene from Iron Man. Basically, in 20 years, I will say, when we want to really get a new material, we will just sit in an office and say, "Well, J.A.R.V.I.S., can you design us a new material that really fits my newest MK 7 suit?" And that will be it. It will run automatically, and we get this auto lab running, with all those MatterGen and MatterSim, these AI models, running, and then probably in a few hours, in a few days, we get the material.

KALTER: Well, I think I speak for many people from several industries when I say that I cannot wait to see what is on the horizon for these projects. Tian and Ziheng, thank you so much for joining us on Ideas. It's been a pleasure.

[MUSIC]

XIE: Thank you so much.

LU: Thank you.

[MUSIC FADES]
  • Neolithic Europeans sacrificed stones to beg the sun to return
    www.popsci.com
    Two sun stones, small flat shale pieces with finely incised patterns and sun motifs. They are known only from the Danish island of Bornholm in the Baltic Sea. National Museum of Denmark

    For all of Earth's roughly 4.5-billion-year history, volcanic eruptions and plate tectonics have been shaping life, for better or worse. A volcanic eruption sometime around 2,900 BCE in what is now Northern Europe may have blocked out the sun and harmed the agriculture-dependent Neolithic peoples living there. Now, climatological evidence indicates that some shale "sun stones" found in present-day Denmark were likely related to this cataclysmic volcanic eruption. The findings are detailed in a study published January 16 in the journal Antiquity.

    Eruptions so large that they affect the weather have occurred numerous times in Earth's history. In 43 BCE, a volcano in present-day Alaska spewed so much sulphur into the air that Ancient Greek and Roman sources documented failed harvests and years of famine and disease. Mount Tambora in Indonesia erupted in 1815 and led to the "year without a summer" in Europe and the United States, cooling temperatures for over three years and reducing crop yields.

    While scientists do not have any similar written sources for eruptions from Neolithic times, about 10,000 to 2,000 BCE, they do have ice cores. These long time capsules brought up from glaciers can record hundreds of thousands of years of climate history.

    In this new study, climate scientists analyzed ice cores from the Greenland ice sheet. They analyzed the rings on bits of wood in the cores and found evidence of frost during the spring and summer months both before and after 2,900 BCE. The team could also see reduced radiation from the sun and consequent cooling. Both of these can be traced in parts of North America and Europe around this same time period.

    That new climate data from the ice cores sheds new light on a group of unique archaeological artifacts called sun stones from the Vasagård site on the island of Bornholm in Denmark.

    "We have known for a long time that the sun was the focal point for the early agricultural cultures we know of in Northern Europe," study co-author and University of Copenhagen archeologist Rune Iversen said in a statement. "They farmed the land and depended on the sun to bring home the harvest. If the sun almost disappeared due to mist in the stratosphere for longer periods of time, it would have been extremely frightening for them."

    Several sun stones were found at the Vasagård West site in 2017. They are flat pieces of shale engraved with images of the sun. The stones are believed to have symbolized fertility and may have been sacrificed to ensure sun and growth. The sun stones were placed into ditches along with animal bones, believed to be the remains of ritual feasts, broken clay vessels, and flint objects sometime around 2,900 BCE. The ditches were then closed up with the sacrificial objects inside.

    According to the team, there is a high chance that a connection exists between the volcanic eruption, the subsequent changes in climate, and the ritual sun stone sacrifices.
    In addition to a deteriorating climate, Northern European Neolithic cultures were also affected by the plague and a major shift in cultural traditions.

    "It is reasonable to believe that the Neolithic people on Bornholm wanted to protect themselves from further deterioration of the climate by sacrificing sun stones," said Iversen. "Or perhaps they wanted to show their gratitude that the sun had returned again."

    Four of the sun stones are slated to go on display at the National Museum of Denmark in Copenhagen on January 28.

    "The sun stones are completely unique, also in a European context," Lasse Vilien Sørensen, a study co-author and senior researcher at the National Museum of Denmark, said in a statement. "The closest we get to a similar sun cult in the Neolithic is some passage graves in southern Scandinavia or henge structures like Stonehenge in England, which some researchers associate with the sun."
  • Championing queer scientists of colour: 'I don't think we've scratched the surface on systemic exclusion'
    www.nature.com
    Nature, Published online: 16 January 2025; doi:10.1038/d41586-024-04069-8. Arachnologist Lauren Esposito develops community platforms to share LGBTQ+ voices and wants to shed light on the overlooked intersectional barriers affecting queer people of colour.
  • Refining my X-ray technique from earlier today
    i.redd.it