• WWW.FACEBOOK.COM
    Photos from Games Mix's post
    #GamesMix | #PS5Pro | #PlayStation5
  • WWW.FACEBOOK.COM
    Photos from American Society of Landscape Architects's post
    "Early design phases are the most important time to set decarbonization goals. The design work that follows should: - Prioritize strategic, low-carbon decisions- Follow through with those decisions during construction documentation- Successfully deliver a low-carbon project" -- Alejandra Hinojosa, Affil. ASLA, and Mariana Ricker, ASLA, SWA Group ICYMI: ASLA has released a NEW free guide to Decarbonizing the Design Process This guide offers a phase-by-phase structure to decarbonize design through big ideas, strategies, and best practices. It is high-level, offering approaches that can be implemented regardless of project type, scope, and scale.Access the free guide: https://bit.ly/47RrXl1Image Credit: ASLA 2020 Professional General Design Honor Award. Naval Cemetery Landscape, Brooklyn, NY. Nelson Byrd Woltz Landscape Architects / Max Touhey
  • WWW.FACEBOOK.COM
    Thunderbolts Teaser
    A group of supervillains is recruited to go on missions for the government.
    https://adapt.one/editorial/link/180/Thunderbolts+Teaser/
  • WWW.FACEBOOK.COM
    Lost in the Sky (Short film)
    A lone rescue robot in a strange galaxy must reach a surviving astronaut before she's consumed by a looming black hole. A live-action space adventure made entirely with practical effects.
    https://adapt.one/editorial/link/179/Lost+in+the+Sky+%28Short+film%29/
  • WWW.FACEBOOK.COM
    Making video games hyper realistic
    Bluedrake42 is working on a new tool that adds an AI layer to your video game projects and makes them look spectacular.
    https://adapt.one/editorial/link/178/Making+video+games+hyper+realistic/
  • EN.WIKIPEDIA.ORG
    Wikipedia picture of the day for November 11
    Shirley Graham Du Bois (November 11, 1896 – March 27, 1977) was an American-Ghanaian writer, playwright, composer, and activist for African-American causes. Born in Indianapolis to an Episcopal minister, she moved with her family throughout the United States as a child. After marrying her first husband, she moved to Paris to study music at the Sorbonne. After her divorce and return to the United States, Graham Du Bois took positions at Howard University and Morgan College before completing her BA and master's at Oberlin College in Ohio. Her first major work was the opera Tom-Tom, which premiered in Cleveland in 1932. She married W.E.B. Du Bois in 1951, and the couple later lived in Ghana, Tanzania, and China. She won several prizes, including an Anisfield-Wolf Book Award for her 1949 biography of Benjamin Banneker. This photograph of Graham Du Bois was taken by Carl Van Vechten in 1946.
    Photograph credit: Carl Van Vechten; restored by Adam Cuerden
  • EN.WIKIPEDIA.ORG
    On this day: November 11
    November 11: Armistice Day (known as Remembrance Day in the Commonwealth of Nations and Veterans Day in the United States); Singles' Day in China and Southeast Asia
    1778 – American Revolutionary War: British forces and their Iroquois allies attacked a fort and the village of Cherry Valley, New York, killing 14 soldiers and 30 civilians.
    1813 – War of 1812: British-Canadian forces repelled an American attack at the Battle of Crysler's Farm, forcing the United States to give up their attempt to capture Montreal.
    1934 – The Shrine of Remembrance (pictured), a memorial to all Australians who have served in war, opened in Melbourne.
    1999 – The House of Lords Act was given royal assent, removing most hereditary peers from the British House of Lords.
    2008 – After 30 years in power, Maumoon Abdul Gayoom was succeeded by Mohamed Nasheed as president of the Maldives.
    Born on this day: Martha Annie Whiteley (b. 1866), Édouard Vuillard (b. 1868), Maria Teresa de Filippis (b. 1926), Leonardo DiCaprio (b. 1974)
  • VENTUREBEAT.COM
    AGI is coming faster than we think: we must get ready now
    As we are on the brink of breakthroughs in AGI and superintelligence, we need to assess whether we are truly ready for this transformation.
  • WWW.MARKTECHPOST.COM
    Salesforce AI Research Introduces Moirai-MoE: A MoE Time Series Foundation Model that Achieves Token-Level Model Specialization Autonomously
    Time series forecasting has long been integral to finance, healthcare, meteorology, and supply chain management. Its main objective is to predict future data points based on historical observations, which can be challenging due to the complex and varying nature of time series data. Recent advancements in machine learning, particularly foundation models, have transformed this domain by creating generalized models capable of handling various time series without specialized, case-specific training. These foundation models mark a significant shift from traditional approaches that required multiple models tailored to specific datasets. However, the diversity in time series characteristics, such as variations in frequency, seasonality, and underlying patterns, continues to present substantial challenges for unified model training.

    A key problem in time series forecasting is handling data heterogeneity effectively. Time series data from different sources vary significantly in frequency, distribution, and structure. Current forecasting models often rely on human-defined, frequency-based specialization to address this diversity. However, frequency alone is not a reliable indicator of a time series pattern: data with similar frequencies may exhibit distinct behaviors, while data with different frequencies may display similar patterns. This approach therefore fails to capture the complexity and diversity inherent in real-world time series. Another challenge lies in the non-stationary nature of time series data, where the statistical properties of the data change over time, making it difficult to model accurately with frequency-based grouping.

    Existing time series forecasting methods attempt to address data variability in varied ways. For instance, models such as TEMPO and UniTime incorporate language-based prompts to help the model discern different data sources, achieving limited dataset-level specialization. Other models, like TimesFM, maintain frequency-specific embedding dictionaries to aid in distinguishing between data types based on frequency. Many models, including the widely recognized Chronos series, instead opt for a generalized structure without specialized modules, which increases model complexity and parameter demands. The shared weakness of these methods is their inability to fully capture the diverse nature of time series data, since frequency only loosely correlates with underlying data patterns, leading to inefficiencies and compromised model accuracy.

    Researchers from Salesforce AI Research, the National University of Singapore, and the Hong Kong University of Science and Technology introduced an innovative model called MOIRAI-MoE. MOIRAI-MoE integrates a sparse mixture of experts (MoE) within its Transformer architecture, allowing token-level specialization without human-defined frequency heuristics. This data-driven approach minimizes dependency on predefined frequency-based layers and uses a single input/output projection layer, enabling the model to automatically capture and represent diverse patterns. By achieving token-level specialization, MOIRAI-MoE provides a more flexible and efficient solution capable of better representing the unique characteristics of varied time series data without requiring distinct models for each frequency category.

    MOIRAI-MoE's architecture leverages a gating function that assigns each token to an appropriate expert within the Transformer layers, based on token clustering derived from a pretrained model. The clustering is guided by Euclidean distance to centroids, so tokens with similar patterns are processed by the same expert while specialized experts handle diverse tokens. By incorporating 32 expert networks, each focusing on unique time series characteristics, MOIRAI-MoE reduces computational overhead while enhancing its ability to generalize across different data types, and it adapts dynamically to pattern shifts in non-stationary data. A rough sketch of this routing idea is shown below.
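    To make the gating concrete, here is a minimal sketch of nearest-centroid token routing (shapes, names, and random data are invented for illustration; this is not the authors' code):

```python
import torch

num_experts, d_model = 32, 64
# Cluster centroids; per the paper, these derive from clustering the tokens
# of a pretrained model (random here purely for illustration).
centroids = torch.randn(num_experts, d_model)

def route(tokens: torch.Tensor) -> torch.Tensor:
    """Assign each token embedding to the expert with the nearest centroid."""
    flat = tokens.reshape(-1, d_model)
    dists = torch.cdist(flat, centroids)  # (num_tokens, num_experts) Euclidean distances
    return dists.argmin(dim=-1).reshape(tokens.shape[:-1])

tokens = torch.randn(2, 16, d_model)  # (batch, sequence, d_model)
expert_ids = route(tokens)            # one expert index per token
print(expert_ids.shape)               # torch.Size([2, 16])
```

    Tokens with similar embeddings land on the same expert, which is what lets each of the 32 experts specialize on one kind of pattern.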
    Extensive testing across 39 datasets demonstrated the superior performance of MOIRAI-MoE in both in-distribution and zero-shot forecasting scenarios. For in-distribution forecasting, MOIRAI-MoE outperformed its dense model counterpart by up to 17%, a significant improvement in accuracy while using up to 65 times fewer activated parameters than other leading models, including TimesFM and Chronos. In zero-shot forecasting, where the model was tested on datasets not included in the training data, MOIRAI-MoE surpassed traditional models, achieving a 3-14% improvement in continuous ranked probability score (CRPS) and an 8-16% improvement in mean absolute scaled error (MASE) over prior models. These results underscore the model's robust generalization ability without requiring task-specific training.
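    For readers unfamiliar with the second metric: MASE scales the forecast's absolute error by the in-sample error of a naive one-step forecast, so values below 1 beat the naive baseline. A small sketch with made-up numbers (not data from the paper):

```python
import numpy as np

def mase(y_true, y_pred, y_train, season=1):
    """Mean absolute scaled error: forecast error scaled by the in-sample
    error of a naive (seasonal) one-step forecast."""
    naive_error = np.mean(np.abs(y_train[season:] - y_train[:-season]))
    return np.mean(np.abs(y_true - y_pred)) / naive_error

y_train = np.array([10.0, 12.0, 11.0, 13.0, 12.0, 14.0])  # history
y_true  = np.array([13.0, 15.0])                          # actual future values
y_pred  = np.array([12.5, 14.0])                          # model forecasts
print(round(mase(y_true, y_pred, y_train), 3))            # 0.469 -> beats naive
```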
    Key takeaways from this research:
    - Data-driven specialization: by achieving token-level specialization through a sparse mixture of experts, MOIRAI-MoE overcomes the limitations of human-defined frequency specialization, allowing a more nuanced representation of time series diversity.
    - Computational efficiency: the model's sparse expert activation drastically reduces computational demands, achieving up to 65 times fewer activated parameters while maintaining high accuracy.
    - Performance gains: testing on diverse datasets confirmed that MOIRAI-MoE surpasses dense models and foundation models like TimesFM and Chronos, achieving a 17% improvement over dense counterparts in in-distribution tests.
    - Scalability and generalization: MOIRAI-MoE demonstrates strong zero-shot performance, making it applicable to real-world forecasting tasks without specialized training for each application, which is critical in fields like finance, healthcare, and climate modeling.

    In conclusion, MOIRAI-MoE represents a major advancement in time series forecasting by introducing a flexible, data-driven approach that overcomes the limitations of frequency-based specialization. With its sparse mixture-of-experts architecture, MOIRAI-MoE addresses the diverse and non-stationary nature of time series data while achieving significant computational efficiency and performance gains. This approach underscores the potential of token-level specialization, paving the way for future improvements in time series foundation models and expanding the utility of zero-shot forecasting across industries and applications.
  • TOWARDSAI.NET
    Can an LLM Beat You at Chess?
    Author(s): Arthur Lagacherie. Originally published on Towards AI.

    We can use Outlines to answer this question. Recently, I discovered a Python package called Outlines, which provides a versatile way to leverage Large Language Models (LLMs) for tasks like:
    - Classification
    - Named entity extraction
    - Generating synthetic data
    - Summarizing a document
    - Playing chess (plus five other uses)

    GitHub: dottxt-ai/outlines (Structured Text Generation)

    In this article, I will explore various configurations for chess games, including human-versus-LLM matches, where a human competes against an AI model, as well as LLM-versus-LLM setups, where two AI models play against each other.

    How it works

    To accomplish this task easily, Outlines uses a sampling technique different from the usual one. First, what is sampling in an LLM? When generating the next token, an LLM returns a probability for each token in its vocabulary, ranging from 0% to 100%. There are various ways to select from these predicted tokens, and this selection process is known as sampling.

    Outlines, instead of applying sampling to all tokens, selects only the tokens matching the text format you want to generate and then applies sampling to this subset. To pick those tokens here, it uses a regex, rebuilt after each move, that matches only legal moves. (See "Efficient Guided Generation for Large Language Models" on arxiv.org for how constrained text generation can be reformulated this way.)
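    To make that concrete, here is a minimal sketch of the masking idea with a toy vocabulary and made-up scores (not the real Outlines internals):

```python
import re
import numpy as np

# Toy vocabulary and raw model scores (invented for illustration).
vocab = ["e4", "d4", "Nf3", "Ke2", "hello", "the"]
logits = np.array([2.0, 1.5, 1.0, 0.5, 3.0, 2.5])

# Regex built from the board's legal moves (a hand-picked subset here).
legal_pattern = re.compile(r"e4|d4|Nf3")

# Keep only tokens that match the legal-move regex, then sample from those.
mask = np.array([bool(legal_pattern.fullmatch(tok)) for tok in vocab])
masked = np.where(mask, logits, -np.inf)
probs = np.exp(masked - masked[mask].max())
probs /= probs.sum()

print(np.random.choice(vocab, p=probs))  # always one of e4 / d4 / Nf3
```

    Note that "hello" has the highest raw score but is not a legal move, so it can never be sampled.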
    LLM vs LLM

    The first thing I want to do is LLM vs. LLM, but with just one LLM to begin. For this we need some Python libraries:

```python
!pip install outlines -q
!pip install chess -q
!pip install transformers accelerate einops -q

import chess, chess.svg, re
from outlines import generate, models
from IPython.display import Image, display, clear_output
```

    chess is a library to handle the board; IPython and chess.svg are used to display it.

    Next, we need the function that builds the regex specifying the text format for Outlines:

```python
def legal_moves_regex(board):
    """Build a regex that only matches moves that are legal on this board."""
    legal_moves = list(board.legal_moves)
    legal_moves_str = [board.san(move) for move in legal_moves]
    # Strip check/mate markers so the pattern matches the bare move text.
    legal_moves_str = [re.sub(r"[+#]", "", move) for move in legal_moves_str]
    return "|".join(re.escape(move) for move in legal_moves_str)
```

    This function returns a string like:

    'Nh3|Nf3|Nc3|Na3|h3|g3|f3|e3|d3|c3|b3|a3|h4|g4|f4|e4|d4|c4|b4|a4'

    that is, all the legal moves in the current board state.

    Now that we have the libraries and the regex generator, we can download the model:

```python
model = models.transformers("google/gemma-2-2b-it", device="auto")
```

    And the final cell of code runs the main loop:

```python
board = chess.Board("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1")
prompt = "Let's play Chess. Moves: "
board_state = " "
turn_number = 0

while not board.is_game_over():
    regex_pattern = legal_moves_regex(board)
    structured = generate.regex(model, regex_pattern)(prompt + board_state)
    move = board.parse_san(structured)
    if turn_number % 2 == 0:  # White's turn
        board_state += board.san(move) + " "
    else:                     # Black's turn: also record the move number
        board_state += board.san(move) + " " + str(turn_number) + "."
    turn_number += 1
    board.push(move)
    clear_output(wait=True)
    display(chess.svg.board(board, size=250, lastmove=move))
```

    First, we define the chessboard, the prompt, the board state, and the turn number. Then we loop over the game: each turn, we build the regex, generate a move, update the board state, and display the chessboard. Let's run it. [Video by author]

    Gemma 2b vs. Smollm2 1.7b

    Now it's time to do the same with two LLMs. Let's import them:

```python
model1 = models.transformers("Arthur-LAGACHERIE/Gemma-2-2b-4bit", device="cuda")
model2 = models.transformers("HuggingFaceTB/SmolLM2-1.7B-Instruct", device="cuda")
```

    Note: here I use a quantized version of Gemma 2b, so I first install bitsandbytes (pip install -q bitsandbytes).
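    As an aside, a pre-quantized checkpoint like that one can be produced by loading the base model in 4-bit with bitsandbytes. A sketch using plain transformers (how the quantized weights are wired into Outlines varies by Outlines version, so treat this as an assumption):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Load Gemma 2b with 4-bit weights via bitsandbytes (requires a CUDA GPU).
bnb_config = BitsAndBytesConfig(load_in_4bit=True)
model_4bit = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-2b-it",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
```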
    We also need to change the game loop a little:

```python
board = chess.Board("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1")
prompt = "Let's play Chess. Moves: "
board_state = " "
turn_number = 0

while not board.is_game_over():
    if turn_number % 2 == 0:  # White's turn: model1
        regex_pattern = legal_moves_regex(board)
        structured = generate.regex(model1, regex_pattern)(prompt + board_state)
        move = board.parse_san(structured)
        board_state += board.san(move) + " "
    else:                     # Black's turn: model2
        regex_pattern = legal_moves_regex(board)
        structured = generate.regex(model2, regex_pattern)(prompt + board_state)
        move = board.parse_san(structured)
        board_state += board.san(move) + " " + str(turn_number) + "."
    turn_number += 1
    board.push(move)
    clear_output(wait=True)
    display(chess.svg.board(board, size=250, lastmove=move))

print("0" if turn_number % 2 != 0 else "1")  # crude winner flag: which side moved last
```

    (I also added the last line to print the winner.)

    Let's run it. [Gemma vs. Smollm2: gif by the author]

    After a long and difficult war between Gemma 2b and Smollm2 1.7b, with dozens of dumb moves on both sides, the winner is Smollm2. But if you look at the game more closely, you will see plenty of blunders; the two LLMs play like a three-year-old human.

    LLM vs. Human

    Now that we've seen LLMs pitted against each other, let's see how a language model fares against a human player (me). First, let's download the model. I will take Smollm2 1.7b, since it won against Gemma 2b:

```python
model = models.transformers("HuggingFaceTB/SmolLM2-1.7B-Instruct", device="auto")
```

    Then we update the main loop a little:

```python
board = chess.Board("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1")
display(chess.svg.board(board, size=250))

prompt = "Let's play Chess. Moves: "
board_state = " "
turn_number = 0

while not board.is_game_over():
    if turn_number % 2 == 0:  # White's turn: the human types a move
        inp = input("Your move: ")
        move = board.parse_san(inp)
        board_state += board.san(move) + " "
    else:                     # Black's turn: the model
        regex_pattern = legal_moves_regex(board)
        structured = generate.regex(model, regex_pattern)(prompt + board_state)
        move = board.parse_san(structured)
        board_state += board.san(move) + " " + str(turn_number) + "."
    turn_number += 1
    board.push(move)
    clear_output(wait=True)
    display(chess.svg.board(board, size=250, lastmove=move))

print("0" if turn_number % 2 != 0 else "1")  # crude winner flag: which side moved last
```

    And run it. [Me vs. Smollm2: video by author]

    I won in 3 minutes; the model's chess skills are quite limited.

    Conclusion

    The models aren't very intelligent at chess, likely due to their small number of parameters. With the guidance from this article, you can now experiment with LLMs in a chess setting, though you may not see grandmaster-level gameplay. I hope you enjoyed this article; if so, you can clap it. (You can also follow me =).

    Published via Towards AI