• Making video games hyper realistic
    www.facebook.com
    Bluedrake42 is working on a spectacular new tool that adds an AI layer to your video game projects and transforms how they look. https://adapt.one/editorial/link/178/Making+video+games+hyper+realistic/
  • Wikipedia picture of the day for November 11
    en.wikipedia.org
    Shirley Graham Du Bois (November 11, 1896 – March 27, 1977) was an American-Ghanaian writer, playwright, composer, and activist for African-American causes. Born in Indianapolis to an Episcopal minister, she moved with her family throughout the United States as a child. After marrying her first husband, she moved to Paris to study music at the Sorbonne. After her divorce and return to the United States, Graham Du Bois took positions at Howard University and Morgan College before completing her BA and master's at Oberlin College in Ohio. Her first major work was the opera Tom-Tom, which premiered in Cleveland in 1932. She married W.E.B. Du Bois in 1951, and the couple later lived in Ghana, Tanzania and China. She won several prizes, including an Anisfield-Wolf Book Award for her 1949 biography of Benjamin Banneker. This photograph of Graham Du Bois was taken by Carl Van Vechten in 1946. Photograph credit: Carl Van Vechten; restored by Adam Cuerden.
  • On this day: November 11
    en.wikipedia.org
    November 11: Armistice Day (known as Remembrance Day in the Commonwealth of Nations and Veterans Day in the United States); Singles' Day in China and Southeast Asia.
    1778: American Revolutionary War: British forces and their Iroquois allies attacked a fort and the village of Cherry Valley, New York, killing 14 soldiers and 30 civilians.
    1813: War of 1812: British-Canadian forces repelled an American attack at the Battle of Crysler's Farm, forcing the United States to give up its attempt to capture Montreal.
    1934: The Shrine of Remembrance (pictured), a memorial to all Australians who have served in war, opened in Melbourne.
    1999: The House of Lords Act was given royal assent, removing most hereditary peers from the British House of Lords.
    2008: After 30 years in power, Maumoon Abdul Gayoom was succeeded by Mohamed Nasheed as president of the Maldives.
    Births: Martha Annie Whiteley (b. 1866), Édouard Vuillard (b. 1868), Maria Teresa de Filippis (b. 1926), Leonardo DiCaprio (b. 1974).
  • AGI is coming faster than we think: we must get ready now
    venturebeat.com
    As we are on the brink of breakthroughs in AGI and superintelligence, we need to assess whether we are truly ready for this transformation.
  • Salesforce AI Research Introduces Moirai-MoE: A MoE Time Series Foundation Model that Achieves Token-Level Model Specialization Autonomously
    www.marktechpost.com
    Time series forecasting has long been integral to finance, healthcare, meteorology, and supply chain management. Its main objective is to predict future data points based on historical observations, which can be challenging due to the complex and varying nature of time series data. Recent advancements in machine learning, particularly foundation models, have transformed this domain by creating generalized models capable of handling various time series without specialized, case-specific training. These foundation models mark a significant shift from traditional approaches that required multiple models tailored to specific datasets. However, the diversity of time series characteristics, such as variations in frequency, seasonality, and underlying patterns, continues to present substantial challenges for unified model training.

    A key problem in time series forecasting is handling data heterogeneity effectively. Time series data from different sources vary significantly in frequency, distribution, and structure. Current forecasting models often rely on human-defined, frequency-based specialization to address this diversity. However, frequency alone is not a reliable indicator of a time series pattern: data with similar frequencies may exhibit distinct behaviors, while data with different frequencies may display similar patterns. Such an approach fails to capture the complexity and diversity inherent in real-world time series. Another challenge lies in the non-stationary nature of time series data, where the statistical properties of the data change over time, making it difficult to model accurately with frequency-based grouping.

    Existing time series forecasting methods attempt to address data variability with varied approaches. For instance, models such as TEMPO and UniTime incorporate language-based prompts to help the model discern different data sources, achieving limited dataset-level specialization.
    Other models, like TimesFM, maintain frequency-specific embedding dictionaries to distinguish between data types based on frequency. Many models, including the widely recognized Chronos series, instead opt for a generalized structure without specialized modules, at the cost of increased model complexity and large parameter demands. The challenge with these methods is their inability to fully capture the diverse nature of time series data: frequency only sometimes correlates with underlying data patterns, leading to inefficiencies and compromised model accuracy.

    Researchers from Salesforce AI Research, the National University of Singapore, and the Hong Kong University of Science and Technology introduced an innovative model called MOIRAI-MoE. MOIRAI-MoE integrates a sparse mixture of experts (MoE) within its Transformer architecture, allowing token-level specialization without human-defined frequency heuristics. This data-driven approach minimizes dependency on predefined frequency-based layers and uses a single input/output projection layer, enabling the model to automatically capture and represent diverse patterns. By achieving token-level specialization, MOIRAI-MoE provides a more flexible and efficient solution capable of better representing the unique characteristics of varied time series data without requiring distinct models for each frequency category.

    MOIRAI-MoE's architecture leverages a gating function that assigns each token to an appropriate expert within the Transformer layers based on token clustering derived from a pretrained model. The clustering is guided by Euclidean distance to centroids, so tokens with similar patterns are processed by the same expert while specialized experts handle diverse tokens. By incorporating 32 expert networks, each focusing on unique time series characteristics, MOIRAI-MoE effectively reduces computational overhead while enhancing its ability to generalize across different data types. This approach enables MOIRAI-MoE to excel at representing non-stationary time series data by dynamically adapting to pattern shifts within the data.

    Extensive testing across 39 datasets demonstrated the superior performance of MOIRAI-MoE in both in-distribution and zero-shot forecasting scenarios. For in-distribution forecasting, MOIRAI-MoE outperformed its dense model counterpart by up to 17%, showcasing a significant improvement in accuracy while utilizing up to 65 times fewer activated parameters than other leading models, including TimesFM and Chronos. In zero-shot forecasting, where the model was tested on datasets not included in the training data, MOIRAI-MoE's performance surpassed traditional models, achieving a 3-14% improvement in continuous ranked probability score (CRPS) and an 8-16% improvement in mean absolute scaled error (MASE) over prior models. These results underscore the model's robust generalization ability without requiring task-specific training.

    This research presents key takeaways that highlight the advancements MOIRAI-MoE brings to time series forecasting:
    • Data-driven specialization: by achieving token-level specialization through a sparse mixture of experts, MOIRAI-MoE overcomes the limitations of human-defined frequency specialization, allowing for a more nuanced representation of time series diversity.
    • Computational efficiency: the model's sparse expert activation drastically reduces computational demands, achieving up to 65 times fewer activated parameters while maintaining high accuracy.
    • Performance gains: testing on diverse datasets confirmed that MOIRAI-MoE surpasses dense models and foundation models like TimesFM and Chronos, achieving a 17% improvement over dense counterparts in in-distribution tests.
    • Scalability and generalization: MOIRAI-MoE demonstrates strong zero-shot performance, making it highly applicable to real-world forecasting tasks without requiring specialized training for each application, which is critical in diverse domains like finance, healthcare, and climate modeling.

    In conclusion, MOIRAI-MoE represents a major advancement in time series forecasting by introducing a flexible, data-driven approach that overcomes the limitations of frequency-based specialization. With its sparse mixture-of-experts architecture, MOIRAI-MoE addresses the diverse and non-stationary nature of time series data while achieving significant computational efficiency and performance gains. This approach underscores the potential of token-level specialization, paving the way for future improvements in time series foundation models and expanding the utility of zero-shot forecasting across industries and applications.

    Check out the Paper. All credit for this research goes to the researchers of this project.
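    The nearest-centroid gating described in the article can be illustrated in a few lines. In this sketch the centroids, embedding dimension, and token vectors are random stand-ins (the real model derives centroids from a pretrained model's token representations); only the routing rule itself follows the paper's description.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 32   # the paper uses 32 expert networks
DIM = 16           # toy token-embedding dimension (illustrative)

# Stand-ins for cluster centroids derived from a pretrained model's
# token representations (random here, learned in the real system).
centroids = rng.normal(size=(NUM_EXPERTS, DIM))

def route(tokens: np.ndarray) -> np.ndarray:
    """Nearest-centroid gating: each token is sent to the expert whose
    centroid is closest in Euclidean distance."""
    # (n_tokens, n_experts) matrix of distances to every centroid
    dists = np.linalg.norm(tokens[:, None, :] - centroids[None, :, :], axis=-1)
    return dists.argmin(axis=1)  # one expert index per token

tokens = rng.normal(size=(8, DIM))
expert_ids = route(tokens)
print(expert_ids.shape)  # (8,)
```

    Because only the selected expert runs per token, compute scales with the number of activated (not total) parameters, which is where the reported 65x reduction comes from.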
  • Can an LLM Beat You at Chess?
    towardsai.net
    Author(s): Arthur Lagacherie. Originally published on Towards AI.

    We can use Outlines to answer this question. Recently, I discovered a Python package called Outlines, which provides a versatile way to leverage Large Language Models (LLMs) for tasks like:
    • Classification
    • Named entity extraction
    • Generating synthetic data
    • Summarizing a document
    • And playing chess (there are also five other uses).

    GitHub dottxt-ai/outlines: Structured Text Generation (github.com)

    In this article, I will explore various configurations for chess games, including human-versus-LLM matches, where a human competes against an AI model, as well as LLM-versus-LLM setups, where two AI models play against each other.

    How it works

    To accomplish this task easily, Outlines uses a sampling technique different from the usual one. First, what is sampling in an LLM? When generating the next token, an LLM returns a probability for each token in its vocabulary, ranging from 0% to 100%. There are various ways to select from these predicted tokens, and this selection process is known as sampling. Outlines, instead of applying sampling to all tokens, selects only the tokens matching the text format you want to generate and then applies sampling to this subset. To choose those tokens, Outlines uses a regex, updated after each move, that matches only legal moves.

    Efficient Guided Generation for Large Language Models (arxiv.org): "In this article we show how the problem of neural text generation can be constructively reformulated in terms of..."

    LLM vs. LLM

    The first thing I want to do is LLM vs. LLM, but with just one LLM to begin.
    To do this we need some Python libraries.

```python
!pip install outlines -q
!pip install chess -q
!pip install transformers accelerate einops -q

import chess, chess.svg, re
from outlines import generate, models
from IPython.display import Image, display, clear_output
```

    chess: a library to handle the board. IPython and chess.svg: libraries to display the board.

    After that, the first thing we need is the function that creates the regex specifying the text format to Outlines.

```python
def legal_moves_regex(board):
    """Build a regex that only matches valid moves."""
    legal_moves = list(board.legal_moves)
    legal_moves_str = [board.san(move) for move in legal_moves]
    legal_moves_str = [re.sub(r"[+#]", "", move) for move in legal_moves_str]
    regex_pattern = "|".join(re.escape(move) for move in legal_moves_str)
    return regex_pattern
```

    This function returns a string like this:

    'Nh3|Nf3|Nc3|Na3|h3|g3|f3|e3|d3|c3|b3|a3|h4|g4|f4|e4|d4|c4|b4|a4'

    These are all the legal moves in the current board state. Now that we have the libraries and the regex generator, we can download the model by executing the following line of code.

```python
model = models.transformers("google/gemma-2-2b-it", device="auto")
```

    And the final cell of code to run the main loop:

```python
board = chess.Board("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1")
prompt = "Let's play Chess. Moves: "
board_state = " "
turn_number = 0
while not board.is_game_over():
    regex_pattern = legal_moves_regex(board)
    structured = generate.regex(model, regex_pattern)(prompt + board_state)
    move = board.parse_san(structured)
    if turn_number % 2 == 0:  # It's White's turn
        board_state += board.san(move) + " "
    else:
        board_state += board.san(move) + " " + str(turn_number) + "."
    turn_number += 1
    board.push(move)
    clear_output(wait=True)
    display(chess.svg.board(board, size=250, lastmove=move))
```

    First, we define the chessboard, the prompt, the board state, and the turn number. Then we create a while loop for the game.
    For each turn, we generate the regex and the move, then update the board state, and finish by displaying the chessboard. Let's run it.

    [video by author]

    Gemma 2b vs. Smollm2 1.7b

    Now it's time to do the same but with two LLMs. Let's import them.

```python
model1 = models.transformers("Arthur-LAGACHERIE/Gemma-2-2b-4bit", device="cuda")
model2 = models.transformers("HuggingFaceTB/SmolLM2-1.7B-Instruct", device="cuda")
```

    Note: here I use a quantized version of Gemma 2b, so beforehand I install bitsandbytes with `pip install -q bitsandbytes`.

    And we also need to change the game loop a little.

```python
board = chess.Board("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1")
prompt = "Let's play Chess. Moves: "
board_state = " "
turn_number = 0
while not board.is_game_over():
    if turn_number % 2 == 0:  # It's White's turn
        regex_pattern = legal_moves_regex(board)
        structured = generate.regex(model1, regex_pattern)(prompt + board_state)
        move = board.parse_san(structured)
        board_state += board.san(move) + " "
    else:
        regex_pattern = legal_moves_regex(board)
        structured = generate.regex(model2, regex_pattern)(prompt + board_state)
        move = board.parse_san(structured)
        board_state += board.san(move) + " " + str(turn_number) + "."
    turn_number += 1
    board.push(move)
    clear_output(wait=True)
    display(chess.svg.board(board, size=250, lastmove=move))

print("0" if turn_number % 2 != 0 else "1")
```

    (I also added the last line to print the winner.)

    Let's run it.

    [gemma vs. smollm2, gif by the author]

    After a long and difficult war (with dozens and dozens of dumb moves) between Gemma 2b and Smollm2 1.7b, the winner is: Smollm2. But if you look at the game more closely you will see some dumb moves. The two LLMs play like a three-year-old human.

    LLM vs. Human

    Now that we've seen LLMs pitted against each other, let's see how a language model fares against a human player (me). First, let's download the model. I will take Smollm2 1.7b because it beat Gemma 2b.

```python
model = models.transformers("HuggingFaceTB/SmolLM2-1.7B-Instruct", device="auto")
```

    Then, we need to update the main loop a little.

```python
board = chess.Board("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1")
display(chess.svg.board(board, size=250))
prompt = "Let's play Chess. Moves: "
board_state = " "
turn_number = 0
while not board.is_game_over():
    if turn_number % 2 == 0:  # It's White's turn (the human)
        inp = input("Your move: ")
        move = board.parse_san(inp)
        board_state += board.san(move) + " "
    else:
        regex_pattern = legal_moves_regex(board)
        structured = generate.regex(model, regex_pattern)(prompt + board_state)
        move = board.parse_san(structured)
        board_state += board.san(move) + " " + str(turn_number) + "."
    turn_number += 1
    board.push(move)
    clear_output(wait=True)
    display(chess.svg.board(board, size=250, lastmove=move))

print("0" if turn_number % 2 != 0 else "1")
```

    And run it.

    [me vs. Smollm2, video by author]

    I won in 3 minutes; the model's chess skills are quite limited.

    Conclusion

    The models aren't very intelligent at chess, likely due to their small number of parameters. With the guidance from this article, you can now experiment with LLMs in a chess setting, though you may not see grandmaster-level gameplay. I hope you enjoyed this article, and if so, you can clap for it (you can also follow me).

    Published via Towards AI.
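    The constrained-decoding trick at the heart of these experiments can be sketched without the library: discard the probability mass of every continuation the move regex would reject, renormalize, and sample from what remains. This toy version treats whole SAN moves as single vocabulary entries with made-up logits; the real Outlines implementation works token by token with a compiled finite-state machine.

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up vocabulary and made-up "LLM" logits over it.
vocab = ["Nf3", "e4", "d4", "hello", "the", "Qxf7"]
logits = rng.normal(size=len(vocab))

# Entries the move regex would accept in the current position (assumed).
legal = {"Nf3", "e4", "d4"}

# Mask out every illegal entry, then softmax over the survivors.
mask = np.array([t in legal for t in vocab])
masked = np.where(mask, logits, -np.inf)
probs = np.exp(masked - masked.max())  # exp(-inf) = 0: illegal entries get zero probability
probs /= probs.sum()

# Sampling can now only ever produce a legal move.
move = vocab[rng.choice(len(vocab), p=probs)]
print(move in legal)  # True
```

    This is why even a small model never plays an illegal move here: legality is enforced by the mask, while the model's logits only decide which legal move gets picked.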
  • Faster Knowledge Distillation Using Uncertainty-Aware Mixup
    towardsai.net
    Author(s): Tata Ganesh. Originally published on Towards AI. Photo by Jaredd Craig on Unsplash.

    In this article, we will review the paper titled "Computation-Efficient Knowledge Distillation via Uncertainty-Aware Mixup" [1], which aims to reduce the computational cost associated with distilling the knowledge of computer vision models.

    Disclaimer: this paper's arXiv draft was published in 2020, so some of the teacher models mentioned in the results are small models by today's standards.

    Knowledge Distillation

    Knowledge distillation (KD) is the process of transferring learning from a larger model (called the teacher) to a smaller model (called the student). It is used to create compressed models that can run in resource-constrained environments. Further, KD yields a more accurate model compared to a model that is trained from scratch. In the original knowledge distillation paper by Hinton et al. [2], the student model is trained using the output logits from the teacher model for each training sample. The ground-truth labels are also included during training if they are available. This process is illustrated below.

    [Knowledge distillation framework. Figure by author. Dog image from the CIFAR-10 dataset [3]]

    Computational Cost of Knowledge Distillation

    First, let us define the different floating point operations that contribute to KD's computational cost. Note that these operations are defined per image.
    F_t = teacher forward pass (to get output logits from the teacher model)
    F_s = student forward pass (to get output logits from the student model)
    B_s = student backward pass (to update the weights of the student model)

    The breakdown of the typical KD process for a mini-batch of N images is as follows:
    1. A mini-batch of N images is passed through the teacher and the student models. The cost of this forward pass is F_t + F_s.
    2. A distillation loss is applied between the teacher and the student models for different layers.
    3. The student model's weights are updated during the backward pass. The cost of this backward pass is B_s.

    Note: since the teacher model is much larger than the student model, we can assume that F_t >> F_s, F_t >> B_s, and F_s ≈ B_s.

    This process can be summarized using the following figure:

    [Framework of knowledge distillation [1]]

    Hence, the total cost of KD for a mini-batch of N images is N × (F_t + F_s + B_s) [1].

    Reducing the number of images passed to the teacher model can lead to an overall reduction in the computational cost of KD. So, how can we sample images from each mini-batch to reduce the cost of the teacher model's forward pass? Katharopoulos et al. [4] claim that all samples in a dataset are not equally important for neural network training, and propose an importance sampling technique to focus computation on informative examples. Similarly, the informativeness of examples in a mini-batch can be used to select only informative examples to pass to the teacher model. In the next section, we will discuss how the proposed method, named UNIX, performs this sampling.

    UNcertainty-aware mIXup (UNIX)

    [UNIX framework [1]]

    The sequence of steps for each mini-batch in UNIX is as follows:

    Step 1: Student forward pass. Each mini-batch of images is fed to the student model to obtain the predicted class probabilities for each image.

    Step 2: Uncertainty estimation. For each image, the predicted probabilities are used to generate an uncertainty estimate. The uncertainty value loosely indicates the prediction confidence of the student model for each image: the higher the uncertainty, the lower the confidence. Based on the active learning literature [5], uncertainty can be used to estimate the informativeness of each image. Specifically, the authors use the entropy of the student model's predicted probability distribution to quantify uncertainty [1].

    Step 3: Shuffling and sorting the mini-batch. The mini-batch is then sorted in decreasing order of sample uncertainties.
    Let us name the sorted mini-batch B_sorted. Further, the original mini-batch is shuffled. Let us name the shuffled mini-batch B_shuffled.

    Step 4: Uncertainty-aware mixup. Mixup [6] is a data augmentation technique that performs a convex combination of two images and their corresponding labels in a mini-batch, and it has been shown to improve the generalization of neural networks. In standard mixup, a coefficient λ controls the magnitude of the mix. The authors propose to use mixup as a way to compress information from two images into one, then feed the mixed image to the teacher and student models for KD. An element-wise mixup is performed between images in B_sorted and B_shuffled [1], with the mixing weight driven by a correction factor c that is a function of each sample's uncertainty: c ensures that mixup is mild for uncertain samples and strong for confident samples. Note that labels are NOT mixed.

    Step 5: Sampling and teacher forward pass. After performing mixup, k images are sampled from the N mixed images. These k mixed images are fed as input to the teacher and student models for KD.

    Comparing Computational Costs

    Consider the case where the batch size N = 64 and k = 40, and express the final cost with respect to the student model.

    [Example of computation cost of KD with and without UNIX. Figure by author.]

    In our example, KD with UNIX yields a ~25% reduction in computational cost, improving the computational efficiency of the distillation process.

    Results

    CIFAR-100 results. Results of different model architectures on the CIFAR-100 [3] image classification dataset are shown below.

    [KD results on CIFAR-100 [1]. WRN means Wide ResNet [7].]

    In most cases, the performance of UNIXKD is on par with original KD. Specifically, UNIXKD with k=36 provides a good tradeoff between accuracy and computational cost.
    Further, random sampling with KD (Random+KD) performs on par with or worse than UNIXKD for all model architectures, highlighting the importance of uncertainty-based sampling in improving computational efficiency with minimal reduction in accuracy.

    ImageNet results. Results on the ImageNet [8] dataset are shown below.

    [KD results on ImageNet [1]]

    The columns with "+label" specify KD with ground-truth labels. For experiments with and without ground-truth labels, UNIXKD performs on par with original KD while reducing the total computational cost by ~23%.

    Conclusion

    Knowledge distillation is a technique for transferring the knowledge of a large teacher model into a small student model. However, the high computational cost of performing a forward pass through the teacher model makes the distillation process computationally expensive. To tackle this problem, UNcertainty-aware mIXup (UNIX) uses uncertainty sampling and the mixup augmentation technique to pass a smaller number of images to the teacher model. Experiments on the CIFAR-100 and ImageNet datasets show that UNIX can reduce the computational cost of knowledge distillation by up to 25% with minimal reduction in classification performance.

    References

    [1] G. Xu, Z. Liu, and C. Change Loy. Computation-Efficient Knowledge Distillation via Uncertainty-Aware Mixup (2020), arXiv preprint arXiv:2012.09413.
    [2] G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network (2015), arXiv preprint arXiv:1503.02531.
    [3] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images (2009).
    [4] A. Katharopoulos and F. Fleuret. Not all samples are created equal: Deep learning with importance sampling (2018), International Conference on Machine Learning, PMLR.
    [5] B. Settles. Active learning literature survey (2010), University of Wisconsin, Madison, 52(5566):11.
    [6] H. Zhang, M. Cisse, Y. Dauphin, and D. Lopez-Paz. mixup: Beyond empirical risk minimization (2018), 6th International Conference on Learning Representations.
    [7] S. Zagoruyko and N. Komodakis. Wide Residual Networks (2017), arXiv preprint arXiv:1605.07146.
    [8] J. Deng, W. Dong, R. Socher, L.-J. Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database (2009), IEEE Conference on Computer Vision and Pattern Recognition.

    Published via Towards AI.
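    The five UNIX steps can be sketched end to end in NumPy. All tensors below are random stand-ins, and the concrete form of the correction factor c is an illustrative assumption (the paper derives its own); only the pipeline structure follows the method described above.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K, C = 64, 40, 10  # batch size, images kept for the teacher, classes

# Hypothetical student softmax outputs and images for one mini-batch.
probs = rng.dirichlet(np.ones(C), size=N)      # (64, 10), rows sum to 1
images = rng.normal(size=(N, 3, 32, 32))

# Step 2: entropy of the student's prediction as the uncertainty estimate.
entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)

# Step 3: sort by decreasing uncertainty; shuffle a second view of the batch.
sorted_idx = np.argsort(-entropy)
shuffled_idx = rng.permutation(N)

# Step 4: uncertainty-aware mixup between the sorted and shuffled views.
# Illustrative correction factor: mild mixing for uncertain samples,
# strong mixing for confident ones. Labels are NOT mixed.
u = entropy[sorted_idx] / entropy.max()        # normalized uncertainty
c = 0.5 + 0.5 * u                              # high uncertainty -> keep more of the original image
mixed = (c[:, None, None, None] * images[sorted_idx]
         + (1 - c)[:, None, None, None] * images[shuffled_idx])

# Step 5: only the k most uncertain mixed images go to the teacher,
# shrinking the teacher's forward-pass cost from N*F_t to k*F_t.
teacher_batch = mixed[:K]
print(teacher_batch.shape)  # (40, 3, 32, 32)
```

    With N = 64 and k = 40 as in the article's example, the teacher now sees 40 images instead of 64 per mini-batch, which is the source of the overall cost reduction.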
  • Donkey Kong Country of Super Nintendo World at Universal Studios Japan Is Getting a Direct Tomorrow
    www.ign.com
    Nintendo has announced that Donkey Kong Country of Super Nintendo World at Universal Studios Japan is getting its very own Direct tomorrow, November 11, at 2pm PT/5pm ET. Nintendo shared the news on Twitter/X, saying, "Tune in on 11/11 at 2 p.m. PT for a Super Nintendo World Direct livestream!"

    "The stream will be roughly 10 minutes and showcase Donkey Kong Country of #SuperNintendoWorld at Universal Studios Japan," Nintendo continued. "No game information will be featured. #NintendoDirect"

    While this is a Super Nintendo World Direct focused on Universal Studios Japan, those excited for Super Nintendo World's debut at Orlando's Epic Universe in 2025 should pay attention, as that park will also have a Donkey Kong Country area and the mine-cart ride.

    The ride is called Mine-Cart Madness and will see guests helping Donkey Kong and Diddy Kong protect the coveted golden banana. Guests can look forward to "getting blasted out of a barrel, seemingly jumping over gaps as they speed along a rickety track, and so much more."

    Super Nintendo World first opened at Universal Studios Japan in March 2021, and Donkey Kong Country will add a whole new layer of Nintendo-themed fun to the festivities. Alongside Mine-Cart Madness, fans will be able to collect the K-O-N-G letters like in the games, enjoy a "selection of tropical menu and merchandise offerings," and much more.

    Donkey Kong Country's website says it is still set to open at Japan's Super Nintendo World before the end of 2024, so it's safe to assume we'll get an opening date tomorrow or a delay into 2025. For more, check out our review of Super Nintendo World and further details of Epic Universe before it opens on May 22, 2025.

    Have a tip for us? Want to discuss a possible story? Please send an email to newstips@ign.com.

    Adam Bankhurst is a writer for IGN. You can follow him on X/Twitter @AdamBankhurst and on TikTok.
  • Daily Deals: Final Fantasy I-VI Collection, Silent Hill 2, Mario & Luigi: Brothership, and More
    www.ign.com
    The weekend is officially here, and we've rounded up the best deals you can find! Discover the best deals for Sunday, November 10, below:

    Final Fantasy I-VI Pixel Remaster Collection
    FINAL FANTASY I-VI Collection Anniversary Edition - 2024 (PS4)
    The first six Final Fantasy titles paved the way for the series as we see it today. Many fans still regard both Final Fantasy IV and Final Fantasy VI as some of the best that Final Fantasy has to offer, with gripping narratives and engaging gameplay. This package includes all six Final Fantasy Pixel Remasters, which feature updated graphics, soundtracks, fonts, and more.

    Mario & Luigi: Brothership
    Mario & Luigi: Brothership is the first Mario & Luigi title on Nintendo Switch, acting as the first new entry in the series in over nine years. Developed by Acquire, this is the first 3D entry in the series, with plenty of new mechanics to discover. Join Mario and Luigi on this adventure to reconnect the world of Concordia and set sail to its many islands aboard Shipshape Island!

    Silent Hill 2 (PlayStation 5)
    Bloober Team's remake of Silent Hill 2 is on sale at Woot this weekend for $59.99. Recreating one of Konami's most beloved titles was never going to be easy, but the Silent Hill 2 remake delivers an immersive horror experience that preserves almost everything that made the original so great. In our 8/10 review, we said the game "smoothly polishes down the rough edges of the original game's combat while taking a piece of heavy grit sandpaper to scuff up every rust and mold-covered surface of its nightmarish environments, successfully making them appear far more abrasive and menacing to explore."

    Arcane: League of Legends - Season One - Limited Edition Steelbook 4K Ultra HD + Blu-ray
    Arcane: League of Legends Season 2 is officially out today, and this is a great deal if you've yet to watch Season 1.
    The complete Season 1 4K UHD Blu-ray collection is only $34.99 at Amazon, which is $25 off its standard price. Packed inside a unique Steelbook, this is perfect for both new viewers and even the biggest of Arcane fans.

    LG UltraGear 45" OLED Curved WQHD 240Hz Gaming Monitor
    This weekend, you can save $700 on this UltraGear 45" OLED curved monitor. With a resolution of 1440p and a refresh rate of 240Hz, you can expect a fantastic experience that is perfect for gaming. The OLED panel allows for high color accuracy and a wider viewing angle, so this monitor is also a great option for watching video content or movies.

    Sony UBP-X700 4K Ultra HD Home Theater Streaming Blu-ray DVD Player
    If you don't own either an Xbox Series X or PlayStation 5, it's not likely you have a quality 4K UHD Blu-ray player. This weekend, you can save $90 on this Sony UBP-X700 model, which supports 4K upscaling, HDR10, Dolby Vision, and more. The player even has an HDR-SDR converter, allowing you to watch content on any display with vivid colors.

    Sony WH-1000XM5 Headphones
    The Sony WH-1000XM5 headphones are some of the best you can find on the market. Sony made tremendous improvements over previous models, with major upgrades to both noise cancelation and sound quality. In our 9/10 review, we said, "The Sony WH-1000XM5 is hands down the best sounding and most impressive noise-canceling headphones around."

    Sonic X Shadow Generations (Nintendo Switch)
    Sonic X Shadow Generations just released last month, and you can already save $10 on a Nintendo Switch copy at Woot. This package includes a remastered version of Sonic Generations and a brand-new campaign focused on Shadow. Both 2D and 3D levels are included, making for the ultimate package for any Sonic fan.
Super Mario RPGSuper Mario RPG - Nintendo SwitchThe remake of Super Mario RPG is $31.99 at Woot right now, which is a great price for this classic title. If you've yet to either play the original or check out the remake, this is the perfect time to do so. Composer Yoko Shimimura returned to compose the remake's original soundtrack, and each boss and environment has been expertly recrafted for the Nintendo Switch.Star Wars Jedi: SurvivorStar Wars Jedi: Survivor - PlayStation 4Star Wars Jedi: Survivor - XBOX OneThis weekend, you can save on the PS4 and Xbox One versions of Star Wars Jedi: Survivor. The next chapter of Cal's journey is set years after the ending of Star Wars Jedi: Fallen Order. New lightsaber styles, planets, and more await.
  • Wolf Hall: The Mirror and the Light Episode 1 Review: Wreckage
    www.denofgeek.com
    Warning: this Wolf Hall review contains spoilers.

Peter Straughan and Peter Kosminsky's exquisite, careful adaptation of Hilary Mantel's Cromwell trilogy is finally back, almost a decade after it first aired. For anybody who still has the stomach for despotic rulers and political skulduggery this week, that's cause for celebration. This wise drama deserved to be completed, and a performance as quietly commanding as Mark Rylance's deserves our full attention.

You'll need to give Wolf Hall: The Mirror and the Light your full attention, although effort has been made in this first episode to explain the whos, whats and wherefores. It's easier to follow than the first series thanks to the absence, so far, of timeline-jumping flashbacks. It's also easier to see, perhaps in response to criticism of the 2015 series' atmospherically gloomy, candlelit look. Apart from the scenes inside Cromwell's home, almost everything happens in bright daylight. If that continues as a lighting scheme, then it's a neat way to divide Cromwell's requirement to be one person in private, and another in public.

Inside Cromwell's chamber is the only element that may trip an unsure viewer up, in the form of Jonathan Pryce's Cardinal Wolsey. Several years dead by the series' 1536 timeline, this Wolsey is a phantom of Cromwell's imagination. His conversations with his former mentor are Cromwell talking to himself, and a precious insight into what's going on behind the sorrowful yet alert expression Rylance hides behind at court. With Wolsey, Cromwell can be bold, honest and wry: a man in his own right. With Henry, he must be nothing more than an extension of the king's power.

We see as much in the scene where Cromwell bodily removes Fitzwilliam, Earl of Southampton, from the Privy Council chamber for speaking his mind and telling Harry where he's going wrong in the matter of his daughter Mary. A good attack dog, Cromwell uses his physicality to enforce Henry's will. He also throws his weight around in dealings with the Catholic plotters the Poles, taking Sir Geoffrey Pole by the shoulders to quite literally put him in his place when he attempts to stand in his way. However much Cromwell's wits are now his weapon, the former soldier who carries a knife up his sleeve is never far away. As he warns the Imperial ambassador Chapuys, this blacksmith's son may have lost the art of metalwork, but he can still swing a hammer.

With Princess Mary (Lilit Lesser), Cromwell hides the brute and instead shows her the loving father-figure and royal servant. Reading between his lines, she signs the oath of obedience Henry requires of her, and once again, Crumb delivers what Henry demands and is rewarded.

With each step of his ascendancy, though, comes the threat that lowborn Cromwell is rising dangerously high, and that's thanks to Damian Lewis' gently terrifying performance as Henry VIII. Lewis imbues the king with vicious menace and, in this episode, does most of it behind a smile. His unctuous post-wedding-night boast in Cromwell's ear about Jane Seymour's freshness and maidenly modesty may have been nauseating, but no more so than how Cromwell subsumes himself to his king, measuring his every word and look.

When Mary complains, "I thought they would all say plain what I know they believe," about the nobles on whose support she'd relied for her restoration to the line of succession after Anne Boleyn's death, she's showing her naivety. In the court of Henry VIII, saying plain what you believe is no way to survive.

Survival, both Mary's and his own, preoccupies Thomas Cromwell in this first episode. As Henry VIII's leading adviser, he has his head in the mouth of a lion and is astute enough to know that one clumsy move will be the end of him.

Succession, eat your heart out: no drama better illustrates the precariousness of being in the employ of a tyrant than Wolf Hall. Its historical setting raises the stakes to the skies: take a wrong step in the court of Henry VIII and you won't just lose your livelihood and reputation, but also your head. That's why Mary feels frustrated. Her father's courtiers may well believe in her divine right to succeed him on the throne, but they're damned if they're going to say so out loud.

Wolf Hall: The Mirror and the Light continues next Sunday at 9pm on BBC One and iPlayer. It's due to air on PBS Masterpiece in the US in 2025.