• WWW.THEVERGE.COM
    Sonos revenue falls in the aftermath of company's messy app debacle
    Sonos is still trying to climb out of the hole it dug itself earlier this year by recklessly shipping an overhauled mobile app well before the software was actually ready. Today, just a couple of weeks after the release of its latest hardware products, the Arc Ultra and Sub 4, Sonos reported its fiscal Q4 2024 earnings. And the damage done by the app debacle is clear. Revenue was down 8 percent year over year, which Sonos attributed to "softer demand due to challenging market conditions and challenges resulting from our recent app rollout." During the quarter, the company sank $4 million into unspecified app recovery investments. (Sonos previously estimated it could spend up to $30 million to resolve all of the trouble that has stemmed from the rebuilt app.)

    "To date, we have released 16 updates and restored 90 percent of missing features," the company wrote in its earnings presentation. "Moving forward, we'll alternate between major and minor releases. This will allow us to maintain our momentum of making improvements while also ensuring adequate beta testing."

    CEO Patrick Spence has taken accountability for the app situation, and last month, Sonos announced multiple commitments that it believes will prevent another colossal misstep like this from happening again. Some aspects of the plan are focused on more rigorous testing and greater transparency, both inside the company and out. But others, like executives potentially losing out on their annual bonuses, have been mocked by customers as meaningless, half-hearted measures.

    Do you know more about what's ahead at Sonos? The company is rumored to be working on a video streaming box. As with headphones, I'm curious how Sonos plans to differentiate itself in this category. If you have anything to share on what's happening at the company, I can be reached securely (and confidentially) via Signal at chriswelch.01 or (845) 445-8455.

    "The Sonos flywheel remains strong, as evidenced by the fact that the number of new products per home increased in fiscal 2024," Spence said in today's press release. The company also reported its all-time highest annual market share in home theater, another positive sign at a time when morale among Sonos employees has taken a serious hit.

    The rebuilt app is in a better place now, which you'd hope would be the case after several months of bug fixes and performance enhancements. The mood within Sonos community spaces like the company's subreddit has also improved, with less of the vitriol that felt non-stop (understandably so) from late spring through the early fall. As far as hardware is concerned, Sonos seems to be getting back on track. Early reviews of the Arc Ultra have been largely positive. (Yes, I'll have one coming in the near future.) One early bug with the new soundbar affected Trueplay tuning and, for some customers, resulted in lackluster bass response from a paired subwoofer. Sonos rectified this issue with a software update that went out earlier today.

    But some of the company's most loyal customers are still feeling a sense of wariness and frayed trust toward the brand. Sonos' next major new product is rumored to be a video streaming box, and I'm still flummoxed as to just how the company plans to stand out from competitors in that space. But hopefully there won't be another major controversy to derail the product, as was the case with the Sonos Ace headphones.
  • WWW.MARKTECHPOST.COM
    Fixie AI Introduces Ultravox v0.4.1: A Family of Open Speech Models Trained Specifically for Enabling Real-Time Conversation with LLMs and An Open-Weight Alternative to GPT-4o Realtime
    Interacting seamlessly with artificial intelligence in real time has always been a complex endeavor for developers and researchers. A significant challenge lies in integrating multi-modal information, such as text, images, and audio, into a cohesive conversational system. Despite advancements in large language models like GPT-4, many AI systems still encounter difficulties in achieving real-time conversational fluency, contextual awareness, and multi-modal understanding, which limits their effectiveness for practical applications. Additionally, the computational demands of these models make real-time deployment challenging without considerable infrastructure.

    Introducing Fixie AI's Ultravox v0.4.1

    Fixie AI introduces Ultravox v0.4.1, a family of multi-modal, open-source models trained specifically for enabling real-time conversations with AI. Designed to overcome some of the most pressing challenges in real-time AI interaction, Ultravox v0.4.1 can handle multiple input formats, such as text, images, and other sensory data. This latest release aims to provide an alternative to closed-source models like GPT-4, focusing not only on language proficiency but also on enabling fluid, context-aware dialogues across different types of media. By being open-source, Fixie AI also aims to democratize access to state-of-the-art conversation technologies, allowing developers and researchers worldwide to adapt and fine-tune Ultravox for diverse applications, from customer support to entertainment.

    Technical Details and Key Benefits

    The Ultravox v0.4.1 models are built on a transformer-based architecture optimized to process multiple types of data in parallel. Leveraging a technique called cross-modal attention, these models can integrate and interpret information from various sources simultaneously. This means users can present an image to the AI, type in a question about it, and receive an informed response in real time. The open-weight models are hosted on Fixie AI's Hugging Face page, making it convenient for developers to access and experiment with them, and Fixie AI provides a well-documented API to facilitate integration into real-world applications. The models boast impressive latency reduction, allowing interactions to take place almost instantly, making them suitable for real-time scenarios like live customer interactions and educational assistance.

    Ultravox v0.4.1 represents a notable advancement in conversational AI systems. Unlike proprietary models, which often operate as opaque black boxes, Ultravox offers an open-weight alternative with performance comparable to GPT-4 while also being highly adaptable. Analysis based on Figure 1 from recent evaluations shows that Ultravox v0.4.1 achieves significantly lower response latency, approximately 30% faster than leading commercial models, while maintaining equivalent accuracy and contextual understanding. The model's cross-modal capabilities make it effective for complex use cases, such as integrating images with text for comprehensive analysis in healthcare or delivering enriched interactive educational content. The open nature of Ultravox facilitates continuous community-driven development, enhancing flexibility and fostering transparency.
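    For readers who want to try the models, here is a minimal sketch of loading an Ultravox checkpoint through the Hugging Face transformers pipeline. The exact repo id, audio file, and chat format are assumptions based on Fixie AI's published model cards, not details from this article, so check the card for the checkpoint you pick:

```python
import librosa
import transformers

# Assumed repo id; Fixie AI publishes several Ultravox checkpoints on Hugging Face.
pipe = transformers.pipeline(
    model="fixie-ai/ultravox-v0_4_1-llama-3_1-8b",
    trust_remote_code=True,  # Ultravox ships custom model/processor code
)

# 16 kHz mono audio plus a chat-style prompt, following the model card examples.
audio, sr = librosa.load("question.wav", sr=16000)
turns = [{"role": "system", "content": "You are a helpful voice assistant."}]

out = pipe(
    {"audio": audio, "turns": turns, "sampling_rate": sr},
    max_new_tokens=50,
)
print(out)
```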
By mitigating the computational overhead associated with deploying such models, Ultravox makes advanced conversational AI more accessible to smaller entities and independent developers, bridging the gap previously imposed by resource constraints.

Conclusion

Ultravox v0.4.1 by Fixie AI marks a significant milestone for the AI community by addressing critical issues in real-time conversational AI. With its multi-modal capabilities, open-source model weights, and a focus on reducing response latency, Ultravox paves the way for more engaging and accessible AI experiences. As more developers and researchers start experimenting with Ultravox, it has the potential to foster innovative applications across industries that demand real-time, context-rich, and multi-modal conversations.
  • TOWARDSAI.NET
    Whisper Variants Comparison: What Are Their Features And How To Implement Them?
    Author(s): Yuki Shizuya. Originally published on Towards AI. Photo by Pawel Czerwinski on Unsplash.

    Recently, I have been researching automatic speech recognition (ASR) to produce transcriptions from speech data. Among open-source ASR models, Whisper [1], developed by OpenAI, might be the best choice in terms of highly accurate transcription. However, there are many variants of Whisper, so I want to compare their features. In this blog, I will quickly recap Whisper, introduce its variants, and show how to implement them in Python. I will cover vanilla Whisper, Faster-Whisper, WhisperX, Distil-Whisper, and Whisper-Medusa.

    1. What is Whisper?

    Whisper [1] is an automatic speech recognition (ASR) model developed by OpenAI. It is trained on 680,000 hours of multilingual and multi-task supervised data, covering transcription, translation, voice activity detection, alignment, and language identification. Before Whisper arrived, no model had been trained on such a massive amount of data in a supervised way. Architecturally, Whisper adopts an encoder-decoder Transformer for scalability. (Figure: Whisper architecture illustration, adapted from [1].)

    First, Whisper converts audio data into a log-mel spectrogram. A log-mel spectrogram is a visual representation of the spectrum of signal frequencies on the mel scale, commonly used in speech processing and machine learning tasks; for further information, you can check this blog [2]. After passing the log-mel spectrogram through some 1-D convolution layers and positional encoding, Whisper processes the data much like a natural language processing Transformer. Whisper works in multilingual settings by leveraging the byte-level BPE tokenizer utilized by GPT-2. Thanks to multi-task learning, Whisper can also perform transcription, timestamp detection, and translation.

    Official Whisper comes in six model sizes, four of which (the smaller ones) also have English-only versions, offering speed and accuracy tradeoffs. (Table: Whisper size variations.) Just recently (October 2024), OpenAI released a new version, turbo, which has almost the same capability as the large model but offers a significant speed-up (8 times!) achieved by fine-tuning a pruned large model. All Whisper models are compatible with the HuggingFace transformers library.

    That is a quick recap of Whisper: it is based on the encoder-decoder Transformer architecture and performs outstandingly, even compared with commercial models. In the next section, we will discuss the Whisper variants.
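    Before moving on, here is a minimal sketch (not from the original post) that makes the log-mel front-end described above concrete, using the helpers bundled with the openai-whisper package; the file name is a placeholder:

```python
import whisper

# Load audio as 16 kHz mono float32, pad/trim it to Whisper's fixed
# 30-second window, then compute the log-mel spectrogram the encoder sees.
audio = whisper.load_audio("audio.mp3")
audio = whisper.pad_or_trim(audio)
mel = whisper.log_mel_spectrogram(audio)
print(mel.shape)  # (n_mels, frames), e.g. 80 x 3000 for the standard models
```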
    2. Whisper variants: Faster-Whisper, WhisperX, Distil-Whisper, and Whisper-Medusa

    In this section, we will go through the Whisper variants and their features, focusing on Python and PyTorch implementations. Although Whisper.cpp and Whisper JAX are popular variants, I will not examine them. Whisper-streaming is also a popular variant for real-time inference, but it needs a high-end GPU, so I will not discuss it either. We will check Faster-Whisper, WhisperX, Distil-Whisper, and Whisper-Medusa.

    Faster-Whisper

    Faster-Whisper is a reimplementation of Whisper using CTranslate2, a C++ and Python library for efficient inference with Transformer models, so there is no change in architecture. According to the official repository, Faster-Whisper runs roughly four times faster than the original implementation at the same accuracy while using less memory. Briefly, CTranslate2 applies many optimization techniques, such as weight quantization, layer fusion, and batch reordering. We can choose compute types, such as float16 or int8, according to our machine; for instance, when we select int8, we can run Whisper even on the CPU.

    WhisperX (2023/03)

    WhisperX [3] is an efficient speech transcription system that integrates Faster-Whisper. Although vanilla Whisper is trained on multiple tasks, including timestamp prediction, it is prone to inaccurate word-level timestamps. Moreover, due to its sequential inference nature, vanilla Whisper takes considerable computation time on long-form audio inputs. To overcome these weak points, WhisperX introduces three additional stages: (1) voice activity detection (VAD), (2) cutting and merging the VAD results, and (3) forced alignment with an external phoneme model to provide accurate word-level timestamps. (Figure: WhisperX architecture illustration, adapted from [3].)

    First, WhisperX processes the input audio through the VAD layer. As the name suggests, VAD detects voice segments; WhisperX uses the segmentation model from the pyannote-audio library for this. Next, WhisperX cuts and merges the detected voice segments, which allows batched inference over the cut results. Finally, WhisperX applies forced alignment to measure accurate word-level timestamps. (Figure: the WhisperX algorithm, created by the author.) It leverages Whisper for the transcription and the phoneme model for phoneme-level transcription. The phoneme model can detect a timestamp for each phoneme; thus, if we assign each word the timestamp of the nearest phoneme in the Whisper transcript, we get a more accurate timestamp for each word.

    Even though WhisperX adds three processes on top of vanilla Whisper, it can transcribe longer audio efficiently thanks to batched inference. The published comparison shows that WhisperX keeps WER low while increasing inference speed. (Table: performance comparison of WhisperX, adapted from [3].)

    Distil-Whisper (2023/11)

    Distil-Whisper [4] was developed by HuggingFace in 2023. It compresses the Whisper large model using knowledge distillation, leveraging common distillation techniques such as pseudo-labeling from the Whisper large model and a Kullback-Leibler divergence loss. (Figure: Distil-Whisper illustration, adapted from [4].) The architecture mirrors vanilla Whisper, but the number of layers is decreased. For the dataset, the authors collected 21,170 hours of publicly available data from the internet to train Distil-Whisper. Distil-Whisper is 5.8 times faster than the Whisper large model, with 51% fewer parameters, while performing within a 1% word error rate (WER) on out-of-distribution data. As the comparison shows, Distil-Whisper keeps the word error rate as low as vanilla Whisper while decreasing latency. (Table: performance comparison of Distil-Whisper, adapted from [4].)

    Whisper-Medusa (2024/09)

    Whisper-Medusa [5] is a variant that applies Medusa to increase Whisper's inference speed. Medusa is an efficient LLM inference method that adds extra decoding heads to predict multiple subsequent tokens in parallel. The following illustration makes this easier to understand. (Figure: Medusa and Whisper-Medusa architecture comparison by the author; illustrations adapted from the original papers [5][6].)
    In the left part, Medusa has three additional heads to predict subsequent tokens: if the original model outputs token y1, the three additional heads predict tokens y2, y3, and y4. Medusa increases the number of tokens predicted per step by adding heads, reducing overall inference time. Note that the required VRAM increases because of the additional heads.

    Whisper-Medusa applies the Medusa idea to Whisper, as shown in the right part. Since Whisper's sequential inference puts it at a disadvantage in speed, Medusa helps accelerate it. The comparison between Whisper-Medusa and vanilla Whisper is shown below. (Table: performance comparison of Whisper-Medusa, adapted from [5].) For several language datasets, Whisper-Medusa records a lower word error rate (WER) than vanilla Whisper, and it speeds up inference by 1.5 times on average.

    In this section, we checked the Whisper variants and their features. The following section explores how to implement them in Python and checks their capability on real-world audio.

    3. Python implementation of Whisper variants: comparing the results on real-world audio data

    In this section, we will learn how to implement Whisper and its variants in Python. For real-world audio data, I will use audio from this YouTube video, downloaded manually. The video is around 14 minutes long. I will attach the code for converting an mp4 file into an mp3 file later.

    Environment setup

    Due to library incompatibilities, we create two environments: one for Whisper, Faster-Whisper, WhisperX, and Distil-Whisper, and another for Whisper-Medusa.

    For the former environment, I used a conda environment with Python 3.10. I experimented on Ubuntu 20.04 with CUDA 12.0 and 16 GB of VRAM.

```
conda create -n audioenv python=3.10 -y
conda activate audioenv
```

    Next, we install the libraries below via pip and conda. After this installation, you need to downgrade numpy to 1.26.3.

```
conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
pip install python-dotenv moviepy openai-whisper accelerate datasets[audio]
pip install numpy==1.26.3
```

    Next, we need to install the whisperX repository. However, whisperX is no longer maintained frequently, so we use the forked repository called BetterWhisperX.

```
git clone https://github.com/federicotorrielli/BetterWhisperX.git
cd BetterWhisperX
pip install -e .
```

    The first environment is ready.

    For the Whisper-Medusa environment, I used a conda environment with Python 3.11, again on Ubuntu 20.04 with CUDA 12.0, this time with 24 GB of VRAM.

```
conda create -n medusa python=3.11 -y
conda activate medusa
```

    You need to install the following libraries via pip:

```
pip install torch==2.2.2 torchvision==0.17.2 torchaudio==2.2.2 --index-url https://download.pytorch.org/whl/cu118
pip install wandb
git clone https://github.com/aiola-lab/whisper-medusa.git
cd whisper-medusa
pip install -e .
```

    All preparation is done. Now, let's check the Whisper variants' capabilities!

    How to implement Whisper variants in Python

    1. Whisper turbo

    We use the latest version of Whisper, turbo. Thanks to the official repository, we can implement vanilla Whisper in only a few lines of code:

```python
import whisper

model = whisper.load_model("turbo")
result = model.transcribe("audio.mp3")
```

    Whisper itself only works on audio within 30 seconds, but the transcribe method reads the entire file and processes the audio with a sliding 30-second window, so we don't need to worry about how to feed the data.
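    As a side note not in the original post: the dictionary returned by transcribe also carries segment-level timestamps, which is what the timestamp comparison later in this article looks at. A quick way to inspect them:

```python
# Full transcript plus per-segment timestamps (in seconds).
print(result["text"])
for seg in result["segments"]:
    print(f'[{seg["start"]:7.2f}s -> {seg["end"]:7.2f}s] {seg["text"]}')
```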
    2. Faster-Whisper

    We use the Whisper turbo backbone for Faster-Whisper. Faster-Whisper has its own repository, and we can implement it as follows:

```python
from faster_whisper import WhisperModel

model_size = "deepdml/faster-whisper-large-v3-turbo-ct2"

# Run on GPU with FP16
model = WhisperModel(model_size_or_path=model_size, device="cuda", compute_type="float16")
segments, info = model.transcribe("audio.mp3", beam_size=5)
```

    beam_size is used for beam search during decoding. Since Faster-Whisper has the same capability as vanilla Whisper, we can process long-form audio using a sliding window.

    3. WhisperX

    We use the Whisper turbo backbone for WhisperX. Since WhisperX utilizes Faster-Whisper as a backbone, parts of the code are shared.

```python
import whisperx

model_size = "deepdml/faster-whisper-large-v3-turbo-ct2"

# Transcribe with whisper (batched)
model = whisperx.load_model(model_size, "cuda", compute_type="float16")
model_a, metadata = whisperx.load_align_model(language_code="en", device="cuda")

# Inference
audio = whisperx.load_audio("audio.mp3")
whisper_result = model.transcribe(audio, batch_size=16)
result = whisperx.align(whisper_result["segments"], model_a, metadata, audio, "cuda", return_char_alignments=False)
```

    WhisperX integrates Faster-Whisper and adds layers that handle VAD and forced alignment. Thanks to the cut & merge stage, we can also process long-form audio beyond 30 seconds.

    4. Distil-Whisper

    We will use the distilled version of the large-v3 model, because a distilled turbo version has yet to be released. Distil-Whisper is compatible with the HuggingFace transformers library, so we can implement it easily.

```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "distil-whisper/distil-large-v3"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    torch_dtype=torch_dtype,
    device=device,
    return_timestamps=True,
)
result = pipe("audio.mp3")
```

    The pipeline class automatically processes long-form audio using a sliding window. Note that this method only outputs relative timestamps.
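    To see that caveat for yourself (an aside, not from the original post), the pipeline output exposes chunk-level timestamps like this:

```python
# Transcript plus chunk-level timestamps; the article notes these are
# relative rather than absolute positions in the full recording.
print(result["text"])
for chunk in result["chunks"]:
    start, end = chunk["timestamp"]
    print(f"[{start} -> {end}] {chunk['text']}")
```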
    5. Whisper-Medusa

    We use the large model as the Whisper backbone. Following the official implementation, we can implement it as follows:

```python
import torch
import torchaudio
from whisper_medusa import WhisperMedusaModel
from transformers import WhisperProcessor

SAMPLING_RATE = 16000
language = "en"
regulation_factor = 1.01
regulation_start = 140
device = "cuda"

model_name = "aiola/whisper-medusa-linear-libri"
model = WhisperMedusaModel.from_pretrained(model_name)
processor = WhisperProcessor.from_pretrained(model_name)
model = model.to(device)

input_speech, sr = torchaudio.load(audio_path)
if input_speech.shape[0] > 1:  # If stereo, average the channels
    input_speech = input_speech.mean(dim=0, keepdim=True)
if sr != SAMPLING_RATE:
    input_speech = torchaudio.transforms.Resample(sr, SAMPLING_RATE)(input_speech)

exponential_decay_length_penalty = (regulation_start, regulation_factor)
input_features = processor(input_speech.squeeze(), return_tensors="pt", sampling_rate=SAMPLING_RATE).input_features
input_features = input_features.to(device)

model_output = model.generate(
    input_features,
    language=language,
    exponential_decay_length_penalty=exponential_decay_length_penalty,
)
predict_ids = model_output[0]
pred = processor.decode(predict_ids, skip_special_tokens=True)
```

    Unfortunately, Whisper-Medusa currently doesn't support long-form audio transcription, so we can only use it on audio of up to 30 seconds. When I checked the quality of a 30-second transcription, it was not as good as the other variants, so I exclude it from the comparison below.

    Performance comparison among Whisper variants

    As mentioned, I used an audio file of around 14 minutes as input. (Table: performance results, from the author.) In summary:

    - Whisper turbo sometimes repeats the same sentences and hallucinates.
    - Faster-Whisper's transcription is almost good, and its calculation speed is the best.
    - WhisperX's transcription is the best, and it records very accurate timestamps.
    - Distil-Whisper's transcription is almost good. However, it only records relative timestamps.

    If you can tolerate subtle mistranscriptions and don't care about timestamps, you should use Faster-Whisper. Meanwhile, if you want accurate timestamps and transcriptions, you should use WhisperX. WhisperX and Faster-Whisper probably beat vanilla Whisper because Faster-Whisper uses beam search for better inference results and WhisperX has forced alignment, so they have a chance to fix mistranscriptions in postprocessing.

    In this blog, we have learned about the Whisper variants' architectures and their implementation in Python. Many researchers use various optimization techniques to reduce inference time for real-world applications. Based on my investigation, Faster-Whisper and WhisperX preserve capability while succeeding in decreasing inference time. Here is the code that I used in this experiment.
    References

    [1] Alec Radford, Jong Wook Kim, et al., Robust Speech Recognition via Large-Scale Weak Supervision, arXiv
    [2] Leland Roberts, Understanding the Mel Spectrogram, Analytics Vidhya
    [3] Max Bain, Jaesung Huh, Tengda Han, Andrew Zisserman, WhisperX: Time-Accurate Speech Transcription of Long-Form Audio, arXiv
    [4] Sanchit Gandhi, Patrick von Platen, Alexander M. Rush, Distil-Whisper: Robust Knowledge Distillation via Large-Scale Pseudo Labelling, arXiv
    [5] Yael Segal-Feldman, Aviv Shamsian, Aviv Navon, Gill Hetz, Joseph Keshet, Whisper in Medusa's Ear: Multi-head Efficient Decoding for Transformer-based ASR, arXiv
    [6] Tianle Cai, Yuhong Li, et al., Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads, arXiv
  • TOWARDSAI.NET
    Let AI Instantly Parse Heavy Documents: The Magic of mPLUG-DocOwl2's Efficient Compression
    Author(s): Florian June. Originally published on Towards AI.

    Today, let's take a look at one of the latest developments in PDF parsing and document intelligence. In our digital age, the ability to understand documents beyond mere text extraction is crucial. Multi-page documents, such as legal contracts, scientific papers, and technical manuals, present unique challenges.

    Traditional document understanding methods rely heavily on Optical Character Recognition (OCR) techniques, which present a significant challenge: the inefficiency and sluggish performance of current OCR-based solutions when processing high-resolution, multi-page documents. These methods generate thousands of visual tokens for just a single page, leading to high computational costs and prolonged inference times. For example, InternVL 2 requires an average of 3,000 visual tokens to understand a single page, resulting in slow processing speeds.

    Figure 1: (a) mPLUG-DocOwl2 achieves state-of-the-art multi-page document understanding performance with faster inference speed and less GPU memory; (b-c) mPLUG-DocOwl2 is able to provide a detailed explanation containing the evidence page as well as the overall structure parsing of the document. Source: MPLUG-DOCOWL2.

    As shown in Figure 1, a new study called MPLUG-DOCOWL2 (open-source code) aims to address this issue by drastically reducing the number of visual tokens while maintaining, or even enhancing, comprehension accuracy. Read the full blog on Medium.
  • WWW.IGN.COM
    AU Deals: 90% off LEGO Titles, DRM-free RPG Discounts, and Upgraded GTA Trilogies for Dirt Cheap!
    If you highly revere the first three Grand Theft Auto titles, as I did and still do, today's your day to revisit that iffy remaster bundle. Work has been done, and they're going for mighty cheap prices now. How much better is the spit shine? Well, I'll let this article/video tell the tale on its own.

    Today in Wow You're Aging news, I'm celebrating the 21st birthday of cult hit Beyond Good & Evil. Weirdly, most modern gamers probably only know this title through the talk of its ludicrously delayed prequel, a project 16 bloody years in the making. But the fact that Ubisoft can't seem to let that initiative die is a ringing testament to how gamers like me felt about that very special 2003 original. Buy the 20th Anniversary remaster, and you'll get a unique blend of photojournalism meets Wind Waker-esque action-adventuring, stealthing, and semi-open-water exploration. Happy Bday, Beyond Good & Evil.

    This Day in Gaming
    - Daytona USA: CCE (SAT) 1996. eBay
    - Dragon Ball Z: Budokai (PS2) 2002. eBay
    - Beyond Good & Evil (PS2) 2003. Redux
    - Amped 2 (XB) 2003. eBay
    - WoW: Wrath of the Lich King (PC) 2008. eBay
    - DuckTales: Remastered (PC) 2013. Get
    - This War of Mine (PC) 2014. Get

    Nice Savings for Nintendo Switch

    Hogwarts Legacy: In almost every way, Hogwarts Legacy is the Harry Potter RPG I've always wanted to play. It's downright magical. 9/10, amazing.
    - Ring Fit Adventure (-37%) - A$37
    - Oddworld Stranger's Wrath HD (-33%) - A$29.95
    - Batman: Arkham Trilogy (-50%) - $44.97
    - Scribblenauts Showdown (-91%) - $4.94
    - LEGO Skywalker Saga Del. (-80%) - $19.99
    - LEGO DC Super-Villains Del. (-91%) - $9.89
    - LEGO Marvel Super Heroes 2 (-90%) - $8.99
    - LEGO Worlds (-87%) - $6.49
    - LEGO CITY Undercover (-92%) - $7.19

    Expiring Recent Deals
    - AC: Rebel Collection (-32%) - A$55
    - Just Dance 2020 (-21%) - A$44.58
    - Super Mario Bros. Wonder (-20%) - A$64
    - Borderlands Legendary Col. (-80%) - A$17.99
    - Borderlands: Handsome Col. (-75%) - A$17.48
    Or gift a Nintendo eShop Card.

    Switch Console Prices: How much to Switch it up? Switch Zelda: $629 $509 | Switch Original: $499 $429 | Switch OLED Black: $539 $469 | Switch OLED White: $539 $489 | Switch Lite: $329 $299 | Switch Lite Hyrule: $339 $309

    Purchase Cheap for PC

    GOG RPGs Aplenty: I'm talkin' Fallout 4: GOTY ($21.99 / -60%) | Deus Ex GOTY ($1.49 / -86%) | Baldur's Gate II: Enhanced ($8.69 / -70%) | Disco Elysium Final Cut ($14.29 / -75%) | Icewind Dale: Enhanced ($8.69 / -70%) | Anachronox ($1.39 / -86%) and more. See it at GOG.
    - Warhammer: Vermintide 2 (-95%) - A$2.09
    - Prey + Rage 2 (-91%) - A$7.44
    - Total War: WARHAMMER III (-65%) - A$35.48
    - The Escapists (-78%) - A$5.93
    - Overcooked 2 (-78%) - A$7.90

    Expiring Recent Deals
    - Shadow Tactics: BotS (-90%) - A$6.10
    - Shin Megami Tensei V (-39%) - A$70.11
    - The Invincible (-55%) - A$20.42
    - Logitech G733 headset (-45%) - A$164
    - Logitech G203 mouse (-37%) - A$43
    Or just get a Steam Wallet Card.

    PC Hardware Prices: Slay your pile of shame. Official launch in Nov. Steam Deck 256GB LCD: $649 | Steam Deck 512GB OLED: $899 | Steam Deck 1TB OLED: $1,049. See it at Steam.

    Exciting Bargains for Xbox

    Turtle Beach VelocityOne: I can vouch for this as it's brilliant for Microsoft Flight Sim and stuff like Ace Combat 7. Not exactly the most advanced flight peripheral, but certainly a robust entry point for anybody looking for a more authentic stick jockey experience.

    Expiring Recent Deals
    - Jedi Survivor (-50%) - A$55.29
    - Turtle Beach Stealth Ultra controller (-$57) - A$272
    - 1TB Expansion Card (-$58) - A$242
    - Mortal Kombat 1 (-53%) - A$36
    - Lego Jurassic World (-65%) - A$22.44
    Or just invest in an Xbox Card.

    Xbox Console Prices: How many bucks for a 'Box? Series S Black: $549 $513 | Series S White: $499 $449 | Series X: $799 $776 | Series S Starter: N/A

    Pure Scores for PlayStation

    DualSense Silver: Quite a fetching DualSense is the silver fox. I don't believe I've ever seen it as low as this, so strike now.
    - GTA Trilogy [Updated] (-59%) - A$40.95
    - Dragon Quest III HD-2D Remake (-10%) - A$89
    - Prince Of Persia: TLC (-53%) - A$28
    - FF VII: Rebirth (-46%) - A$65
    - Sand Land (-53%) - A$47
    - Lost Judgment (-60%) - A$39.80
    - Judgment (-47%) - A$34.43

    Expiring Recent Deals
    Or purchase a PS Store Card.

    PS5 Pro Enhanced Bargain: Need a cheap Pro showcase title? Ratchet & Clank: Rift Apart - $124.95 / $79. Insomniac's Pro Enhancement approach on the Spidey titles is mirrored here, with two modes: "Fidelity Pro" and "Performance Pro." The former targets 30 fps and lets you individually tune the ray tracing features to achieve higher frame rates with the VRR and 120 Hz Display options. My recommendation is Performance Pro, as it targets a silky 60 fps and uses PSSR to retain (almost) all of the ray tracing stuff. You will get RT Ambient Occlusion, but not the "High" settings of RT Key Light Shadows (better shadowing at mid-to-extreme distances) or RT Reflections (smoother animations within reflections). Again, I'm all about fluid action that still blows minds like a RYNO 8 to the face, so this is my pick.

    PlayStation Console Prices: What you'll pay to 'Station. PS5 Pro: $1,199 | PS5 Slim Disc: $799 $759 | PS5 Slim Digital: $679 | PS VR2: $879 | PS VR2 + Horizon: $959 | PS Portal: $329

    Legit LEGO Deals

    Harry Potter Advent Calendar: 'Tis soon to be the season for opening small cardboard hatches filled with delightful micro-builds and minifigs. Get this cheap. Make it magical.

    Adam Mathew is our Aussie deals wrangler. He plays practically everything, often on YouTube.
  • WWW.DENOFGEEK.COM
    New Rey Movie Rumors Are Very Worrying for the Future of Star Wars
    It's no secret that the film side of Star Wars has really struggled to get off the ground again following the end of the Sequel Trilogy. Countless Star Wars movie projects have been announced in the last five years, but The Mandalorian & Grogu is the only film that's actually completed production since The Rise of Skywalker. A trilogy of films said to be helmed by The Last Jedi director Rian Johnson was announced and then put on hold indefinitely; new movies to be developed by Game of Thrones creators David Benioff and D.B. Weiss were shelved; Patty Jenkins's exciting Rogue Squadron movie was announced, then shelved, then put back in development; and no one knows what the heck is going on with Taika Waititi's movie. There's apparently a Jedi origin movie coming from Indiana Jones director James Mangold at some point, but nobody seems to know when. In short, things are a mess for Star Wars on the big screen.

    There was at least a modicum of hope that the upcoming Rey movie, which is rumored to follow the hero as she establishes a new Jedi Order after the events of the Sequel Trilogy, would be the next film to hit the big screen and usher Star Wars movies into a new era. But a series of creative shakeups on the project, as well as new reports about a potential continuation of the Skywalker saga developed by producer Simon Kinberg, have muddled Rey's return, too.

    The Rey movie has already lost several writers at the development stage, each shakeup delaying the project just a bit more, with Steven Knight (Peaky Blinders) stepping away from the film in October. Knight was already the second writer on the project, following Damon Lindelof and Justin Britt-Gibson. Director Sharmeen Obaid-Chinoy is still attached to direct the movie. But when she and Ridley will actually get to make their film is a big question, and so is how the movie will fit alongside Kinberg's trilogy plans, which, according to a report from THR, may also use Rey.

    There's been a lot of speculation as to whether or not Kinberg's planned Star Wars trilogy is, in fact, another expansion of the central Skywalker Saga or something else entirely, but the fact that we're even having this conversation doesn't bode well for the franchise's cinematic future or its central star. Let's face it: Lucasfilm doesn't have a great track record when it comes to carving out a clear path for Rey (or any of the Sequel Trilogy characters, for that matter). The Last Jedi and The Rise of Skywalker very overtly clashed with each other in terms of not only Rey's identity but also the overarching themes of her journey. Clashing visions hurt Rey on the big screen the last time around.

    We, of course, don't know for sure that Kinberg is working on his own Rey movies, and if he is, how closely he's coordinating his take with Obaid-Chinoy's movie. (There's always the possibility that Obaid-Chinoy's film has now become the first installment of that trilogy, but we're speculating here.) But the stops and starts regarding Rey's film future are already a little worrying. According to industry insiders via THR, Rey is the most valuable cinematic asset, in some ways maybe the only one, Star Wars has right now, as the Sequel Trilogy closed the book on all of the franchise's biggest characters, including Luke, Han, and Leia. Which at least means that we'll likely get to see Rey again at some point, as long as Daisy Ridley is willing to reprise her role.
    However, this language is indicative of so many problems with Star Wars and the industry at large right now. By looking at characters and their stories as little more than numbers and dollar signs and nostalgia grabs, we're losing a lot of the heart that drew many of us to this franchise in the first place. For many women and young girls, watching Rey's journey (at least until The Rise of Skywalker) was and is inspiring. She was a nobody from a desert planet who was able to discover this power within herself and channel it into saving herself and the galaxy from oppression. Before she was a Palpatine or a Skywalker, she was just Rey, and it was easy for us to see ourselves in her and her desire to prove herself.

    Rey deserves more time on the big screen. She deserves to carve her own path forward and make her own legacy rather than only being a vessel to carry the legacy of others. But it's hard to get excited about her potential future when she's being passed around by creatives and executives like the latest toy on Christmas.

    What's worse, this just seems to be the way Lucasfilm operates now, according to insiders. There are multiple projects in the works that may or may not overlap characters and timelines. Some writers and directors are privy to what their peers are working on, but others aren't. "It's a different way of development," according to an insider. "There's so much parallel work going on."

    All of this parallel work feels akin to throwing something at the wall and hoping it sticks. That might work for some people, but for a franchise as large as this one, it doesn't seem to be working thus far. What happens if Rey's solo movie takes her in one direction, but Kinberg sees her future differently? It's The Rise of Skywalker all over again. (Seriously, that film is absolutely dreadful.)

    Rebellions may be built on hope, but the current trajectory of Star Wars movies is far from hopeful. At this point, it's hard to even imagine a world where either of these Rey projects, let alone both of them, actually makes it to our screens. Star Wars needs to choose a direction and stick with it, for better or for worse. Otherwise we're probably not going to see anything outside of the Mandoverse for the foreseeable future.
  • WWW.DENOFGEEK.COM
    Bad Sisters Season 2 Cast: Meet the New Characters
    Irish comedy-drama Bad Sisters is back on Apple TV+. Season one, adapted from the Belgian original Clan, was one of the finest shows of 2022, and season two picks up two years after its dramatic conclusion. Joining the Garvey sisters (Eva, Grace, Ursula, Bibi and Becka, played respectively by Sharon Horgan, Anne-Marie Duff, Eva Birthistle, Sarah Greene and Eve Hewson) this time around are a bunch of returning favourites as well as a few newcomers. Find out more about Fiona Shaw's tricky Angelica, Thaddea Graham's fresh-faced detective Una and some more familiar faces below. And if you need a recap of what went on last time to jog your memory, we have you covered here.

    Fiona Shaw as Angelica Collins

    The sister of Roger Muldoon (Grace and JP's former neighbour), Angelica is introduced with the subtitle "The Wagon". Irish readers won't need an explanation of that slang term, but for anybody who does, it's a derogatory term for an extremely obnoxious, unlikeable and strong-willed woman, and seems to be a fitting one for Angelica. The new character is played by Irish stage and screen actor Fiona Shaw, who'll be known to a generation forever as Petunia Dursley in the Harry Potter film series, but who has a long career in the theatre and in film and television, including recent roles in Killing Eve, True Detective, Baptiste and many more, all the way back to a part in the 1989 feature film My Left Foot.

    Thaddea Graham as Una Hoolihan

    Newly qualified police detective DC Una Hoolihan is a new addition to Loftus' investigatory team. She's played by a Northern Irish actor who's previously had a lead role in Netflix's Sherlock Holmes fantasy The Irregulars, as well as playing Bel in Doctor Who: Flux, Sarah in Sex Education and Vivian in BBC Three's Wreck.

    Owen McDonnell as Ian Reilly (L) & Peter Claffey as Joe Walsh (R)

    Killing Eve fans will recognise new character Ian for being played by Owen McDonnell, the same actor who played Niko, husband of Sandra Oh's Eve Polastri in the hit assassin thriller. McDonnell also recently played Joe Gargery in Steven Knight's adaptation of Great Expectations, Raymond in three episodes of the last series of True Detective, and many more screen roles from Mount Pleasant to Silent Witness. New character Joe Walsh, who's involved with Becka Garvey, is played by Peter Claffey who, coincidentally, also appeared as a different supporting character named Callum in Bad Sisters season one. Claffey will soon be seen in the lead role of Dunk in Game of Thrones spinoff A Knight of the Seven Kingdoms: The Hedge Knight, and previous to this has appeared in Harry Wild and Vikings: Valhalla, and played professional rugby.

    Lorcan Cranitch as Det. Supt. Howlett

    Loftus' boss, Detective Superintendent Howlett, joins season two, played by Dublin-born actor Lorcan Cranitch, who has a long and healthy screen career including the role of DS Jimmy Beck in Cracker, DCS Jackie Twomey in BBC crime drama Bloodlands, Sean in Ballykissangel, Erastes Fulman in HBO's Rome, and many, many more.

    RETURNING CAST

    Barry Ward as Fergal Loftus

    Irish actor Barry Ward is rarely off the screen, with a huge number of roles recently including Thomas Cromwell in Anne Boleyn, Sawyer in Britannia, Barry in Save Me, White Lines and many more. As DI Loftus in Bad Sisters season two, Ward has an expanded role as the investigator looking into the discovery of a dead body that threatens to dig up the Garvey sisters' secrets.
    Michael Smiley as Roger Muldoon

    Grace's kindly, churchgoing neighbour Roger played a small but key role in season one (read our spoiler-filled recap here) as Grace's confidante and ally. In season two, he's still suffering from his dealings with JP, and living with his difficult sister Angelica, played by Fiona Shaw. Northern Irish actor Michael Smiley is well-loved for a great many roles, including his unforgettable turn as Tyres in Channel 4 sitcom Spaced, and much more recently Luther, Dead Still, Temple and The Curse.

    Brian Gleeson & Daryl McCormack as Thomas and Matt Claffin

    The Claffin brothers were hoping to find the Garvey sisters guilty of murder to stop their family insurance firm from having to pay out on JP's hefty claim in season one, but then younger brother Matthew fell for Becka and things got complicated. They return in season two, still played by Brian Gleeson (The Lazarus Project, Frank of Ireland, Peaky Blinders) and Daryl McCormack (The Woman in the Wall; Good Luck to You, Leo Grande; also Peaky Blinders).

    ALSO RETURNING

    Alongside the five Garvey sisters will be Saise Quinn as Grace's teenage daughter Blánaid Williams, Yasmine Akrim as Bibi's wife Nora Garvey, Jonjo O'Neill as Ursula's husband Donal, and Aidan McCann, Kate Higgins and Connor O'Donnell as Ursula and Donal's kids David, Molly and Michael.

    Bad Sisters season two streams weekly on Wednesdays on Apple TV+. Episode three will land on November 20.
  • WWW.ELDERSCROLLSONLINE.COM
    Elder Scrolls Online Update 44 Brings Updated Battlegrounds, New Companions, and More
  • Xbox Introduces New AI Solutions to Protect Players from Unwanted Messages in its Multifaceted Approach to Safety
    As we continue our mission at Xbox to bring the joy and community of gaming to even more people, we remain committed to protecting players from disruptive online behavior, creating experiences that are safer and more inclusive, and continuing to be transparent about our efforts to keep the Xbox community safe.

    Our fifth Transparency Report highlights some of the ways we're combining player-centric solutions with the responsible application of AI to continue amplifying our human expertise in the detection and prevention of unwanted behaviors on the platform, and ultimately to ensure we continue to balance and meet the needs of our growing gaming community.

    During the period from January 2024 to June 2024, we focused our efforts on blocking disruptive messaging content from non-friends and on the detection of spam and advertising, with the launch of two AI-enabled tools that reflect our multifaceted approach to protecting players. Among the key takeaways from the report:

    Balancing safety and authenticity in messaging: We introduced a new approach to detect and intercept harmful messages between non-friends, contributing to a significant rise in disruptive content prevented. From January to June, a total of 19M pieces of Xbox Community Standards-violating content were prevented from reaching players across text, image, and video. This new approach balances two goals: safeguarding players from harmful content sent by non-friends, while still preserving the authentic online gaming experiences our community enjoys. We encourage players to use the New Xbox Friends and Followers Experience, which gives more control and flexibility when connecting with others.

    Safety boosted by player reports: Player reporting continues to be a critical component in our safety approach. During this period, players helped us identify an uptick in spam and advertising on the platform. We are constantly evolving our strategy to prevent creation of inauthentic accounts at the source, limiting their impact on both players and the moderation team. In April, we took action on a surge of inauthentic accounts (1.7M cases, up from 320K in January) that were affecting players in the form of spam and advertising. Players helped us identify this surge and pattern by providing reports in Looking for Group (LFG) messages. Player reports doubled to 2M for LFG messages and were up 8% to 30M across content types compared to the last transparency report period.

    Our dual AI approach: We released two new AI tools built to support our moderation teams. These innovations not only prevent the exposure of disruptive material to players but also allow our human moderators to prioritize their efforts on more complex and nuanced issues. The first of these new solutions is Xbox AutoMod, a system that launched in February and assists with the moderation of reported content. So far, it has handled 1.2M cases and enabled the team to remove content affecting players 88% faster. The second AI solution launched in July and proactively works to prevent unwanted communications. We have directed these solutions to detect spam and advertising and will expand them to prevent more harm types in the future.
    Underpinning all these new advancements is a safety system that relies on both players and the expertise of human moderators to ensure the consistent and fair application of our Community Standards, while improving our overall approach through a continuous feedback loop.

    At Microsoft Gaming, our efforts to drive innovation in safety and improve our players' experience also extend beyond the Transparency Report:

    Prioritizing player safety with Minecraft: Mojang Studios believes every player can play their part in keeping Minecraft a safe and welcoming place for everyone. To help with that, Mojang has released a new feature in Minecraft: Bedrock Edition that sends players reminders about the game's Community Standards when potentially inappropriate or harmful behavior is detected in text chat. This feature is intended to remind players on servers of the expected conduct and create an opportunity for them to reflect and change how they communicate with others before an account suspension or ban is required. Elsewhere, since the Official Minecraft Server List launched a year ago, Mojang, in partnership with GamerSafer, has helped hundreds of server owners improve their community management and safety measures. This has helped players, parents, and trusted adults find the Minecraft servers committed to the safety and security practices they care about.

    Upgrades to Call of Duty's anti-toxicity tools: Call of Duty is committed to fighting toxicity and unfair play. To curb disruptive behavior that violates the franchise's Code of Conduct, the team deploys advanced tech, including AI, to empower moderation teams and combat toxic behavior. These tools are purpose-built to help foster a more inclusive community where players are treated with respect and are competing with integrity. Since November 2023, over 45 million text-based messages have been blocked across 20 languages, and exposure to voice toxicity dropped by 43%. With the launch of Call of Duty: Black Ops 6, the team rolled out support for voice moderation in French and German, in addition to existing support for English, Spanish, and Portuguese. As part of this ongoing work, the team also conducts research on prosocial behavior in gaming.

    As the industry evolves, we continue to build a gaming community of passionate, like-minded and thoughtful players who come to our platform to enjoy immersive experiences, have fun, and connect with others. We remain committed to platform safety and to creating responsible AI by design, guided by Microsoft's Responsible AI Standard and through our collaboration and partnership with organizations like the Tech Coalition. Thank you, as always, for contributing to our vibrant community and for being present with us on our journey.

    Some additional resources:
    - Share feedback via the Xbox Insiders program or on the Xbox Support website
    - Read our Xbox Community Standards
    - Learn about the Xbox Family Settings app and download the app when you're ready
    - Keep up to speed on Privacy and Online Safety
    - Remain informed on How to Report a Player and How to Submit a Case Review
    - Discover Minecraft Education's immersive learning worlds: Privacy Prodigy, CyberSafe: Home Sweet Hmm, and Good Game
    - Need help? Request a Call, Chat Online, and More
  • THENEXTWEB.COM
    Why learning 10 programming languages doesn't make you a more interesting job candidate
    New data from LinkedIn on the most in-demand jobs on the platform in the third quarter of this year reveals that software engineering is in second place. Just pipped to the post by sales roles, it is clear that software engineering and development pros are in high demand. Additionally, full stack engineers and application developers feature in the top ten in-demand roles, at places eight and ten respectively.

    Software roles are so prominent because software powers pretty much everything. According to McKinsey, these days, "Every company is a software company." Traditional bricks-and-mortar businesses are now increasingly digital-first. Think of your bank or your supermarket, for example. The way we use these businesses has radically changed, with services increasingly offered online.

    Media are software companies now too. Hundreds of workers at The New York Times Tech Guild went on strike the day before the US election. They include data analysts, project managers, and software developers, and make up around 600 of the publication's tech employees. These workers create and maintain the back-end systems that power The New York Times, yes, including Wordle. The fact that they not only represent about 10% of the paper's total workforce, but are so essential to its operations, is yet another sign of our reliance on software solutions and the people who provide them.

    McKinsey has established three main reasons why this is the case. Firstly, there is the accelerated adoption of digital products, observed particularly during the pandemic, when we did more online than ever before. Secondly, these days, more of the value in products and services is derived from software. Thirdly, the growth of cloud computing, PaaS, low- and no-code tools, and AI-based programming platforms is growing the sector exponentially.

    Languages to learn

    In such a dynamic sector, it's no surprise that new programming languages are emerging all the time. Consider Mojo, a language designed to combine the simplicity of Python with the efficiency of C++ or Rust. Or how about Finch, a new language from MIT that's designed to support both flexible control flow and diverse data structures. Additionally, older languages are having a resurgence, such as Go, because it's good for security and AI, both hot-button topics right now.

    Stack Overflow's 2024 Developer Survey highlighted JavaScript, HTML/CSS, and Python as the top three languages respondents had used for extensive development work over the past year. Additionally, the US White House Office of the National Cyber Director (ONCD) issued a recent report advising that programmers should move to memory-safe languages. Given all that, it is understandable if, as a developer, you're really not sure what languages you should be using, what you should learn, and what you can think about dropping.

    Broad v specific

    Does this mean you should be aiming to become proficient in up to ten languages?
    A recent Reddit thread discussed just that, with one user arguing, "There is absolutely no point of learning 10 languages; just pick two, pick a specific field, and become the best at it." Others agreed, with one contributor saying that "people are fixated on finding the hottest new language, the hottest new tech stack, or the latest trends, but this is not gonna help you."

    Another user pointed out that "specialisation is good but you should have a general understanding of the type of languages and how they work, then you can learn new languages and tech stack easily." For many developers, good foundational knowledge is more important (and more valuable to their long-term career) than having a laundry list of programming languages on their CV that they may only be semi-proficient in.

    "Learning a stack on YouTube and building toy projects is easy," pointed out another thread contributor. "Building specialisation takes a lot more effort and many years of real life experience." If you do decide to specialise in a couple of languages, that decision should be, at least in part, influenced by what you enjoy doing most.

    "Do what you think is good for you," says a thread contributor. "Once you become really good, you'll automatically stand out from the crowd by being better than 90% of the mediocre developers." Wise advice.

    Ready to find your next programming role? Check out The Next Web Job Board. Story by Kirstie McDermott.