• Apple reaffirms privacy as a tentpole feature in Siri after lawsuit settlement
    appleinsider.com
    While affected customers await their $20, Apple has shared a statement on how it handles user data in regard to Siri, reaffirming that voice recordings aren't being used to sell you Air Jordans.

    Apple isn't using Siri data for ad revenue

    Apple released Siri in 2011 as the first built-in smart assistant for smartphones. From that moment through to today, the company has asserted that users maintain absolute privacy while using the digital assistant. However, it wouldn't be the United States if stories about contractors hearing conversations via Siri recordings didn't turn into a class action lawsuit. Despite the frivolity of the lawsuit, which was based on users claiming Apple was selling voice recordings to ad agencies so they would see ads for shoes, it ultimately ended in a settlement.
  • Pokemon Presents 2025 Seemingly Leaked Via Pokemon GO Rumour
    gamingbolt.com
    For several years running, The Pokemon Company has premiered Pokemon Presents presentations in the month of February to commemorate Pokemon Day (held on February 27 every year), and it seems that tradition isn't going to be bucked this year either. As spotted by @mattyoukhana_ on Twitter, digging into mobile title Pokemon GO's files has unearthed evidence of an upcoming Pokemon Presents celebration, with references to "Pokémon Presents 2025 x GO Tour: Unova Timed Research" spotted in the strings. That's likely pointing to an upcoming event in Pokemon GO that will be detailed as part of the presentation. Given that the last several years have brought a February Pokemon Presents, 2025 following the same trend doesn't come as a huge surprise. As for what the presentation will bring, that's anyone's guess. Presumably, Pokemon Legends: Z-A, which is due out sometime this year, will be the main focus, while The Pokemon Company could also unveil new content for the wildly successful Pokemon Trading Card Game Pocket.

    "Thanks for confirming Pokémon Presents 2025, Niantic: pic.twitter.com/0wjT069t8C" Matt (@mattyoukhana_), January 8, 2025
  • Nintendo Switch 2 Will Sell 4.3 Million Units in US in 2025, Analyst Predicts
    gamingbolt.com
    Nintendo has confirmed on multiple occasions that it will officially unveil the successor to the Switch before April 1, and multiple reports and leaks in recent days all seem to be pointing to an announcement being imminent. Amidst the rising anticipation for the new console, Circana analyst Mat Piscatella has chimed in with some interesting sales estimates. Taking to Bluesky, Piscatella said that, per his projections, the Nintendo Switch 2 (which, as per reports, is what the console will officially be called) is expected to sell 4.3 million units in the US alone in 2025, which would account for roughly a third of all console hardware sales in the US for the year (not counting portable PC devices such as the Steam Deck). Piscatella says those projections are based on the console launching in the first half of the year.

    Interestingly enough, despite the Switch 2's projected impressive first-year sales in the region, the analyst expects the PS5 to be the highest-selling console in the US in 2025 regardless. He also says that, as far as the Switch 2 is concerned, he expects it to face stock shortages for several months after a significant early demand surge. How long it'll be before Nintendo chooses to officially lift the lid on the Switch 2 is the big question on everyone's mind right now, but it seems the wait isn't going to be that long. Stay tuned for more updates in the coming days and weeks.

    "Seeing as how an announcement appears to be coming soon (but who knows) I have Nintendo's next hardware device selling 4.3 million units in the US in 2025 (assuming 1H launch), accounting for approximately 1/3rd of all video game console hardware units sold in the year (excluding PC Portables)." Mat Piscatella (@matpiscatella.bsky.social), 2025-01-08T16:09:43.754Z

    "Expecting to see hardware constraints for several months after a significant early demand surge. And units sold will, of course, be dependent upon manufacturing capabilities and will. I still expect PlayStation 5 to rank 1st in overall hardware units sold in the US during the year." Mat Piscatella (@matpiscatella.bsky.social), 2025-01-08T16:11:43.061Z
  • Tomb Raider 4-6 Remastered Will Run at 4K and 60 FPS on PS5 and Xbox Series X
    gamingbolt.com
    The wait for the next mainline Tomb Raider game continues, and though it's yet unknown how much longer that wait is going to drag on, series fans do at least have more beloved classics to look forward to revisiting. Hot on the heels of Tomb Raider 1-3 Remastered, handled by Aspyr, Tomb Raider 4-6 Remastered is on the horizon, promising an enhanced collection of the series' second trilogy of titles. Now, more information has been revealed about what to expect from the collection on the technical side of things.

    Speaking in an interview with GamingBolt, Matthew Ray, brand manager at Aspyr, revealed that on PS5 and Xbox Series X, Tomb Raider 4-6 Remastered will run at 4K and 60 FPS. Meanwhile, on Xbox Series S, you can expect it to run at 1440p and 60 FPS. Of course, as a remaster of considerably aged titles, the collection would have been expected to hit high targets, especially on high-end hardware, so it's good to know that it won't disappoint on that front. Tomb Raider 4-6 Remastered is due out for PS5, Xbox Series X/S, PS4, Xbox One, Nintendo Switch, and PC on February 14, and will retail for $29.99.
  • Apple says Siri isn't sending your conversations to advertisers
    www.theverge.com
    Apple is refuting rumors that it ever let advertisers target users based on Siri recordings, in a statement published Wednesday evening describing how Siri works and what it does with data. The section specifically responding to the rumors reads: "Apple has never used Siri data to build marketing profiles, never made it available for advertising, and never sold it to anyone for any purpose. We are constantly developing technologies to make Siri even more private, and will continue to do so."

    The conspiracy theory the company is responding to resurfaced last week after Apple agreed to pay $95 million to settle a lawsuit over users whose conversations were captured by its Siri voice assistant and potentially overheard by human employees. Apple's settlement addresses an issue that The Guardian reported in 2019: human contractors tasked with reviewing anonymized recordings and grading whether the trigger was activated intentionally would sometimes receive recordings of people discussing sensitive information. But the suit doesn't include any reference to selling data for marketing purposes. However, reports about the settlement noted that in earlier filings, like this one from 2021, some of the plaintiffs claimed that after they mentioned brand names like Olive Garden, Easton bats, Pit Viper sunglasses, and Air Jordans, they were served ads for corresponding products, which they attributed to Siri data.

    Apple's statement tonight says that it does not retain audio recordings of Siri interactions unless users explicitly opt in to help improve Siri, and even then, the recordings are used solely for that purpose; users can easily opt out at any time. Facebook responded to similar theories in 2014 and 2016, before Mark Zuckerberg addressed the question directly, saying "no" while being grilled by Congress over the Cambridge Analytica scandal in 2018.

    So, if Apple (and Facebook, Google, etc.) is telling the truth, then why would you see an ad later for something you only talked about? There are other explanations. Attempts to check the rumors out include an investigation in 2018 that didn't find evidence of microphone spying but did discover that some apps secretly recorded on-screen user activity that they shipped to third parties. Ad targeting networks also track data from people logged onto the same network or who have spent time in the same locations, so even if one person didn't type in that search term, maybe someone else did. They can buy data from brokers who collect reams of detailed location tracking and other info from the apps on your phone, and both Google and Facebook pull in data from other companies to build out profiles based on your purchasing habits and other information.
  • Researchers from SynthLabs and Stanford Propose Meta Chain-of-Thought (Meta-CoT): An AI Framework for Improving LLM Reasoning
    www.marktechpost.com
    Large Language Models (LLMs) have significantly advanced artificial intelligence, particularly in natural language understanding and generation. However, these models encounter difficulties with complex reasoning tasks, especially those requiring multi-step, non-linear processes. While traditional Chain-of-Thought (CoT) approaches, which promote step-by-step reasoning, improve performance on simpler tasks, they often fall short in addressing more intricate problems. This shortcoming stems from CoT's inability to fully capture the latent reasoning processes that underpin complex problem-solving.

    To tackle these challenges, researchers from SynthLabs and Stanford have proposed Meta Chain-of-Thought (Meta-CoT), a framework designed to model the latent steps necessary for solving complex problems. Unlike classical CoT, which focuses on linear reasoning, Meta-CoT incorporates a structured approach inspired by cognitive science's dual-process theory. The framework seeks to emulate deliberate, logical, and reflective thinking, often referred to as System 2 reasoning. Meta-CoT integrates instruction tuning, synthetic data generation, and reinforcement learning to help models internalize these reasoning processes. By doing so, it bridges the gap between conventional reasoning methods and the complexities of real-world problem-solving. The framework employs algorithms such as Monte Carlo Tree Search (MCTS) and A* search to generate synthetic data that reflects latent reasoning processes. This data, combined with process supervision, enables models to move beyond simplistic left-to-right token prediction and better approximate the true reasoning pathways required for complex tasks.

    Key Components and Benefits

    Meta-CoT incorporates three main components:

    1. Process Supervision: Models are trained on intermediate reasoning steps generated through structured search. This training provides explicit rewards for following reasoning processes, allowing iterative refinement of outputs until a correct solution is reached.
    2. Synthetic Data Generation: Using search algorithms like MCTS and A*, researchers generate Meta-CoT traces that mimic the hidden processes behind complex problem-solving. These traces enable models to internalize structured reasoning strategies.
    3. Reinforcement Learning: After initial instruction tuning, models undergo reinforcement learning to fine-tune their ability to generate and verify Meta-CoT solutions. This ensures that reasoning aligns with the true data generation processes.

    This approach enables LLMs to address challenges that traditional CoT cannot, such as solving high-difficulty mathematical reasoning problems and logical puzzles. By formalizing reasoning as a latent variable process, Meta-CoT expands the range of tasks LLMs can handle.

    Evaluation and Insights

    The researchers evaluated Meta-CoT on demanding benchmarks, including the Hendrycks MATH dataset and Olympiad-level reasoning tasks. The results highlight Meta-CoT's effectiveness:

    - Improved Accuracy: Models trained with Meta-CoT showed a 20-30% improvement in accuracy on advanced reasoning tasks compared to baseline CoT models.
    - Scalability: As problem complexity increased, the performance gap between Meta-CoT and traditional CoT widened, demonstrating Meta-CoT's capacity to handle computationally demanding tasks.
    - Efficiency: Structured search strategies within Meta-CoT reduced inference time for complex problems, making it a practical solution for resource-constrained environments.

    Experiments revealed that Meta-CoT helps LLMs internalize search processes, enabling self-correction and optimization of reasoning strategies. These capabilities mimic aspects of human problem-solving and mark a significant step forward in LLM development.

    Conclusion

    Meta-CoT offers a thoughtful and structured approach to enhancing the reasoning capabilities of LLMs. By modeling latent reasoning processes and incorporating advanced search techniques, it addresses the limitations of traditional CoT methods. The framework's success in empirical evaluations underscores its potential to transform how LLMs approach complex tasks. As further refinements are made, Meta-CoT is poised to become a foundational element in developing next-generation AI systems capable of tackling intricate reasoning challenges in various domains, from mathematics to scientific discovery.

    Check out the Paper. All credit for this research goes to the researchers of this project. The post Researchers from SynthLabs and Stanford Propose Meta Chain-of-Thought (Meta-CoT): An AI Framework for Improving LLM Reasoning appeared first on MarkTechPost.
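The process-supervision idea the article describes, rewarding intermediate reasoning steps discovered by search and recording the resulting trace for training, can be illustrated with a toy sketch. This is not the paper's code: the state space (building up a number toward a target), the step reward, and the use of simple best-first search in place of full MCTS or A* are all illustrative stand-ins.

```python
import heapq

# Toy state space: reach a target number by applying small "reasoning steps".
# A process reward scores each intermediate state, mirroring (in spirit) the
# per-step supervision Meta-CoT applies to search-generated traces.

TARGET = 24

def reward(value):
    """Process reward: intermediate states closer to the target score higher."""
    return -abs(TARGET - value)

def search_trace(start, ops, max_steps=5):
    """Best-first search that records the intermediate steps (the 'trace')."""
    frontier = [(-reward(start), start, [start])]
    while frontier:
        _, value, trace = heapq.heappop(frontier)
        if value == TARGET:
            return trace  # a synthetic reasoning trace, usable as training data
        if len(trace) >= max_steps + 1:
            continue
        for _name, fn in ops:
            nxt = fn(value)
            heapq.heappush(frontier, (-reward(nxt), nxt, trace + [nxt]))
    return None

ops = [("add3", lambda v: v + 3), ("double", lambda v: v * 2)]
print(search_trace(3, ops))  # finds the step sequence 3 -> 6 -> 12 -> 24
```

In the framework's terms, the returned trace plays the role of a Meta-CoT trace: it exposes the search steps that led to the answer, rather than only the final solution.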
  • This AI Paper Introduces Virgo: A Multimodal Large Language Model for Enhanced Slow-Thinking Reasoning
    www.marktechpost.com
    Artificial intelligence research has steadily advanced toward creating systems capable of complex reasoning. Multimodal large language models (MLLMs) represent a significant development in this journey, combining the ability to process text and visual data. These systems can address intricate challenges like mathematical problems or reasoning through diagrams. By enabling AI to bridge the gap between modalities, MLLMs expand their application scope, offering new possibilities in education, science, and data analysis.

    One of the primary challenges in developing these systems is integrating visual and textual reasoning seamlessly. Traditional large language models excel at processing either text or images but fall short when tasked with combining these modalities for reasoning. This limitation hinders their performance in multimodal tasks, particularly in scenarios requiring extended and deliberate thought processes, often termed slow thinking. Addressing this issue is crucial for advancing MLLMs toward practical applications where multimodal reasoning is essential.

    Current approaches to enhancing reasoning capabilities in MLLMs fall into two broad strategies. The first involves using structured search methods, such as Monte Carlo tree search, guided by reward models to refine the reasoning path. The second focuses on training LLMs with long-form reasoning instructions, often structured as chains of thought (CoT). However, these methods have primarily concentrated on text-based tasks, leaving multimodal scenarios relatively underexplored. Although a few commercial systems like OpenAI's o1 model have demonstrated promise, their proprietary nature limits access to the methodologies, creating a gap for public research.

    Researchers from Renmin University of China, Baichuan AI, and BAAI have introduced Virgo, a model designed to enhance slow-thinking reasoning in multimodal contexts. Virgo was developed by fine-tuning the Qwen2-VL-72B-Instruct model, leveraging a straightforward yet innovative approach: training the MLLM on textual long-thought data, an unconventional choice intended to transfer reasoning capabilities across modalities. This method distinguishes Virgo from prior efforts, as it focuses on the inherent reasoning strengths of the LLM backbone within the MLLM.

    The methodology behind Virgo's development is both detailed and deliberate. The researchers curated a dataset comprising 5,000 long-thought instruction examples, primarily from mathematics, science, and coding. These instructions were formatted to include structured reasoning processes and final solutions, ensuring clarity and reproducibility during training. To optimize Virgo's capabilities, the researchers selectively fine-tuned parameters in the LLM and cross-modal connectors, leaving the visual encoder untouched. This approach preserved the visual processing capabilities of the base model while enhancing its reasoning performance. Further, they explored self-distillation, using the fine-tuned model to generate visual long-thought data and further refine Virgo's multimodal reasoning capabilities.

    Virgo's performance was evaluated across four challenging benchmarks: MathVerse, MathVision, OlympiadBench, and MMMU. These benchmarks included thousands of multimodal problems, testing the model's reasoning ability over text and visual inputs. Virgo achieved remarkable results, outperforming several advanced models and rivaling commercial systems. For example, on MathVision, Virgo recorded 38.8% accuracy, surpassing many existing solutions. On OlympiadBench, one of the most demanding benchmarks, it achieved a 12.4% improvement over its base model, highlighting its capacity for complex reasoning. In addition, Virgo's text-based fine-tuning proved more effective at eliciting slow-thinking reasoning capabilities than multimodal training data, a finding that emphasizes the potential of leveraging textual instructions to enhance multimodal systems.

    The researchers further analyzed Virgo's performance by breaking down results based on difficulty levels within the benchmarks. While Virgo showed consistent improvements on challenging tasks requiring extended reasoning, it saw limited gains on simpler tasks, such as those in the MMMU benchmark. This insight underscores the importance of tailoring reasoning systems to the complexity of the problems they are designed to solve. Virgo's results also revealed that textual reasoning data often outperformed visual reasoning instructions, suggesting that textual training can effectively transfer reasoning capabilities to multimodal domains.

    By demonstrating a practical and efficient approach to enhancing MLLMs, the researchers have contributed significantly to the field. Their work bridges the gap in multimodal reasoning and opens avenues for future research in refining these systems. Virgo's success illustrates the transformative potential of leveraging long-thought textual data for training, offering a scalable solution for developing advanced reasoning models. With further refinement and exploration, this methodology could drive significant progress in multimodal AI research.

    Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project. Nikhil is an intern consultant at Marktechpost. He is pursuing an integrated dual degree in Materials at the Indian Institute of Technology, Kharagpur. Nikhil is an AI/ML enthusiast who is always researching applications in fields like biomaterials and biomedical science. With a strong background in Material Science, he is exploring new advancements and creating opportunities to contribute.
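The selective fine-tuning described above, keeping the visual encoder frozen while updating only the LLM and cross-modal connector parameters, amounts to partitioning a model's parameters by module name. A minimal framework-free sketch of that partitioning step; the module names are hypothetical stand-ins, not Qwen2-VL's actual parameter names:

```python
# Each entry stands in for a model parameter tensor; "requires_grad" plays
# the role of the per-parameter gradient flag in a real training framework.
model_params = {
    "visual_encoder.layer0.weight": {"requires_grad": True},
    "connector.proj.weight": {"requires_grad": True},
    "llm.block0.attn.weight": {"requires_grad": True},
}

def freeze_except(params, trainable_prefixes):
    """Disable gradients for every parameter not under a trainable prefix,
    and return the names that remain trainable."""
    for name, p in params.items():
        p["requires_grad"] = any(name.startswith(pre) for pre in trainable_prefixes)
    return [n for n, p in params.items() if p["requires_grad"]]

# Train only the LLM backbone and cross-modal connector; the visual
# encoder keeps its pretrained weights, as in the Virgo setup.
trainable = freeze_except(model_params, ("llm.", "connector."))
print(trainable)
```

In a real fine-tuning run the same prefix filter would be applied to the model's named parameters before building the optimizer, so only the listed modules receive gradient updates.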
  • Want your Apple Watch to stop opening apps and just show the face? Here's how
    9to5mac.com
    Wish your Apple Watch would always show your watch face when you glance at it? By default, Apple Watch will launch certain apps or show the Smart Stack based on your activity. If you prefer to always see your watch face, however, there are a few things to tweak.

    Live Activities

    watchOS 10, the software that runs on the Apple Watch, introduces support for Live Activities. Like on the iPhone, these special widget-style notifications can update in real time without sending individual alerts, for things like sports score updates and each step of your food delivery. If you always want to see your watch face when you glance at your wrist, Live Activities can get in the way. Fortunately, there's a toggle for these:

    1. Open the Settings app and tap General
    2. Scroll to Auto-Launch and tap to open the section
    3. Tap Live Activities Settings at the top of the list

    From this section, you're able to enable/disable Live Activities or enable/disable auto-launching Live Activities. Turning off Auto-Launch Live Activities still allows you to view Live Activities when you swipe up or use the double tap gesture to open the widgets view, but they will stop taking over your watch face when they're active. A third setting allows you to enable/disable showing Live Activities when your wrist is down and your Apple Watch screen is dimmed. This is turned on by default, but you can change it if you have Live Activities enabled.

    Separately, there's an option in the Live Activities Settings section to control how media apps behave. You can optionally choose to disable Live Activities for media apps and still auto-launch them for other apps that support Live Activities in the Smart Stack. By default, Auto-Launch for media apps is enabled. If this is on, you can change the current default behavior of auto-launching the Smart Stack widgets view to auto-opening the media app instead. This is how media apps behaved with auto-launch in previous watchOS versions.

    More auto-launching apps

    Lastly, you can choose to disable Live Activities for certain apps while still auto-launching Live Activities for other apps. This granular level of control is at the bottom of the Live Activities Settings section. Here you'll find the ability to disable, auto-launch the Smart Stack, or auto-launch the app itself. Supported apps include Alarms, Compass, Mindfulness, Music Recognition, Stopwatch, Timers, Voice Memos, Wallet, and Workout. There's also a section in Settings > General > Auto-Launch to control how auto-launch works when your Apple Watch is submerged. By default, models with the Depth app will auto-launch it when submerged. You can instead choose to keep your Apple Watch on the watch face. While things could change in the future, grasping these three categories will give you full control over always showing your watch face or auto-launching Live Activities, apps, or snorkeling/diving apps.

    Best Apple Watch accessories

    - Apple AirPods 4 for the best experience when listening to audio or making calls with Apple Watch
    - The Anker MagGo Power Bank for Apple Watch charging when you're away from your charger for too long
    - A pzoz Watch Band Organizer Case for elegantly displaying your Apple Watch band collection
    - This Milanese Metal Band for getting a nicer look without spending $100 to $200

    Add 9to5Mac to your Google News feed. FTC: We use income earning auto affiliate links. More. You're reading 9to5Mac, experts who break news about Apple and its surrounding ecosystem, day after day. Be sure to check out our homepage for all the latest news, and follow 9to5Mac on Twitter, Facebook, and LinkedIn to stay in the loop. Don't know where to start? Check out our exclusive stories, reviews, how-tos, and subscribe to our YouTube channel.
  • Apple goes in-depth on its commitment to Siri privacy
    9to5mac.com
    After being hit by a lawsuit over unlawful and intentional recording of Siri interactions, Apple has agreed to pay $95 million in a settlement. Even so, the company has just published an article reaffirming its commitment to privacy and clarifying how Siri works.

    Apple reaffirms its commitment to Siri privacy

    In a post shared on its website for the press, Apple says it is committed to protecting user data and reinforced that the company's products are built from the ground up with privacy technologies. According to Apple, the company has never used Siri data to build marketing profiles and has never offered such data to advertisers. As noted by the company, Siri uses on-device processing when possible, so that requests can be handled offline without the need to send them to Apple's servers. "For example, when a user asks Siri to read unread messages, or when Siri provides suggestions through widgets and Siri search, the processing is done on the user's device," Apple explains. Apple also says that audio of user requests is not shared with the company unless the user chooses to do so as a way of providing feedback.

    In some cases, Siri needs to communicate with Apple's servers, but the company argues that the requests are made anonymously through a random identifier not associated with the user's Apple Account. This process ensures that no one can track the data or identify who's behind the requests. Audio recordings are deleted unless users have chosen to share them with Apple. In the article, Apple also talks about how similar privacy practices apply to Apple Intelligence, which processes most data on-device. For Apple Intelligence requests that require access to larger models, "Private Cloud Compute extends the privacy and security of iPhone into the cloud" to unlock even more intelligence, the company adds.

    Lawsuit over data collected through Siri

    The lawsuit was filed in 2019 and alleged that Apple recorded conversations with Siri without users' consent, and that these conversations were then shared with third-party services that led to targeted ads. All of this would be related to the "Hey Siri" command, which requires the device to always be listening with the microphone on. Despite the company reinforcing its commitment to privacy and clarifying that it has made a lot of changes over the years to make Siri even more private and secure, it has agreed to pay to settle the case. No details are available yet on how to claim your stake of the payout. More information about Apple's privacy policies can be found on the company's website.
  • Today's NYT Connections Hints, Answers and Help for Jan. 9, #578
    www.cnet.com
    Looking for the most recent Connections answers? Click here for today's Connections hints, as well as our daily answers and hints for The New York Times Mini Crossword, Wordle and Strands puzzles.

    I thought the green and purple groups were especially fun in today's Connections. The purple one will have you humming a certain familiar song (yes, that's a hint). And Stephen King fans, you might think you see some of that horror master's titles among the clues. You do, but this puzzle never makes it easy. Read on for today's Connections hints and answers.

    The Times now has a Connections Bot, like the one for Wordle. Go there after you play to receive a numeric score and to have the program analyze your answers. And players who are registered with the Times Games section can now nerd out by following their progress, including number of puzzles completed, win rate, number of times they nabbed a perfect score and their win streak.

    Read more: Hints, Tips and Strategies to Help You Win at NYT Connections Every Time

    Hints for today's Connections groups

    Here are four hints for the groupings in today's Connections puzzle, ranked from the easiest (yellow) group to the tough (and sometimes bizarre) purple group.

    Yellow group hint: You might buy fruit here.
    Green group hint: Roll over! Shake!
    Blue group hint: HR is another one.
    Purple group hint: For spacious skies.

    Answers for today's Connections groups

    Yellow group: Vendor's spot at a market.
    Green group: Dog commands.
    Blue group: Corporate departments.
    Purple group: Last words in "America the Beautiful."

    Read more: Wordle Cheat Sheet: Here Are the Most Popular Letters Used in English Words

    What are today's Connections answers?

    [Image: The completed NYT Connections puzzle for Jan. 9, 2025. NYT/Screenshot by CNET]

    The yellow words in today's Connections: The theme is vendor's spot at a market. The four answers are booth, stall, stand and table.
    The green words in today's Connections: The theme is dog commands. The four answers are come, heel, sit and stay.
    The blue words in today's Connections: The theme is corporate departments. The four answers are finance, IT, legal and sales.
    The purple words in today's Connections: The theme is last words in "America the Beautiful." The four answers are from, sea, shining and to.