• WWW.ENGADGET.COM
    Android phone makers dropped the ball on Qi2 in 2024
Android phones have been the first to feature a bunch of notable standards. They were the first to support 4G, 5G, USB-C (way back in 2015, no less) and in-screen fingerprint sensors. And when it comes to wireless charging, you can trace that lineage all the way back to the Samsung Galaxy S3 from 2012 (though the webOS-powered Palm Pre and its Touchstone charger is the true OG). Unfortunately, when it came to adding support for the Qi2 wireless charging standard to devices in 2024, it feels like Android phone makers were stuck on outdated patch notes.

The Qi2 standard was officially announced in early 2023 during CES. We even gave it an award, as the spec looked to bring 15-watt wireless charging (and possibly more in future revisions), improved safety and, critically, the introduction of Magnetic Power Profiles that make it a cinch to align and attach compatible charging pads. In essence, Qi2 was set to bring the simplicity and ease of use iPhone owners enjoy with MagSafe products to the Android ecosystem.

Not a single phone from any of the top three Android phone makers in the US (Samsung, Google and Motorola) offered support for Qi2 in 2024. Photo by Sam Rutherford

Even more surprising is that in a rare move for a company that likes keeping its tech siloed neatly inside the walls of its ecosystem, Apple shared core parts of the MagSafe spec with other members of the Wireless Power Consortium (the governing body that oversees the Qi and Qi2 standards) to speed up development and interoperability. So you'd think that after seeing the convenience and popularity of MagSafe accessories among iPhone users, Android phone makers would have rushed to add Qi2 to as many devices as possible. But nearly two full years after the spec was finalized, the grand total of Android handsets that support Qi2 stands at one: the HMD Skyline.

At this point, you might be saying that product development cycles are multi-year processes that are difficult to change prior to launch. And in most cases, you'd probably be right. But let's be honest: it's not like Samsung, Google, Lenovo and others didn't see this coming. Like Apple, practically all of the big Android phone makers are also members of the WPC, so they would have known about the development of Qi2 long before it was officially announced. On top of that, the first iPhone with MagSafe was the iPhone 12, which came out four years ago. So even if we assume that the first time Samsung, Google et al. were presented with the idea of a magnetic wireless charging system was during Apple's keynote in the fall of 2020, you'd imagine that's still more than enough time to engineer similar technology for use on today's Galaxy and Pixel handsets.

The HMD Skyline was the only Android phone to feature Qi2 this year. Photo by Sam Rutherford

For manufacturers, another concern when adopting a new standard is that there may not be enough accessories and other compatible peripherals on sale to make implementing new tech worth it. We've seen this in the past with modular phones like the LG G5 and Moto Z Force line and the funky palm-reading tech on the LG G8. However, because Qi2 and MagSafe gadgets are largely interchangeable, there's already a huge market of options like Anker's MagGo line of power banks, which are some of my current favorite portable battery packs.

Another annoyance is that some phones like the Razr Plus and Pixel 9 Pro Fold will even stick magnetically to some Qi2 accessories and may even suck down a tiny bit of juice. Unfortunately, this is more of a coincidence caused by the magnets used to help keep foldables open or closed than an intentional use case. This means that even though these devices may appear to support Qi2 at first glance, accessories don't maintain a firm grip and often slide off, even in what appear to be ideal circumstances. Even cases that claim to add support for Qi2 are hit or miss, resulting in a poor experience for Android phone owners hoping to recreate the magic of MagSafe on their own. It's really a shame, because it almost feels like with a few small tweaks Google, Moto and others could have unlocked Qi2 support on a wider range of devices without a ton of extra effort or cost.

The lack of Qi2 support on Android phones is preventing users from enjoying a huge range of handy charging accessories. Photo by Sam Rutherford/Engadget

Unfortunately, while many Chinese phone makers have avoided Qi2 up until this point, that's sort of to be expected, with manufacturers like Oppo often favoring proprietary tech like its 65-watt AirVOOC wireless charging over a more widely accessible industry standard. And because the Galaxy S24 family came out at the very beginning of 2024, Samsung didn't have quite as much time to add Qi2 to its current flagship lineup as Google, which launched the Pixel 9 series just a few months ago. Regardless, this still doesn't explain the general reluctance of OEMs to adopt what I'd argue is one of the most meaningful upgrades in accessibility and general usability you can add to a smartphone today.

But the most frustrating thing is that six months ago, our friends at CNET pondered why we had yet to see any Qi2 Android phones. And as we near the end of the year, there's still only a single model trying to spark hope that 2025 will be different. So kudos to HMD for doing what Samsung, Google et al. couldn't be bothered to figure out. Now I'm just worried that if things don't change next year, one of the most promising standards could end up in the graveyard (at least for Android phones) before ever getting a chance to thrive.

This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/android-phone-makers-dropped-the-ball-on-qi2-in-2024-191029769.html?src=rss
  • WWW.TECHRADAR.COM
    Chrome could get a massive AI upgrade if this rumor is true
Google is rumored to be adding Gemini Live to the Chrome browser.
  • WWW.TECHRADAR.COM
Beyoncé wins the holidays with an ultra-clever Netflix joke
    How well will Netflix handle Beyoncé's halftime show? She has a thought.
  • WWW.FASTCOMPANY.COM
    NORAD live tracker: How to follow Santa Claus in real time as he delivers Christmas gifts across the globe
He's made a list, he's checked it twice, and now Santa Claus is working his way around the world on his busiest day of the year. And with Amazon workers on strike and American Airlines flights being briefly grounded on Christmas Eve, holiday revelers might just need the extra help to get their Christmas gifts on time. Each year, NORAD, the North American Aerospace Defense Command, provides real-time tracking of Santa and his reindeer starting at 6 a.m. ET on Christmas Eve, when Saint Nick leaves the North Pole and starts his journey by sleigh to deliver hundreds of millions of gifts to children. (As of this writing, Santa was in the Indian Ocean making his way west.)

To see where Father Christmas is at any given moment, simply go to NORAD's website and click on "View Santa's route on a 2D map," then look for Santa's big red hat icon. Other cool features are a running tally of all the gifts he's delivered (2,681,010,867 at this writing), in addition to camera icons along his route that link to a Wikipedia summary of the locale.

A brief history of high-tech Santa tracking

In case you're wondering: The tradition dates back to 1955, when a young child mistakenly called a Colorado military command center asking to speak to Santa. According to NORAD, the commander on duty that night assured the child he was, indeed, Santa. And after more incoming calls, Air Force Colonel Harry Shoup assigned a duty officer to continue monitoring the phones, a tradition that has continued ever since.

Wait, can kids still call and speak to Santa?

Yes! Today, inquisitive kids and their parents can dial 1-877-HI-NORAD (1-877-446-6723) to speak to Santa. Each year, volunteers typically receive more than 130,000 calls. The more tech-savvy, and phone-shy, among us can simply track Santa on the NORAD Tracks Santa app, on Instagram, OnStar, Amazon Alexa and SiriusXM.

Wishing you a happy holiday from all of us at Fast Company. Now, go put out those cookies for Santa. He's literally on his way!
  • WWW.FASTCOMPANY.COM
    AI-assisted wildlife surveillance is eavesdropping on endangered spider monkeys
The endangered Geoffroy's spider monkeys that dangle high in the rainforest canopy are elusive and hard for scientists to track. So biologist Jenna Lawson hid 350 audio monitors in trees across Costa Rica's lush Osa Peninsula to spy on them. The devices recorded the sounds of the forest and surrounding countryside for a week, collecting so much data that Lawson could have spent years listening to it all. Instead, she fed it into artificial intelligence systems trained to instantly recognize spider monkey calls and detect where the animals traveled. One of the world's largest acoustic wildlife studies when Lawson began the project in 2021, it revealed troubling findings about the health of a treasured wildlife refuge.

More of this AI-assisted wildlife surveillance is urgently needed as some 28% of all plant and animal species are now at risk of extinction, according to a paper published in the academic journal Science this summer. Researchers from Dutch and Danish universities showed that machine-learning techniques can handle huge amounts of data and uncover sound patterns, allowing for faster, cheaper and better ecological studies that can aid in biodiversity conservation. But many technical challenges remain.

Tech giant Microsoft's philanthropic AI for Good Lab announced this month that it is hoping to answer some of those technical challenges with a new kind of hardware and computing system for eavesdropping on the planet's wildest places. "Those remote places are also the most important places on the Earth from a biodiversity perspective," said Microsoft's chief data scientist, Juan Lavista Ferres, in an interview last week by video call from Colombia, where a research team was preparing to test the new approach.

Powered by the sun and energy-efficient AI computer chips, the devices can run for years rather than weeks without human intervention. And they can regularly transmit their data online via low-Earth orbit satellites. It's called Sparrow, short for Solar-Powered Acoustic and Remote Recording Observation Watch.

Pablo Arbelaez, director of an AI-focused research center at the University of the Andes, said a first Sparrow test will happen in a jungle preserve along Colombia's largest river, the Magdalena. Eventually, the researchers hope to get a better idea of how deforestation, and efforts to reverse it, is affecting the population behavior of jaguars, blue-beaked paujil birds, spider monkeys and other endangered species.

Another project closer to Microsoft headquarters will monitor forests in Washington state's Cascade Mountains. By late 2025, Lavista Ferres plans to have devices on all continents, from remote corners of the Amazon rainforest to gorilla habitats of the Democratic Republic of the Congo. The data will then be open-sourced to make it accessible to a wide body of researchers in real time, but with measures to obscure sensitive location data. "What we don't want is these devices to ever be used for poachers to understand where the animals are," Lavista Ferres said.

It was a concern about encroachments on Costa Rican spider monkey habitat that led Lawson, then at Imperial College London, to undertake her ambitious bio-acoustic study three years ago. She persuaded landowners to let her place recording devices on their properties outside Corcovado National Park, a jewel of Costa Rica's decades-long efforts to preserve biodiversity by encouraging wildlife tourism. "She basically realized the spider monkey is in a really critical situation," said local environmentalist and bug scientist Jim Córdoba-Alfaro. On a follow-up visit last year, he and Lawson trekked across a private reserve with an Associated Press reporter to observe the monkeys and check on the audio monitors.

Compared to the charismatic capuchin monkey and the notoriously loud howler monkey (both commonly seen or heard throughout Costa Rica), spider monkeys are far more wary of humans and the changes they bring. "They're the most sensitive of the primates that we have here," said Lawson. "The spider monkey would be the first animal to leave when there's signs of trouble. They would be the last animal to come back once forests are restored because they need mature secondary and primary forest to be able to survive."

The Royal Society of London in March 2023 published Lawson's findings of what the audio monitors revealed: the spider monkeys weren't going anywhere near paved roads or the plantations harvesting palm oil and teak wood that bisect the region's protected national parks. That meant government-designated wildlife corridors meant to extend their range through and beyond the Osa Peninsula were not working as well as designed. She came back to present those conclusions to local officials.

After hours of searching, a troop of spider monkeys appeared, peering down at the humans who found them. Within moments, they were on their way again, extending their lanky arms and prehensile tails to grasp at trees and propel themselves across the canopy with spidery acrobatics.

Unattended acoustic detection of animal sounds is valuable not just in rainforests but in a wide variety of ecosystems, according to the Science paper published earlier this year. For example, it could help sailors avoid colliding their ships with large baleen whales heard to be passing through a shipping channel. Lavista Ferres said there are still numerous challenges to overcome, from humidity that can fray jungle monitors to elephants in African savannas unintentionally knocking them off a tree.

Lawson said using the audio monitors to capture the spider monkeys' distinctive "whinny" enables biologists to study a larger area at lower cost, but also provides a truer account of how the monkeys behave without scientists following them around. "We're reducing our influence on their behavior," she said. "And also, they don't want us here."

Matt O'Brien, AP technology writer
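The article doesn't show Lawson's detection pipeline, but a typical bioacoustic workflow looks roughly like the sketch below: slice long field recordings into short windows, convert each window to a mel spectrogram, and score it with a trained call classifier. Everything here is a hypothetical illustration; the `score_window` stub, file name and threshold are assumptions, not details from the study.

```python
# Hypothetical sketch of a bioacoustic detection pass (not Lawson's actual pipeline).
# Assumes librosa and numpy are installed; a real system would plug a trained
# classifier into score_window().
import numpy as np
import librosa

WINDOW_S = 5.0      # score the recording in 5-second windows
THRESHOLD = 0.8     # hypothetical confidence cutoff for a "whinny" detection

def score_window(mel_db: np.ndarray) -> float:
    # Placeholder: a real deployment would run a trained call classifier here.
    # This dummy score is just the fraction of loud mel bins, so the sketch runs.
    return float((mel_db > -30.0).mean())

def detect_calls(path: str) -> list[float]:
    audio, sr = librosa.load(path, sr=22050, mono=True)
    hop = int(WINDOW_S * sr)
    detections = []
    for start in range(0, len(audio) - hop, hop):
        window = audio[start:start + hop]
        # Mel spectrograms in dB are a common input representation for call classifiers.
        mel = librosa.feature.melspectrogram(y=window, sr=sr, n_mels=64)
        mel_db = librosa.power_to_db(mel, ref=np.max)
        if score_window(mel_db) >= THRESHOLD:
            detections.append(start / sr)  # offset (seconds) of each detection
    return detections

print(detect_calls("osa_site_042.wav"))  # hypothetical recording file
```

The appeal of this design is that the expensive part (a week of audio per monitor, times 350 monitors) reduces to an embarrassingly parallel batch job, which is what makes studies at this scale feasible at all.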
  • WWW.FASTCOMPANY.COM
Biden to decide Nippon Steel's bid for U.S. Steel after a panel deadlocks on national security risks
A powerful government panel on Monday failed to reach consensus on the possible national security risks of a nearly $15 billion proposed deal for Nippon Steel of Japan to purchase U.S. Steel, leaving the decision to President Joe Biden, who opposes the deal. The Committee on Foreign Investment in the United States, known as CFIUS, sent its long-awaited report on the merger to Biden, who formally came out against the deal in March. He has 15 days to reach a final decision, the White House said. A U.S. official familiar with the matter, speaking on condition of anonymity to discuss the private report, said some federal agencies represented on the panel were skeptical that allowing a Japanese company to buy an American-owned steelmaker would create national security risks. Monday was the deadline to approve the deal, recommend that Biden block it or extend the review process.

Both Biden and President-elect Donald Trump have courted unionized workers at U.S. Steel and vowed to block the acquisition amid concerns about foreign ownership of a flagship American company. The economic risk, however, is giving up Nippon Steel's potential investments in the mills and upgrades that might help preserve steel production within the United States.

Under the terms of the proposed $14.9 billion all-cash deal, U.S. Steel would keep its name and its headquarters in Pittsburgh, where it was founded in 1901 by J.P. Morgan and Andrew Carnegie. It would become a subsidiary of Nippon Steel, and the combined company would be among the top three steelmakers in the world, according to 2023 figures from the World Steel Association.

Biden, backed by the United Steelworkers, said earlier this year that it was "vital for (U.S. Steel) to remain an American steel company that is domestically owned and operated." Trump has also opposed the acquisition and vowed earlier this month on his Truth Social platform to block the deal from happening. He proposed reviving U.S. Steel's flagging fortunes through a series of "Tax Incentives and Tariffs."

The steelworkers union questions whether Nippon Steel would keep jobs at unionized plants, make good on collectively bargained benefits or protect American steel production from cheap foreign imports. "Our union has been calling for strict government scrutiny of the sale since it was announced. Now it's up to President Biden to determine the best path forward," David McCall, the steelworkers' president, said in a statement Monday. "We continue to believe that means keeping U.S. Steel domestically owned and operated."

Nippon Steel and U.S. Steel have waged a public relations campaign to win over skeptics. U.S. Steel said in a statement Monday that the deal "is the best way, by far, to ensure that U.S. Steel, including its employees, communities, and customers, will thrive well into the future." Nippon Steel said Tuesday that it had been informed by CFIUS that it had referred the case to Biden, and urged him to reflect on "the great lengths that we have gone to to address any national security concerns that have been raised and the significant commitments we have made to grow U.S. Steel, protect American jobs, and strengthen the entire American steel industry, which will enhance American national security." "We are confident that our transaction should and will be approved if it is fairly evaluated on its merits," it said in a statement.

A growing number of conservatives have publicly backed the deal, as Nippon Steel began to win over some steelworkers union members and officials in areas near its blast furnaces in Pennsylvania and Indiana. Many backers said Nippon Steel has a stronger financial balance sheet than rival Cleveland-Cliffs to invest the necessary cash to upgrade aging U.S. Steel blast furnaces. Nippon Steel pledged to invest $2.7 billion in United Steelworkers-represented facilities, including U.S. Steel's blast furnaces, and promised not to import steel slabs that would compete with the blast furnaces. It also pledged to protect U.S. Steel in trade matters and not to lay off employees or close plants during the term of the basic labor agreement. Earlier this month, it offered $5,000 in closing bonuses to U.S. Steel employees, a nearly $100 million expense. Nippon Steel also said it was best positioned to help American steel compete in an industry dominated by the Chinese.

The proposed sale came during a tide of renewed political support for rebuilding America's manufacturing sector, a presidential campaign in which Pennsylvania was a prime battleground, and a long stretch of protectionist U.S. tariffs that analysts say has helped reinvigorate domestic steel.

Chaired by Treasury Secretary Janet Yellen, CFIUS screens business deals between U.S. firms and foreign investors and can block sales or force parties to change the terms of an agreement to protect national security. Congress significantly expanded the committee's powers through the 2018 Foreign Investment Risk Review Modernization Act, known as FIRRMA. In September, Biden issued an executive order broadening the factors the committee should consider when reviewing deals, such as how they impact the U.S. supply chain or whether they put Americans' personal data at risk.

Nippon Steel has factories in the U.S., Mexico, China and Southeast Asia. It supplies the world's top automakers, including Toyota Motor Corp., and makes steel for railways, pipes, appliances and skyscrapers.

Josh Boak, Marc Levy and Ashraf Khalil, Associated Press. Associated Press writer Fatima Hussein contributed to this report.
  • WWW.YANKODESIGN.COM
    Architecture for Dogs exhibit showcases creative habitat designs for fur babies
Over the past years, we've seen dogs play a bigger part in their humans' lifestyles. They're no longer just pets but are already part of families, with their owners calling themselves "fur parents." We've also seen more products in the market for them, and not all of them are merely functional. A lot of thought has gone into the designs for some of these products, including dog houses.

Designer: Kenya Hara (curator)

The Architecture for Dogs exhibition is one such proof of the importance that we're giving to our canine friends. Its latest stop is at Milan's ADI Design Museum, where it shows off various ramps, cushions, mats, benches and, of course, kennels and shelters that were designed specifically for certain breeds to strengthen their bonds with their humans. These designs are also available to download for free so that users can build their own versions of these architectures and adapt them to their dogs' needs.

The pieces in the exhibit are pretty interesting and unique. The Cloud was created by Reiser + Umemoto as a second skin for a chihuahua, protecting the dog from the cold and offering general protection for its bones. It actually looks like a dress but is designed as a climatic buffer. Konstantin Grcic designed a bed for a toy poodle that has a mirror, since owners have said their pets respond to mirrors. There is also a sustainable aspect to some of the designs, like Shigeru Ban's maze and bed for a papillon (or continental toy spaniel), since it's made from connected cardboard tubes.

Since it's the exhibit's Italian debut, two contributions from local designers were also added. Giulio Iacchetti created a round, plywood-panelled kennel specifically for an Italian greyhound, looking like a tent complete with a red velvet cushion and a small scarlet flag on top. Piero Lissoni, meanwhile, crafted a plywood and aluminum kennel for a Yorkiepoo, inspired by, of all things, an airport hangar.

The post Architecture for Dogs exhibit showcases creative habitat designs for fur babies first appeared on Yanko Design.
  • TOWARDSAI.NET
TAI 131: OpenAI's o3 Passes Human Experts; LLMs Accelerating With Inference Compute Scaling
Author(s): Towards AI Editorial Team. Originally published on Towards AI.

What happened this week in AI by Louie

OpenAI wrapped up its "12 Days of OpenAI" campaign and saved the best till last with the reveal of its o3 and o3-mini reasoning models. These models are successors to the o1 series and are debatably the largest step-change improvement yet in LLM capabilities on complex tasks, for the first time eclipsing human experts in many domains. The o3 release drowned out the otherwise significant launch of Google Gemini's 2.0 Flash Thinking Mode, Google's first reasoning model (in the style of o1/o3), which, unlike OpenAI's, doesn't hide its thinking tokens.

There is a huge amount to unpack in the o3 release. The model sailed past human expert scores on many key advanced benchmarks, including coding, mathematics and PhD science. Perhaps most noteworthy was the breakthrough on the ARC-AGI benchmark, where LLMs have traditionally failed and only achieved average scores even with heavy scaffolding and brute force: o3 (low efficiency) achieved 87.5%, vs. o1's 32% just a week earlier and GPT-4o's 5% in May. This score is considered human-level, further fueling debates over whether o3 edges closer to Artificial General Intelligence (AGI). Some of the best scores do come at a huge cost, however: o3 in low-efficiency mode (1,024 samples) costs around $3,400 per task, roughly 170x the ~$20 for o3 high efficiency (6 samples, which achieved 75.7%) and vs. ~$3 for o1.

On the GPQA Diamond test, designed for PhD-level science questions, o3 scored 87.7%, compared to the 78% achieved by o1. For context, PhD holders with internet access typically score between 34% (outside their specialty) and 81% (within their domain). In coding, o3's Elo rating of 2727 on Codeforces puts it in the 99.95th percentile of competitive programmers, far exceeding the reach of most human professionals. Mathematics is another area where o3 shines, achieving 96.7% accuracy on the American Invitational Mathematics Exam (AIME), up from o1's 83.3% and just 13.4% for 4o only months earlier.

This release didn't only come with a huge cost escalation (1,000x for some tasks) but also the promise of huge cost savings! Due to success with model distillation and other techniques, o3-mini outperforms the much larger o1 model released just last week on many coding and maths tasks. For example, o3-mini with medium compute achieved a much stronger Codeforces Elo of 1997 vs. o1's 1891, at what we eyeball as a ~70-80% lower total cost.

How do the models work? OpenAI still hasn't disclosed much beyond the fact that the models use reinforcement learning to improve their reasoning during training. However, employees have posted that they are still "just" LLMs and use autoregression. We think the model is trained to be highly efficient at chain-of-thought reasoning: exploring the most likely paths and realizing when it has made a mistake. We think the rapid progress in just 3 months between o1 and o3 comes primarily from using synthetic data from o1's full chain-of-thought thinking tokens to add to the reinforcement learning dataset used for training. On the other hand, we expect the initial o1 mostly used a smaller set of human-expert-commissioned reasoning examples (which are missing from pre-training because people almost never type out their full internal monologue and reasoning process and instead skip to the answers!).
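If that speculation is right, the data-collection loop is conceptually simple: sample several reasoned answers per problem and keep only the traces whose final answer matches a verified solution. Below is a minimal, hypothetical sketch of that rejection-sampling recipe; `generate_cot` is a stand-in for any reasoning-model API call, and nothing here reflects OpenAI's actual pipeline.

```python
# Hypothetical sketch: building a synthetic reasoning dataset by rejection sampling.
# `generate_cot` stands in for a call to any reasoning model; this is NOT
# OpenAI's actual pipeline, just the commonly discussed recipe.
import random

def generate_cot(problem: str) -> tuple[str, str]:
    """Stand-in: returns (chain_of_thought, final_answer) from a reasoning model."""
    answer = random.choice(["42", "17"])  # dummy output so the sketch runs
    return f"step-by-step reasoning for {problem!r}", answer

def harvest_traces(problems: dict[str, str], samples_per_problem: int = 8) -> list[dict]:
    """Keep only traces whose final answer matches the known-correct solution."""
    dataset = []
    for problem, verified_answer in problems.items():
        for _ in range(samples_per_problem):
            cot, answer = generate_cot(problem)
            if answer == verified_answer:  # cheap automatic verification step
                dataset.append({"problem": problem, "reasoning": cot, "answer": answer})
    return dataset

traces = harvest_traces({"What is 6 * 7?": "42"})
print(f"kept {len(traces)} verified reasoning traces")
```

The key property is that verification (checking a final answer) is far cheaper than generation, so a weaker or earlier model can bootstrap training data for a stronger successor.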
It is also possible that o3 was built using a different, more advanced base foundation model (o1 likely used 4o), perhaps GPT-4.5 or a checkpoint of the rumored Orion or GPT-5 model, leading to additional benefits.

One interesting note on the new regime of inference-time compute scaling is that OpenAI appears to be scaling thinking tokens both in series (up to ~100k reasoning tokens in its context window) and in parallel, with 6 samples (high efficiency) or 1,024 samples (low efficiency) used in the ARC-AGI evaluation. It is unclear how the best answer is chosen from these. It could be simple majority voting, but more likely there is complexity and extra secret sauce here in how the best samples are automatically and rapidly searched, evaluated and chosen. We think it is possible some form of this parallel scaling could also be taking place in the o1-Pro model (available within the $200/month ChatGPT Pro).

OpenAI models' rapid breakthroughs on complex benchmarks this year. Source: Towards AI, OpenAI disclosures.

The models have not yet been released, and the rollout schedule is still dependent on safety testing. o3-mini is slated for release in late January 2025, with o3 following shortly after. Researchers can apply for early access to test the models, with an application deadline of January 10th, 2025. Pricing has also yet to be announced.

Why should you care?

So what does this all mean? LLMs can now perform to human expert standards at many tasks, and these breakthroughs were achieved at an accelerating pace. Will the inference-time compute scaling paradigm continue to deliver new generations every 3 months, relative to the 1-2 years for the training-time scaling regime? How will these models perform in the real world beyond their benchmarks? Will o3 models rapidly begin to transform the global economy and disrupt huge numbers of jobs, or is the cost too large a bottleneck to adoption? On which tasks will it be worth spending 170x more compute for incrementally better performance (as with ARC-AGI)? Is this model AGI already? Do you need to find a new career?

While we don't think this model is AGI yet (a term which has wildly differing definitions in any case), we think this model is hugely significant and should be on the front page of all newspapers. It suggests that deep learning and the LLM paradigm don't have any obvious limits. Far from the slowdown and failures of new model generations covered in the media, progress is faster than it has ever been on the most complex benchmarks. My key takeaway is that if we can develop a benchmark, or generate a few or a few hundred detailed reasoning examples for a task category of human work, we can solve it together with extra synthetic reasoning data. (This doesn't yet apply to physical labor, but AI-based robotics is also rapidly progressing!) The price of o3 will be a large barrier initially, but we expect large improvements in cost and particularly in the efficiency of running parallel samples. The o3-mini also appears to be a game changer; however, the huge cost savings will likely come at the cost of more narrow capabilities.

To achieve products with high enough reliability and affordability for mass adoption, we still think a large amount of work will be needed from LLM developers to optimize and customize these models to specific industries and niche tasks, including gathering industry-specific data, creating reasoning data, and creating your own evaluations.
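One of the parallel-scaling techniques discussed above, simple majority voting over sampled answers (often called self-consistency), is something LLM developers can already implement themselves to trade compute for reliability. Here is a minimal, hypothetical sketch; `ask_model` is a stand-in for any LLM API call made at a sampling temperature above zero, not a real OpenAI endpoint.

```python
# Minimal sketch of majority voting ("self-consistency") over parallel samples.
# `ask_model` is a hypothetical stand-in for any LLM API call; with temperature > 0,
# repeated calls can disagree, and voting filters out one-off reasoning errors.
import random
from collections import Counter

def ask_model(prompt: str) -> str:
    """Stand-in: one sampled final answer from a reasoning model."""
    return random.choice(["42", "42", "42", "17"])  # dummy answer distribution

def majority_vote(prompt: str, n_samples: int = 6) -> str:
    """Sample n answers independently and return the most common one."""
    answers = [ask_model(prompt) for _ in range(n_samples)]
    winner, count = Counter(answers).most_common(1)[0]
    print(f"{count}/{n_samples} samples agreed on {winner!r}")
    return winner

final = majority_vote("What is 6 * 7? Answer with a number only.")
```

Production systems likely replace this naive vote with learned verifiers or search over reasoning traces, which is where the "secret sauce" we speculate about above would live.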
With Google Gemini also joining the reasoning model race this week, and with open-source reasoning models from Alibaba's Qwen and DeepSeek in China, we expect competition to drive affordability and developer customization options for these models. OpenAI has already announced it will release reinforcement learning-based reasoning fine-tuning options, and we think, eventually, there will also be reasoning model distillation options to customize larger models into smaller forms. So there is no better time to become an LLM developer with our own 80+ lesson Python course and learn to harness these models!

Hottest News

1. OpenAI Announces OpenAI o3

OpenAI announced OpenAI o3, the latest model in its o-series of reasoning models. Building on its predecessors, o3 showcases huge leaps in mathematical and scientific reasoning, prompting discussions about its capabilities and constraints.

2. xAI Raises $6B Series C

Elon Musk's xAI announced it raised $6 billion in a Series C funding round, bringing its value to more than $40 billion. The company said the funding would be allocated to products and infrastructure, including its Grok AI model and the multibillion-dollar supercomputer site used to train its AI models. The Colossus supercomputer scaled to 100,000 NVIDIA Hopper GPUs in record time, and xAI plans to soon add another 100k.

3. OpenAI Is Offering 1 Million Free Tokens for GPT-4o and o1

A user on X highlighted that OpenAI seems to be offering 1 million free tokens for GPT-4o and o1 if you share your API usage with them for training. Users can get up to 10 million tokens per day on traffic shared with OpenAI on smaller models. This is similar to Google Gemini's free-tier strategy for its API, where data can be used for training. We think the race for user data has become even more critical given the success of reasoning models, where OpenAI could use thinking tokens from users' o1 model prompts to expand its reinforcement learning datasets.

4. Google Releases Its Own Reasoning AI Model

Google has released Gemini 2.0 Flash Thinking Mode, an experimental model trained to generate the thinking process the model goes through as part of its response. Thinking models are available in Google AI Studio and through the Gemini API.

5. Microsoft AI Research Open-Sources PromptWizard

Researchers from Microsoft Research India have developed and open-sourced PromptWizard, an innovative AI framework for optimizing prompts in black-box LLMs. This framework employs a feedback-driven critique-and-synthesis mechanism to iteratively refine prompt instructions and in-context examples, enhancing task performance. PromptWizard operates through two primary phases: a generation phase and a test-time inference phase.

6. The Technology Innovation Institute in Abu Dhabi Released the Falcon 3 Family of Models

The UAE government-backed Technology Innovation Institute (TII) has announced the launch of Falcon 3, a family of open-source small language models (SLMs) designed to run efficiently on lightweight, single-GPU-based infrastructure. Falcon 3 features four model sizes (1B, 3B, 7B and 10B) with base and instruction variants. According to the Hugging Face leaderboard, the models are already outperforming or closely matching popular open-source counterparts in their size class, including Meta's Llama and category leader Qwen-2.5.

7. Salesforce Drops Agentforce 2.0

Salesforce announced Agentforce 2.0, the newest version of Agentforce, the first digital labor platform for enterprises.
This release introduces a new library of pre-built skills and workflow integrations for rapid customization, the ability to deploy Agentforce in Slack, and advancements in agentic reasoning and retrieval-augmented generation (RAG).

8. Patronus AI Open-Sources Glider: A 3B State-of-the-Art Small Language Model (SLM) Judge

Patronus AI has introduced Glider, a general-purpose 3.8B evaluation model. This open-source evaluator model provides quantitative and qualitative feedback for text inputs and outputs. It acts as a fast, inference-time guardrail for LLM systems, offering detailed reasoning chains and highlighting key phrases to enhance interpretability. Glider is built upon the Phi-3.5-mini-instruct base model and has been fine-tuned on diverse datasets spanning 685 domains and 183 evaluation criteria.

Five 5-minute reads/videos to keep you learning

1. Alignment Faking in Large Language Models

Alignment faking is where someone appears to share our views or values but is, in fact, only pretending to do so. A new paper from Anthropic's Alignment Science team, in collaboration with Redwood Research, provides the first empirical example of a large language model engaging in alignment faking without having been explicitly trained or instructed to do so.

2. AI Safety on a Budget: Your Guide to Free, Open-Source Tools for Implementing Safer LLMs

This blog shares some free AI safety tools, covering everything from guardrails that steer chatbots away from disaster to datasets that help identify toxic content. It also provides insights into the AI safety landscape and how to navigate it, especially on a budget.

3. Fine-Tuning LLMs for RAG

This video explains why and when you should fine-tune your LLM in a RAG system, a concept useful for today's AI engineers working with LLMs.

4. The Real Reason Your Company's AI Isn't Working (Hint: It's Not the Technology)

The underlying reason many companies struggle to make AI tools work is not the technology itself. The real challenge lies in organizational structures, cultural resistance, a lack of proper training, and insufficient time allocated for exploration. This article presents some thoughts on addressing these issues, such as investing in leadership support, encouraging cultural change, offering tailored training sessions, and fostering an environment of experimentation.

5. Introducing ReAct LLM Agents: A Secret to More Capable AI

A ReAct agent is a special type of AI agent that uses both reasoning and acting to solve the tasks or problems we assign. This article explores the concept, presents use-case examples, and explains how it has the potential to make AI more capable.

Repositories & Tools

Anthropic Cookbook provides code and guides designed to help developers build with Claude.
Genesis is a physics platform for general-purpose robotics/embodied AI/physical AI applications.
Picotron is a minimalist repository for pre-training Llama-like models with 4D parallelism.
Helicone is an open-source LLM observability platform.

Top Papers of The Week

1. Qwen2.5 Technical Report

This report introduces Qwen2.5, a comprehensive series of LLMs designed to meet diverse needs. Compared to previous iterations, Qwen2.5 has been significantly improved during both the pre-training and post-training stages. The pre-training dataset has been scaled from the previous 7 trillion tokens to 18 trillion tokens, and post-training implements intricate supervised fine-tuning with over 1 million samples and multistage reinforcement learning.
2. Byte Latent Transformer: Patches Scale Better Than Tokens

This paper introduces the Byte Latent Transformer (BLT), a new byte-level LLM architecture that matches tokenization-based LLM performance at scale with significant improvements in inference efficiency and robustness. BLT encodes bytes into dynamically sized patches, which serve as the primary units of computation. Patches are segmented based on the entropy of the next byte, allocating more compute and model capacity where increased data complexity demands it.

3. Deliberative Alignment: Reasoning Enables Safer Language Models

This paper introduces deliberative alignment, a training paradigm that directly teaches reasoning LLMs the text of human-written and interpretable safety specifications. It trains them to reason explicitly about these specifications before answering. OpenAI used deliberative alignment to align its o-series models, enabling them to use chain-of-thought (CoT) reasoning to reflect on user prompts, identify relevant text from OpenAI's internal policies, and draft safer responses.

4. Fully Open Source Moxin-7B Technical Report

This paper introduces Moxin 7B, a fully open-source LLM developed in accordance with the Model Openness Framework (MOF), a ranked classification system that evaluates AI models based on model completeness and openness, adhering to the principles of open science, open source, open data and open access. Experiments show that the model performs better in zero-shot evaluation than popular 7B models.

5. RAGBench: Explainable Benchmark for Retrieval-Augmented Generation Systems

This paper introduces RAGBench, a comprehensive, large-scale RAG benchmark dataset of 100k examples. It covers five unique industry-specific domains and various RAG task types. RAGBench examples are sourced from industry corpora, such as user manuals, making it particularly relevant for industry applications.

6. CosyVoice 2: Scalable Streaming Speech Synthesis with Large Language Models

This paper presents CosyVoice 2, an improved version of the CosyVoice streaming speech synthesis model that incorporates comprehensive and systematic optimizations. It introduces finite-scalar quantization to improve the codebook utilization of speech tokens and streamlines the model architecture to allow direct use of a pre-trained LLM. Additionally, it uses a chunk-aware causal flow matching model to support various synthesis scenarios.

Quick Links

1. OpenAI brings ChatGPT to your landline. Call 1-800-242-8478, and OpenAI's AI-powered assistant will respond as of Wednesday afternoon. The experience is more or less identical to Advanced Voice Mode: ChatGPT responds to the questions users ask over the phone and can handle tasks such as translating a sentence into a different language.

2. Google is expanding Gemini's latest in-depth research mode to 40 more languages. The company launched the in-depth research mode earlier this month, allowing Google One AI Premium plan users to unlock an AI-powered research assistant.

3. GitHub has launched GitHub Copilot Free, an accessible version of its popular AI-powered coding assistant with limits.
The new free tier for VS Code aims to expand the AI-powered code completion assistant's reach to a broader audience of developers, namely those with only light usage needs and tighter budgets.

Who's Hiring in AI

Applied AI Finetuning Engineer @Anthropic (Multiple US locations)
Generative AI for Test Case Generation Master Thesis Opportunity @IBM (Frankfurt/Germany)
Generative AI Engineer @CAI (Remote)
AI Strategist @Navy Federal Credit Union (Multiple US locations)
New College Grad, Hardware Integration Engineer @Western Digital (San Jose, CA, USA)
Software Development Engineer @Siemens Digital Industries Software (New Cairo, Al Qahirah, Egypt)

Interested in sharing a job opportunity here? Contact [email protected].

Think a friend would enjoy this too? Share the newsletter and let them join the conversation. Join over 80,000 subscribers and keep up to date with the latest developments in AI, from research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor. Published via Towards AI
  • TOWARDSAI.NET
    Getting Started With Agentic Workflows
December 24, 2024. Author(s): Omer Mahmood. Originally published on Towards AI.

Moving beyond AI tools to automating high-value processes!

This member-only story is on us. Upgrade to access all of Medium.

Image created for free use at ideogram.ai (see Alt text for prompt)

Reader Audience: AI beginners, familiar with popular models, tools and their applications
Level: Intermediate topic, combining several core concepts
Complexity: Easy to digest, no mathematical formulas or complex theory here

One of the hottest topics in AI in recent times is agents. They are essentially the next iteration of LLMs (large language models): capable of taking a prompt and then carrying out specific tasks, with some understanding or context of the outside world, to achieve some goal, without the need for human supervision.

For example, Anthropic recently announced that it had taught its Claude AI model to complete a range of tasks on a computer, such as searching the web, opening applications and even inputting text using the keyboard and mouse.

Although agents are still in the early stages of what's possible, the concept of having a symphony of multiple agents (with different capabilities) collaborating to complete independent, complex tasks, or workflows, doesn't seem too far-fetched. The definition of "agentic" is used to describe something that exhibits the behaviour of an... Read the full blog for free on Medium.

Join thousands of data leaders on the AI newsletter. Join over 80,000 subscribers and keep up to date with the latest developments in AI, from research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor. Published via Towards AI
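The excerpt cuts off before showing what an agentic workflow looks like in practice. As a rough illustration of the idea (a sketch under stated assumptions, not code from the original post), here is a minimal tool-using agent loop in Python; `call_llm`, the tool registry and the `TOOL:`/`ANSWER:` reply convention are all hypothetical stand-ins.

```python
# Minimal, hypothetical sketch of an agentic loop: the model picks a tool,
# we execute it, feed the observation back, and repeat until it answers.
# `call_llm` is a stand-in for any LLM API; not from the original post.

def call_llm(history: list[str]) -> str:
    """Stand-in: the model replies either 'TOOL:<name>:<arg>' or 'ANSWER:<text>'."""
    return "ANSWER:done"  # dummy reply so the sketch runs end to end

TOOLS = {
    "search_web": lambda query: f"(pretend search results for {query!r})",
    "open_app":   lambda name:  f"(pretend {name} opened)",
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        reply = call_llm(history)
        if reply.startswith("ANSWER:"):
            return reply.removeprefix("ANSWER:")
        # e.g. "TOOL:search_web:latest Qi2 phones" -> execute and record the result
        _, tool, arg = reply.split(":", 2)
        history.append(f"OBSERVATION: {TOOLS[tool](arg)}")
    return "gave up after max_steps"

print(run_agent("find the latest Qi2 phones"))
```

The `max_steps` cap and the explicit observation history are the two design choices that make loops like this debuggable: every decision the model makes is visible in `history`, and a confused model cannot run forever.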