• The AI talent wars are just getting started
    www.theverge.com
For my last issue of the year, I'm focusing on the AI talent war, which is a theme I've been covering since this newsletter launched almost two years ago. And keep reading for the latest from inside Google and Meta this week.

But first, I need your questions for a mailbag issue I'm planning for my first issue of 2025. You can submit questions via this form or leave them in the comments.

It's like looking for LeBron James

This week, Databricks announced the largest known funding round for any private tech company in history. The AI enterprise firm is in the final stretch of raising $10 billion, almost all of which is going to go to buying back vested employee stock.

How companies approach compensation is often undercovered in the tech industry, even though the strategies play a crucial role in determining which company gets ahead faster. Nowhere is this dynamic as intense as the war for AI talent, as I've covered before.

To better understand what's driving the state of play going into 2025, this week I spoke with Naveen Rao, VP of AI at Databricks. Rao is one of my favorite people to talk to about the AI industry. He's deeply technical but also business-minded, having successfully sold multiple startups. His last company, MosaicML, sold to Databricks for $1.3 billion in 2023. Now, he oversees the AI products for Databricks and is closely involved with its recruiting efforts for top talent.

Our conversation below touches on the logic behind Databricks's massive funding round, what specific AI talent remains scarce, why he thinks AGI is not imminent, and more. The following conversation has been edited for length and clarity:

Why is this round mostly to help employees sell stock? Because $10 billion is a lot. You can do a lot with that.

The company is a little over 11 years old. There have been employees that have been here for a long time. This is a way to get them liquidity. Most people don't understand that this is not going into the balance sheet of Databricks.
This is largely going to provide liquidity for past employees, [and] liquidity going forward for current and new employees. It ends up being neutral on dilution because they're shares that already exist. They've been allocated to employees, and this allows them to sell those to cover the tax associated with those shares.

How much of the rapid increases in AI company valuations have to do with the talent war?

It's real. The key thing here is that it's not just pure AI talent: people who come up with the next big thing, the next big paper. We are definitely trying to hire those people. There is an entire infrastructure of software and cloud that needs to be built to support those things. When you build a model and you want to scale it, that actually is not AI talent, per se. It's infrastructure talent.

The perceived bubble that we're in around AI has created an environment where all of those talents are getting recruited heavily. We need to stay competitive.

Who is being the most aggressive with setting market rates for AI talent?

OpenAI is certainly there. Anthropic. Amazon. Google. Meta. xAI. Microsoft. We're in constant competition with all of these companies.

Would you put the number of researchers who can build a new frontier model under 1,000?

Yeah. That's why the talent war is so hot. The leverage that a researcher has in an organization is unprecedented. One researcher's ideas can completely change the product. That's kind of new. In semiconductors, people who came up with a new transistor architecture had that kind of leverage. That's why these researchers are so sought after. Somebody who comes up with the next big idea and the next big unlock can have a massive influence on the ability of a company to win.

Do you see that talent pool expanding in the near future or is it going to stay constrained?

I see some aspects of the pool expanding. Being able to build the appropriate infrastructure and manage it: those roles are expanding. The top-tier researcher side is the hard part.
It's like looking for LeBron James. There are just not very many humans who are capable of that.

I would say the Inflection-style acquisitions were largely driven by this kind of mentality. You have these concentrations of top-tier talent in these startups, and it sounds ridiculous how much people pay. But it's not ridiculous. I think that's why you see Google hiring back Noam Shazeer. It's very hard to find another Noam Shazeer.

A guy we had at my previous company that I started, Nervana, is arguably the best GPU programmer in the world. He's at OpenAI now. Every inference that happens on an OpenAI model is running through his code. You start computing the downstream cost and it's like, "Holy shit, this one guy saved us $4 billion."

What's the edge you have when you're trying to hire a researcher to Databricks?

You start to see some selection bias of different candidates. Some are "AGI or bust," and that's okay. It's a great motivation for some of the smartest people out there. We think we're going to get to AGI through building products. When people use technology, it gets better. That's part of our pitch.

AI is in a massive growth phase, but it's also hit peak hype and is on the way down the Gartner hype curve. I think we're on that downward slope right now, whereas Databricks has established a very strong business. That's very attractive to some because I don't think we're so susceptible to the hype.

Do the researchers you talk to really believe that AGI is right around the corner? Is there any consensus on when it's coming?

Honestly, there's not a great consensus. I've been in this field for a very long time and I've been pretty vocal in saying that it's not right around the corner. The large language model is a great piece of technology. It has massive amounts of economic uplift and efficiencies that can be gained by building great products around it.
But it's not the spirit of what we used to call AGI, which was human or even animal-like intelligence.

These things are not creating magical intelligence. They're able to slice up the space that we're calling facts and patterns more easily. It's not the same as building a causal learner. They don't really understand how the world works.

You may have seen Ilya Sutskever's talk. We're all kind of groping in the dark. Scaling was a big unlock. It was natural for a lot of people to feel enthusiastic about that. It turns out that we weren't solving the right problem.

Is the new idea that's going to get to AGI the test-time compute or reasoning approach?

No. I think it's going to be an important thing for performance. We can improve the quality of answers, probably reduce the probability of hallucinations, and increase the probability of having responses that are grounded in fact. It's definitely a positive for the field. But is it going to solve the fundamental problem of the spirit of AGI? I don't believe so. I'm happy to be wrong, too.

Do you agree with the sentiment that there's a lot of room to build more good products with existing models, since they are so capable but still constrained by compute and access?

Yeah. Meta started years later than OpenAI and Anthropic and they basically caught up, and xAI caught up extremely fast. I think it's because the rate of improvement has essentially stopped.

Nilay Patel compares the AI model race to early Bluetooth. Everyone keeps saying there's a fancier Bluetooth, but my phone still won't connect.

You see this with every product cycle. The first few versions of the iPhone were drastically better than the previous versions. Now, I can't tell the difference between a three-year-old phone and a new phone. I think that's what we see here. How we utilize these LLMs and the distribution that has been built into them to solve business problems is the next frontier.

Elsewhere

Google gets flatter.
CEO Sundar Pichai told employees this week that the company's drip-drip series of layoffs has reduced the number of managers, directors, and VPs by 10 percent, according to Business Insider and multiple employees I spoke with who also heard the remarks. Relatedly, Pichai also took the opportunity to add being "scrappy" as a character trait to the internal definition of "Googleyness." (Yes, that's a real thing.) He demurred on the most upvoted employee question about whether layoffs will continue, though I'm told he did note that there will be overall headcount growth next year.

Meta cuts a perk. File this one under sad violin: I'm told that, starting in early January, Meta will stop offering free EV charging at its Bay Area campuses. Keep your heads held high, Metamates.

What else you should know about

OpenAI teased its next o3 reasoning model (yes, o2 was skipped) with impressive evals.

TikTok convinced the Supreme Court to hear its case just before its US ban is set to take effect. Meanwhile, CEO Shou Chew met with Donald Trump at Mar-a-Lago to (I'm assuming) get a sense of what his other options are should TikTok lose its case.

More tech-meets-Mar-a-Lago news: Elon Musk inserted himself into the meeting between Jeff Bezos and Trump. Robinhood donated $2 million to Trump's inauguration. And SoftBank CEO Masayoshi Son pledged to invest $100 billion into AI tech in the US, which happens to be the same number he has floated for a chip venture to compete with Nvidia.

Apple complained about Meta pressuring the EU to make iOS more compatible with third-party hardware.
Anyone who has synced photos from the Ray-Ban Meta glasses to an iPhone will understand why this is a battle that is very important for Meta to win, especially as it gears up to release its own pair of AR glasses with a controller wristband next year.

Amazon is delaying its return-to-office mandate in some cities because it doesn't have enough office space.

Perplexity, which is projected to make $127 million in revenue next year, recently raised $500 million at a valuation of $9 billion. It also acquired another AI startup called Carbon to help it hook into other services, like Notion and Google Docs.

Job board

A few notable moves this week:

Meta promoted John Hegeman to chief revenue officer, reporting to COO Javier Olivan. Another one of Olivan's reports, Justin Osofsky, was also promoted to be head of partnerships for the whole company, including the company's go-to-market strategy for Llama.

Alec Radford, an influential veteran OpenAI researcher who authored its original GPT research paper, is leaving but will apparently continue working with the company in some capacity.
And Shivakumar Venkataraman, who was recently brought in from Google to lead OpenAI's search efforts, has also left.

Coda co-founder and CEO Shishir Mehrotra will also run Grammarly now that the two companies are merging, with Grammarly CEO Rahul Roy-Chowdhury staying on as a board member.

Tencent removed two directors, David Wallerstein and Ben Feder, from the board of Epic Games after the Justice Department said their involvement violated antitrust law.

Former Twitter CFO Ned Segal has been tapped to be chief of housing and economic development for the city of San Francisco.

More links

My full Decoder interview with Arm CEO Rene Haas about the AI chip race, Intel, and more.

Waymo's new report shows that its AV system is far safer than human drivers.

The US AI task force's recommendations and policy proposals.

Apple's most downloaded app of the year was Temu, followed by Threads, TikTok, and ChatGPT.

Global spending on mobile apps increased 15.7 percent this year while overall downloads decreased 2.3 percent.

If you aren't already getting new issues of Command Line, don't forget to subscribe to The Verge, which includes unlimited access to all of our stories and an improved ad experience on the web. You'll also get access to the full archive of past issues.

As always, I want to hear from you, especially if you have a tip or feedback. Respond here, and I'll get back to you, or ping me securely on Signal.

Thanks for subscribing.
  • Hugging Face Releases FineMath: The Ultimate Open Math Pre-Training Dataset with 50B+ Tokens
    www.marktechpost.com
For education research, access to high-quality educational resources is critical for learners and educators. Often perceived as one of the most challenging subjects, mathematics requires clear explanations and well-structured resources to make learning more effective. However, creating and curating datasets focused on mathematical education remains a formidable challenge. Many datasets for training machine learning models are proprietary, leaving little transparency in how educational content is selected, structured, or optimized for learning. The scarcity of accessible, open-source datasets addressing the complexity of mathematics leaves a gap in developing AI-driven educational tools.

Recognizing these issues, Hugging Face has introduced FineMath, a groundbreaking initiative aimed at democratizing access to high-quality mathematical content for both learners and researchers. FineMath is a comprehensive, open dataset tailored for mathematical education and reasoning. It addresses the core challenges of sourcing, curating, and refining mathematical content from diverse online repositories, and it is meticulously constructed to meet the needs of machine learning models aiming to excel in mathematical problem-solving and reasoning tasks.

The dataset is divided into two primary versions:

FineMath-3+: 34 billion tokens derived from 21.4 million documents, formatted in Markdown and LaTeX to maintain mathematical integrity.

FineMath-4+: a subset of FineMath-3+ with 9.6 billion tokens across 6.7 million documents, emphasizing higher-quality content with detailed explanations.

These curated subsets ensure that both general learners and advanced models benefit from FineMath's robust framework. Creating FineMath required a multi-phase approach to extract and refine content effectively.
It started with extracting raw data from CommonCrawl, leveraging tools such as Resiliparse to capture text and formatting precisely. The initial dataset was evaluated using a custom classifier based on Llama-3.1-70B-Instruct, which scored pages on logical reasoning and the clarity of step-by-step solutions. Subsequent phases focused on expanding the dataset's breadth while maintaining its quality. Challenges like the improper filtering of LaTeX notation in earlier datasets were addressed, ensuring better preservation of mathematical expressions. Deduplication and multilingual evaluation further enhanced the dataset's relevance and usability.

FineMath has demonstrated superior performance on established benchmarks like GSM8k and MATH. Models trained on FineMath-3+ and FineMath-4+ showed significant improvements in mathematical reasoning and accuracy. By combining FineMath with other datasets, such as InfiMM-WebMath, researchers can build a larger dataset of approximately 50 billion tokens while maintaining exceptional performance. FineMath's structure is optimized for seamless integration into machine learning pipelines: developers can load subsets of the dataset using Hugging Face's library support, enabling easy experimentation and deployment for various educational AI applications.

In conclusion, Hugging Face's FineMath dataset is a transformative contribution to mathematical education and AI. By addressing gaps in accessibility, quality, and transparency, it sets a new benchmark for open educational resources. Future work for FineMath includes expanding language support beyond English, enhancing mathematical notation extraction and preservation, developing advanced quality metrics, and creating specialized subsets tailored to different educational levels.

Check out the Collection and Dataset. All credit for this research goes to the researchers of this project.
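The "3+" and "4+" names reflect the classifier scores described above: each page gets a quality rating, and the subsets are nested score-threshold filters. As a rough illustration only (the exact scoring scale and thresholds here are assumptions for the sketch, not taken from the FineMath release), partitioning scored pages might look like:

```python
def split_by_score(pages, lo=3, hi=4):
    """Partition classifier-scored pages into nested "3+" / "4+" style subsets.

    `pages` is an iterable of (doc_id, score) pairs, where `score` is the
    rating an LLM-based judge assigned for reasoning quality and clarity
    of step-by-step solutions. The hi-threshold subset is by construction
    a subset of the lo-threshold one.
    """
    three_plus = [doc for doc, score in pages if score >= lo]
    four_plus = [doc for doc, score in pages if score >= hi]
    return three_plus, four_plus

# Hypothetical scored pages: only "c" falls below the 3+ cutoff.
pages = [("a", 5), ("b", 3), ("c", 2), ("d", 4)]
three_plus, four_plus = split_by_score(pages)
# three_plus == ["a", "b", "d"]; four_plus == ["a", "d"]
```

The nesting mirrors what the token counts imply: every FineMath-4+ document also appears in FineMath-3+.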
The post Hugging Face Releases FineMath: The Ultimate Open Math Pre-Training Dataset with 50B+ Tokens appeared first on MarkTechPost.
  • Optimizing Protein Design with Reinforcement Learning-Enhanced pLMs: Introducing DPO_pLM for Efficient and Targeted Sequence Generation
    www.marktechpost.com
Autoregressive protein language models (pLMs) have become transformative tools for designing functional proteins with remarkable diversity, demonstrating success in creating enzyme families like lysozymes and carbonic anhydrases. These models generate protein sequences by sampling from learned probability distributions, uncovering intrinsic patterns within training datasets. Despite their ability to explore high-quality subspaces of the sequence landscape, pLMs struggle to target rare and valuable regions, limiting their effectiveness in tasks like engineering enzymatic activity or binding affinity. This challenge, compounded by the vast sequence space and expensive wet-lab validation, makes protein optimization a complex problem. Traditional methods like directed evolution, which iteratively select desired traits, are limited to local exploration and lack tools for steering long-term evolutionary trajectories toward specific biological functions.

RL offers a promising framework to guide pLMs toward optimizing specific properties by aligning model outputs with feedback from an external oracle, such as predicted stability or binding affinities. Drawing inspiration from RL applications in robotics and gaming, recent efforts have applied RL techniques to protein design, demonstrating the potential to explore rare events and balance exploration-exploitation trade-offs efficiently. Examples include Proximal Policy Optimization (PPO) for DNA and protein design and Direct Preference Optimization (DPO) for thermostability prediction and binder design. While these studies showcase RL's potential, there remains a need for experimentally validated, publicly available RL frameworks tailored to generative pLMs, which could advance the field of protein engineering.

Researchers from Universitat Pompeu Fabra, the Centre for Genomic Regulation, and other leading institutions developed DPO_pLM, an RL framework for optimizing protein sequences with generative pLMs.
By fine-tuning pLMs using rewards from external oracles, DPO_pLM optimizes diverse user-defined properties without additional data while preserving sequence diversity. It outperforms traditional fine-tuning methods by reducing computational demands, mitigating catastrophic forgetting, and leveraging negative data. Demonstrating its effectiveness, DPO_pLM successfully designed nanomolar-affinity EGFR binders within hours.

The study introduces DPO and self-fine-tuning (s-FT) for optimizing protein sequences. DPO minimizes loss functions, including ranked and weighted forms, with negative log-likelihood proving effective. s-FT refines ZymCTRL iteratively, generating, ranking, and fine-tuning top sequences across 30 iterations. Model training uses Hugging Face's transformers API, employing batch sizes of 4, a learning rate of 810, and evaluation every 10 steps. Structural similarity is assessed using ESMFold and Foldseek, while functional annotations rely on ESM1b embeddings and cosine similarity with CLEAN clusters. EGFR binder design applies fine-tuning on BLAST-retrieved sequences, followed by AlphaFold folding and optimization to enhance binder performance.

pLMs generate sequences resembling their training data and often achieve high functionality despite significant sequence deviations. For instance, ZymCTRL, trained on enzyme data with EC labels, created carbonic anhydrases with wild-type activity but only 39% sequence identity. Similarly, generated α-amylases outperformed wild-type activity. However, pLMs primarily replicate training-set distributions, lacking precise control for optimizing specific properties like activity or stability. By applying RL, particularly methods like DPO, pLMs can be fine-tuned iteratively using feedback from oracles, enabling the generation of sequences with targeted properties while preserving diversity and quality.

In conclusion, pLMs excel at sampling from distributions but struggle to optimize specific properties.
DPO_pLM overcomes this limitation by utilizing Direct Preference Optimization (DPO), which refines sequences through external oracles without additional training data. ZymCTRL evaluations showed rapid and robust performance, enriching enzyme classes and folds in multi-objective tasks. In an EGFR binder design experiment, DPO_pLM achieved a 50% success rate, generating three nanomolar binders after 12 iterations in just hours. Unlike fine-tuning, DPO maximizes preference rewards, improving global predictions efficiently. Future work will focus on integrating DPO_pLM into automated labs for protein design innovations.

Check out the Paper. All credit for this research goes to the researchers of this project.
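The preference objective at the heart of DPO can be sketched in a few lines. This is an illustrative reimplementation of the standard DPO loss for one (preferred, rejected) pair, not the authors' code, and the example log-likelihoods are invented; the paper also explores ranked and weighted variants of the loss.

```python
import math

def dpo_loss(pol_logp_w, pol_logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO loss for one (preferred w, rejected l) sequence pair.

    Each argument is the summed log-likelihood of a whole sequence under
    the trainable policy model (pol_*) or a frozen reference model (ref_*).
    beta scales how strongly the policy may drift from the reference.
    """
    # How much more the policy (vs. the reference) prefers w over l
    margin = beta * ((pol_logp_w - ref_logp_w) - (pol_logp_l - ref_logp_l))
    # Negative log-sigmoid of the margin: shrinks as the policy learns
    # to rank the oracle-preferred sequence above the rejected one
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Hypothetical values: the policy already favors the preferred sequence,
# so the loss falls below log(2), its value at a zero margin.
loss = dpo_loss(pol_logp_w=-10.0, pol_logp_l=-14.0,
                ref_logp_w=-12.0, ref_logp_l=-12.0)
```

In a training loop this loss would be computed over batches of oracle-ranked sequence pairs and backpropagated through the pLM, which is what lets the framework steer generation without any new labeled training data.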
  • Is AI Worth the Cost? ROI Insights for CEOs Targeting 2025 Growth
    towardsai.net
Is AI Worth the Cost? ROI Insights for CEOs Targeting 2025 Growth

December 20, 2024 | Author(s): Konstantin Babenko. Originally published on Towards AI.

74% of companies fail at AI ROI; discover what you can do to drive real results.

According to a recent NTT Data digital business survey, nearly all companies have implemented generative AI solutions, while 83% have created expert or advanced teams for the technology. The Global GenAI Report, spanning respondents in 34 countries and 12 industries, showed that 97% of CEOs expect a material change from generative AI adoption. The same report states that knowledge management, service recommendation, quality assurance, and research and development are the most valuable areas for implementing generative AI. These findings show how generative AI is collectively perceived as an enabler of change.

Having put a lot of effort into building their AI capabilities, recruiting AI talent, and experimenting with AI pilots, today's CEOs expect ROI from the innovation. Nevertheless, the full realization of AI's potential still presents a challenge. Current research shows that only 26% of companies are equipped with the relevant capabilities to convert AI from proof of concept into value creation (Boston Consulting Group, 2024).

This article focuses on the state of AI implementation in 2024 and the trends expected for 2025, based on analysis of the latest industry research.
The piece aims to help CEOs and C-level executives proactively adapt their business strategies, ensuring they stay ahead of the curve in an increasingly AI-driven marketplace.

AI Value Distribution

As per the BCG report, organizations derive as much as 60% of generative AI value from core business functions:

23% Operations
20% Sales and Marketing
13% R&D

The remaining 38% comes from support functions:

12% Customer service
7% IT
7% Procurement

It also reveals a wide divergence between industries. Sales and marketing are reported to drive the most value from AI in the software, travel and tourism, media, and telecommunications industries. Customer service appears as a prime area where the value of AI usage is tangible in insurance and banking, whereas the consumer goods and retail industries are experiencing massive growth in AI-driven personalization.

What Separates AI Leaders from the Rest

The BCG report reveals a major disconnect in AI adoption. Only 4% of companies have cutting-edge AI capabilities that provide major value, and another 22% (AI leaders) are reaping big benefits from advanced strategies. On the opposite end of the spectrum, 74% of companies have not yet seen tangible benefits from AI.

According to Nicolas de Bellefonds, senior partner at BCG, AI leaders are raising the bar with more ambitious goals. They focus on finding meaningful outcomes on cost and topline, and on core function transformation, not diffuse productivity gains.

Let's take a closer look at what makes AI leaders excel:

1. Core business focus. Core processes generate 62% of leaders' AI value, with leaders optimizing support functions to deliver a broader impact.

2. Ambitious goals. By 2027, they plan to invest twice as much in AI and workforce enablement, scale twice as many AI solutions, and generate 60% more revenue growth and 50% more cost reductions.

3. Balanced approach.
Over half of leaders use AI to transform the cost base of their business, and a third use AI to generate revenue, compared to their peers.

4. Strategic prioritization. Leaders focus on fewer, higher-impact opportunities to double their ROI and scale twice as many AI solutions as others.

5. People over technology. Leaders allocate 70% of resources to people and processes, ensuring sustainable AI integration.

6. Early adoption of GenAI. Leaders quickly adopt generative AI as a modern tool for content creation, reasoning, and system orchestration, staying ahead of the curve.

Results That Speak Volumes

Over the past three years, AI leaders have demonstrated 1.5x revenue growth, 1.6x shareholder returns, and 1.4x ROI, outperforming their peers. Beyond superior financial performance, they also excel in nonfinancial areas such as patent filings and employee satisfaction, demonstrating how their people-first, core-focused strategies drive transformational outcomes.

Challenges Faced in the Process of AI Integration

According to the BCG report, organizations experience different issues with the implementation of AI; 70% of them are linked to people and processes. The remaining 30% covers technology (20%) and AI algorithms (10%). The survey underlines that many companies tend to think of themselves as primarily technical organizations, while the human aspect must not be overlooked if an enterprise wants its AI endeavors to succeed.

The Human-Centric Gap

AI integration is not just about deploying the latest technology; it is about having a workforce that is prepared to accept AI-driven changes. A lack of AI literacy, resistance to change, and unclear roles in AI initiatives can often derail progress.
Leaders overcome these challenges by investing in workforce enablement and training programs, as well as building a culture in which data-backed decisions are valued.

Technology and Algorithms

On the technical side, it is difficult to integrate AI into existing systems, scale solutions across departments, and keep data quality high. Leaders tackle these issues by strategically prioritizing a few high-value opportunities, backed by robust infrastructure and data governance practices.

Bridging the Gap

How well you balance the technical and human parts is key to success in AI integration. Leaders set the wheels in motion for sustainable AI adoption by placing 70% of resources in people and processes, proving that it is not just algorithms that unlock AI's potential, but technology combined with human capital and operational processes.

Enterprise AI Perspective for 2025

The role of AI in the enterprise environment will grow further in 2025 as an influential driver of change in business development strategies and operations. As the technology advances, automation will become complementary to human talent, and the way organizations manage human capital will keep changing. In the future, the primary competitive advantage will not lie in developing or tuning LLMs, but in their applications.

Technology complementing talent will be one of the significant trends in AI adoption, because organizations need human talent alongside technology talent. Instead of outsourcing jobs to robotics, enterprises will look for tools that increase the competency and efficiency of their workers. This approach keeps the tacit knowledge of employees within the organization as a key resource.

Data assets will remain or may even become more important as we move into 2025, as the efficiency of utilizing company-specific information will turn into a competitive advantage.
Therefore, organizations need to make their data AI-ready, which involves several stages: cleaning, validating, structuring, and checking the ownership of the data set. Adoption of AI governance software will be equally important, with spending estimated to grow fourfold by 2030.

As the adoption of AI continues to rise, questions about its use, costs, and return on investment will also increase. By 2025, a new issue will enter the picture: determining how much more it could cost to expand the use of AI, and how much value organizations will get from these investments. Solving such issues requires new frameworks and methodologies that supplant simple KPIs and measure customer satisfaction, decision-making, and innovation acceleration.

To sum up, AI's role in the enterprise landscape of 2025 brings certain challenges: workforce augmentation, data asset management, defining cost and ROI, and dealing with disruption.

Final Thoughts

For CEOs navigating the complexities of AI integration, the insights from this article provide a clear takeaway: the AI future isn't just about technology; it's about leveraging AI to make business value real and meaningful, aligning AI capabilities with human potential. Looking into 2025, leaders will need to think about AI not as a standalone innovation but as an integral part of the driving force of an organization's strategy.

There is a wide gap between the leaders and laggards in AI adoption. The difference is that leaders prioritize high-impact opportunities, invest in workforce enablement, and treat AI as a tool to drive transformation, not incremental improvement. CEOs should ask themselves:

Are we placing bets on AI initiatives directly touching our core business functions? Leaders get 60% of their AI value here, optimizing operations, sales, and marketing.

Are we ready for AI-driven change in our workforce?
To bridge the human-technology gap, resources will need to be allocated to upskilling employees and developing a data-first culture.

Do we have the infrastructure to scale AI solutions effectively? Robust data governance and scalable systems are important, because scattered pilots won't yield tangible value.

From my experience, enterprise AI deployments show the best results when organizations treat AI adoption as a collaboration between human expertise and technological progress. This requires CEOs to take a long-term, strategic approach: define ambitious but achievable goals, focus on fewer, high-value AI initiatives, and create a culture open to change.

Published via Towards AI.
  • Save 30% Off Our Favorite Budget Gaming Chair at Best Buy
    www.ign.com
As part of its last-minute Christmas sale, Best Buy is offering a great deal on our favorite budget gaming chair. Right now, you can pick up a Corsair TC100 Relaxed Chair in black leatherette upholstery for 30% off.

30% Off Corsair TC100 Relaxed Gaming Chair

The TC100 Relaxed is Corsair's least expensive gaming chair. The "Relaxed" series offers a broader seat width and minimal bolstering on the sides to fit a wider range of body sizes. The chair can hold up to 264 lbs, accommodate heights up to 6'2", and features a height adjustment range of 45-65 cm (21.7-25.5"). It is available in either fabric or leatherette (although the leatherette model is a bit more affordable at the moment). The chair doesn't have any internal lumbar adjustments, but it does include a headrest and lumbar pillow in the package. It reclines up to 160 degrees and has 2D armrests. It's also backed by a two-year warranty. If you don't want to spend $400 or more on a gaming chair, the TC100 Relaxed is seriously worth buying.

Why Should You Trust IGN's Deals Team?

IGN's deals team has a combined 30+ years of experience finding the best discounts in gaming, tech, and just about every other category. We don't try to trick our readers into buying things they don't need at prices that aren't worth paying. Our ultimate goal is to surface the best possible deals from brands we trust and that our editorial team has personal experience with. You can check out our deals standards here for more information on our process, or keep up with the latest deals we find on IGN's Deals account on Twitter.

Eric Song is the IGN commerce manager in charge of finding the best gaming and tech deals every day. When Eric isn't hunting for deals for other people at work, he's hunting for deals for himself during his free time.
  • Save 30% Off the Apple AirTags and Get It Delivered Before Christmas
    www.ign.com
The best Black Friday deal I saw on Apple AirTags is back, and you can even get it before Christmas. Amazon and Best Buy are both offering a four-pack of Apple AirTag keyfinders for only $69.99. That's $30 off the retail price, or just $16.50 per AirTag. It would make an excellent last-minute stocking-stuffer gift for anyone who owns an iPhone and tends to lose small items like wallets, keys, or remotes. Both Amazon and Best Buy can deliver this item before 12/25.

4-Pack Apple AirTags for $69.99

The Apple AirTag is a small coin-shaped device that you can put in your wallet or attach to your keys, a remote, or anything small enough to be easily misplaced. It works as a little keyfinder that helps locate lost objects by pinging their general location to your iPhone over Bluetooth 5.0. If your iPhone model has a U1 chip with Ultra Wideband, you can also take advantage of the "Precision Finding" mode, which gives you numerical distance and direction guidance when your lost item is close by, pointing you right to it. The CR2032 coin battery is also user-replaceable.
  • Anime Defenders Adds Holiday Tower Defense Goodies With New Christmas Update
    www.ign.com
Anime-inspired tower defense Roblox experience Anime Defenders just got its Christmas Update, bringing holiday maps, a new battle pass, new units, and more. Developer Small World Games published the latest patch for its popular take on tower defense strategy across all devices to help celebrate the season. Although December is quickly coming to a close, the update adds more than enough winter-themed distractions to keep players preoccupied for weeks to come. Highlights include various Christmas decorations and maps, hidden presents to find, a new Gold Shop, and Leaderboard Season 5.

Those hopping into Anime Defenders after the Christmas Update will first notice the snow, presents, and trees that now pollute the lobby. These are more than just festive trinkets, though: the added clutter has been used to carefully hide presents that, once acquired, unlock some of those new maps. Completing additional Santa Claus quests can unlock even more goodies, with daily quests also rewarding players who continue to log in between now and December 26. There's also a limited Christmas Banner, which contains six new units to collect. The Anime Defenders Christmas Update is available now.

In addition to every new holiday item, players can take advantage of a new trading currency: Emerald. While the introduction of a new currency is exciting, Small World does clarify that most items, with the exception of gifts, are now untradable.

Anime Defenders is one of many Roblox experiences choosing to celebrate the holidays with special Christmas updates. Blade Ball launched into its winter plans earlier this month with its Festivities Update. Christmas may be only one week away, but developers will surely continue to add more themed content throughout December.
In the meantime, you can see our full list of every active Anime Defenders code here. You can check out the full patch notes from the Christmas Update below.

Anime Defenders Christmas Update Patch Notes

CHRISTMAS UPDATE IS HERE! This update contains a whole lot of content, including new maps, units, and more!

New Limited Christmas Banner!
- Contains units: Novice Mage, Novice Dragon, Swift Assassin, Spirit Demon, Exorcist, Crazed Brawler
- Uses Snowflake currency
- You can convert your Gems and Relics into Snowflakes!

New Christmas Maps!
- Map 1: Frozen Peaks (unlocked by completing the Day 1 quest). Chance to obtain map mythic Blade Expert
- Map 2: Dark Icy Woods (unlocked by completing the Day 4 quest). Chance to obtain map mythic Swiftblade Prince
- Map 3: Skyline District (unlocked by completing the Day 6 quest). Chance to obtain map secret Crimson Tyrant

Santa Claus Quests
- Daily quests with exciting rewards, Day 1-9 (December 18 to December 26)
- Complete the Day 9 quest for a Christmas Gift!
- The Christmas Gift contains Mythic-Ancient rewards!

New Christmas Battlepass!
- Unlock the limited secret unit Draconic Warrior!

New Unit Skins!
- Equip limited-time skins on your favorite units!
- Skins can change the look of your unit in battle and make them stand out!

New Christmas Bundles!
- Added 4 new Christmas bundles to the shop!
- Each bundle contains exclusive items, currency, and rewards to help you this season.

New Christmas Gift Items!
- Collect Christmas Gifts during the event!
- Open them to receive rewards like Mythic Units, Secret Units, and more!

New Gold Shop
- Spend your Gold in this everlasting shop!
- Discounts come and go every day.

New Leaderboard Season!
- Enjoy Season 5 of the leaderboards! Happy Grinding!

New Trading Changes: Emerald
- Most items (except Gifts) are now untradable.
- Introducing a new trade currency called Emerald!

Merry Christmas, Defenders!

Michael Cripe is a freelance contributor with IGN.
He started writing in the industry in 2017 and is best known for his work at outlets such as The Pitch, The Escapist, OnlySP, and Gameranx.Be sure to give him a follow on Twitter @MikeCripe.
  • Tim Cook says Apple never talked about charging for AI, and here's why
    9to5mac.com
Over the past few months, two big waves of Apple Intelligence features have debuted via iOS 18.1 and iOS 18.2. While some have speculated that Apple is planning paid AI services in the future, Apple CEO Tim Cook recently said that charging has never even been discussed. Here's why.

Apple views AI as being sort of like multitouch

Steven Levy at WIRED recently interviewed Tim Cook about AI and more. One key quote has stuck with me since that interview was published earlier this month.

Levy: Some companies charge for AI-enhanced services. Did you consider that?

Cook: We never talked about charging for it. We view it sort of like multitouch, which enabled the smartphone revolution and the modern tablet.

Straight from the CEO himself: Apple never talked about charging for AI. His reasoning is interesting, positioning AI as a fundamental new technology akin to multitouch. But it also highlights a big difference between Apple and its AI competitors, and that difference, I think, points to an even more important reason why monetization wasn't on the table for Apple.

Hardware remains Apple's big business, bankrolling everything else the company does

When multitouch debuted on the first iPhone, it was technically a free feature, yes. But it was inseparably tied to what was then a very expensive new product. Apple Intelligence is very similar. Apple isn't charging for AI, but you also can't use Apple Intelligence unless you first buy a compatible iPhone, iPad, or Mac.

Unlike the vast majority of other AI players, Apple has built its primary business around hardware. The company does have an ever-growing services business, but the bulk of its revenue is still tied to hardware. And within that hardware bucket, nothing compares to Apple's iPhone revenue. So yes, Apple Intelligence is technically free as part of iOS 18.1 and iOS 18.2.
But that's only because Apple first sold you an iPhone, and Tim Cook knows that very well. Apple can give AI away at no extra charge because it's doing just fine selling us all new iPhones.

What do you think of Tim Cook's statement? Let us know in the comments.

Add 9to5Mac to your Google News feed. FTC: We use income-earning auto affiliate links. More.

You're reading 9to5Mac, whose experts break news about Apple and its surrounding ecosystem, day after day. Be sure to check out our homepage for all the latest news, and follow 9to5Mac on Twitter, Facebook, and LinkedIn to stay in the loop. Don't know where to start? Check out our exclusive stories, reviews, how-tos, and subscribe to our YouTube channel.
  • All the changes Apple has made to the Photos app since iOS 18
    9to5mac.com
As part of iOS 18, Apple unveiled the biggest-ever redesign of the Photos app. With a single-pane interface, the new Photos app highlights collections and curation while being fully customizable to your liking. The rollout, however, proved polarizing. While I've been a fan since it first debuted, Apple has made several tweaks in subsequent iOS 18 updates to address some of the most common complaints, and it has also added several notable new features.

iOS 18 betas

When it first debuted at WWDC in June, the new Photos app design featured an ambitious Carousel interface. This view allowed users to swipe left and right through highlights that updated each day and featured favorite people, pets, places, and more. However, Apple abandoned the Carousel view entirely with the release of iOS 18 beta 5 in August. This is Apple's most significant structural change to the Photos app redesign.

iOS 18.1

iOS 18.1 was the first major update to iOS 18. It focused primarily on the rollout of Apple Intelligence features, including these changes to the Photos app:

- Photos search lets you find photos and videos simply by describing what you're looking for
- Clean Up removes distractions from your photos
- Memory movies can be created by describing the story you want to see

iOS 18.1 also included one bug fix for the Photos app: "Fixes an issue where videos recorded at 4K 60 while the device is warm could experience stutter while scrubbing the video playback in Photos."

iOS 18.2

While iOS 18.1 focused on upgrading Photos with Apple Intelligence, Apple used iOS 18.2 to refine the new design with several key changes.
- Video viewing improvements, including the ability to scrub frame by frame and a setting to turn off auto-looping video playback
- Improvements when navigating Collections views, including the ability to swipe right to go back to the previous view
- Recently Viewed and Recently Shared album history can be cleared
- The Favorites album appears in the Utilities collection in addition to Pinned Collections

I've found the ability to swipe to go back to be one of the most useful navigational changes introduced to the Photos app since iOS 18. It makes navigation significantly faster and easier.

Wrap up

As I said at the start, I'm a big fan of the new Photos app design. While such a dramatic redesign of one of your most-used iPhone apps can be jarring, I think Apple has struck a good balance of customization and curation. The number one tip I give people upset with the new Photos app is to take advantage of the customization options:

1. Open the Photos app
2. Scroll all the way down and tap Customize & Reorder
3. Pick the specific order of the various sections and hide the ones you don't want

The new Photos app also includes one of the most useful Apple Intelligence features: Clean Up. I've found Clean Up to be a powerful and easy way to remove distractions from my pictures, whether it's an object or a person that has ruined my shot.

What do you think of the new Photos app design introduced in iOS 18? Has Apple addressed your concerns with subsequent software updates? What's still on your list of things you hope to see fixed? Let us know in the comments!
  • iPhone 17 Pro: Seven new features are coming next year
    9to5mac.com
Next year is set to be big for the iPhone, with a compelling new iPhone SE 4 in the spring and the ultra-thin iPhone 17 Air in the fall. But what about Apple's flagship Pro models? Here are seven new features coming to the iPhone 17 Pro and Pro Max in 2025.

24MP front-facing camera

The iPhone's cameras get better every year, but Apple's big improvements often focus on the rear cameras. With the full iPhone 17 line, though, Apple will upgrade the front camera from a 12MP sensor to a brand-new 24MP sensor, which should bring big improvements. It's about time the selfie camera got closer in quality to the iPhone's rear cameras.

A19 Pro chip

Apple will once again upgrade its new iPhones with a new and improved A-series chip; in 2025 that will be the A19 Pro. Apple's very best silicon will be reserved exclusively for the 17 Pro and Pro Max. Though the new 17 Air model is expected to win over a lot of Pro users, it will feature the standard A19 chip, just like the base iPhone 17.

12GB of RAM, the most ever in an iPhone

The advent of AI has made RAM in Apple's devices increasingly important. iPhone 15 owners know this better than anyone, since RAM limitations made the device incompatible with Apple Intelligence. In 2025, Apple will give its 17 Pro and 17 Pro Max models 12GB of RAM, the most ever in an iPhone. That's a nice 50% improvement over the 16 Pro's 8GB, which is what the standard iPhone 17 and 17 Air will still be limited to.

Apple's in-house Wi-Fi and Bluetooth chip

Apple has for years been working on developing two of its own modems: one for 5G and one for Wi-Fi.
Now, both projects are nearing the light of day. The iPhone 17 Pro and Pro Max are expected to use Apple's new combo Wi-Fi and Bluetooth chip, and possibly the rest of the lineup too. Only the 17 Air, however, will get Apple's in-house 5G modem, with the Pro models sticking with Qualcomm's more advanced modem for another year.

Larger rectangular camera bump

As Apple did with this year's 16 Pro models, the company is ready to double down on its Pro line as being especially optimized for camera use. The iPhone 17 Pro and Pro Max will reportedly sport a larger rectangular camera bump; presumably this will enable some new benefit. It sounds like the top portion of the Pro models may feature a full-width aluminum design, while the bottom half will retain its standard glass back for wireless charging.

Redesigned frame with aluminum

This is an especially curious one: the iPhone 17 Pro and Pro Max are getting a redesigned frame, with aluminum replacing titanium. Apple only just introduced titanium to the iPhone's Pro models in 2023, so the shift to aluminum is interesting. We'll have to wait to hear Apple's reasoning for the transition.

Narrower Dynamic Island on the Pro Max

Finally, here's one change exclusive to the iPhone 17 Pro Max: a smaller Dynamic Island than ever before. Details are scarce, but it's possible this change could accompany Face ID being embedded in the Pro Max's display. Nothing concrete has been reported to that effect yet, but one way or another, the high-end model should feature a much narrower Dynamic Island.

iPhone 17 Pro features: wrap-up

This list of iPhone 17 Pro features reiterates my belief that 2025's purchase decision will be harder than ever. The ultra-thin iPhone 17 Air sounds very exciting, but there might just be too many sacrifices Pro users would have to make to switch over. Which iPhone 17 Pro features are you most excited about? Do you plan to buy the 17 Air or stick with the Pro?
Let us know in the comments.