• The Morning After: Buying a good graphics card is an expensive mess
    www.engadget.com
    It's been a weird time to dip into graphics cards, GPUs and another synonym for the GeForces and Radeons of this world. AMD has tried for a while to undercut NVIDIA with slightly cheaper but less capable video cards, but this time, with the Radeon 9070 and 9070 XT, it might have got the recipe right, especially in 4K and ray tracing performance. Devindra Hardawar says the $599 Radeon 9070 XT, in particular, is a solid midrange GPU with excellent support for 1440p gaming and a bit of 4K. It has better ray tracing support than before, it's faster than the plain Radeon 9070 and it finally has AI upscaling built in too. Not to mention, NVIDIA's similarly priced GPUs landed around the same time.
    It's a good strategy, and a better one than fighting NVIDIA at the extreme high end of GPUs. It makes more sense for AMD to focus on cards people can actually buy, if you can. It's a good time to look closer at that too. Buy. Hah! The gaming PC makers and people who need high-powered machines for their work know this already: it's a mess. Not only is it nearly impossible to find NVIDIA's 50-series GPUs in stock, but as Igor Bonifacic noted, nearly every single model sells way above NVIDIA's suggested price. This isn't a pandemic thing anymore, and it isn't a crypto thing anymore (although that's stoked demand, of course). It's like Taylor Swift tickets or a PS5 disc drive when the PS5 Pro broke cover: it's scalpers and opportunism from the middleman companies that make the majority of GPUs out there.
    Mat Smith
    AMD Radeon RX 9070 and 9070 XT review
    NVIDIA GeForce RTX 5070 review
    The GPU market has broken foundations
    The biggest tech stories you missed:
    Sorry We're Closed review
    Volkswagen previews its €20,000 EV for everyone
    MSI Claw 8 AI+ review: This cat got its bite back
    House Republicans subpoena Google over alleged censorship
    Technics AZ100 review: Supreme sound quality and a unique Bluetooth tool
    The Return of... Ask Engadget!
    Is there a robot vacuum that won't destroy phone cables? How is US trade policy going to affect the price of my next phone? Do I need another phone? Ask Engadget returns, with an entirely new email address: askmat(AT)engadget.com. Ask me something!
    Nothing's Phone 3a Pro is cheap, capable and looks stylish
    Only $459.
    A sub-$500 smartphone that Engadget can endorse is a rare feat, but Nothing might have nailed it. Despite a premium Nothing Phone 3 not even existing, the company's see-through phone series shoots straight for the cheap midrange. Many of the specs, like periscope zoom, a 120Hz 6.77-inch screen and a huge 5,000mAh battery, are typically found in phones that cost several hundred dollars more. It's all wrapped in a design full of character too. Check out my first impressions and expect a full review very soon. Continue reading.
    Apple unveils the M4 MacBook Air, with a price drop
    It starts at $999.
    An upgraded laptop with a price drop? In this economy? The new MacBook Air, with an M4 chip, will retail at $999, down $100 from the previous starting price. There are still two size choices: 13-inch and 15-inch. RAM for the M2 and M3 laptops is 16GB by default, and the M4 model matches that standard. Apple is promising up to 18 hours of battery life, and the Airs will have support for Apple Intelligence. There's also a new look in the lineup, with a sky blue color adding a new option beside the usual shades of gray. Continue reading.
    The best action cameras for 2025
    All the top models from GoPro, DJI and Insta360.
    Engadget has been testing action cameras for more than 16 years, and with that experience we can help you find the right model for your budget and needs. In the past, GoPro was the go-to choice for first-person action filming, whether it's surfing, rock climbing or off-roading. But now you have more choice, with models also available from DJI and Insta360. We break down the different form factors and our best choices. Continue reading.
    This article originally appeared on Engadget at https://www.engadget.com/general/the-morning-after-engadget-newsletter-121555319.html?src=rss
  • Budget gamers rejoice as Nvidia RTX 5050 and RTX 5060 are rumored to launch in April
    www.techradar.com
    A new rumor circulating online suggests the RTX 5060 and RTX 5050 will launch next month
  • How the DEI backlash could impact pay equity efforts, according to experts
    www.fastcompany.com
    Despite the decades-long precedent set by the Equal Pay Act, which prohibits sex-based wage discrimination, and similar laws at the state level, true pay equity has remained elusive. The gender pay gap actually increased in 2023 for the first time in 20 years, with women earning 83 cents on the dollar compared to men.
    Even amid vocal pushback from the business community, pay transparency has emerged as a tool to help promote equal pay by empowering workers who have historically been undercompensated or at a disadvantage during hiring negotiations. Across 14 states, employers are now required to provide clear pay ranges in job listings or directly share that information with candidates during the hiring process.
    As private sector companies have been forced to comply with these new laws, and as they have sought to demonstrate their commitment to diversity, equity, and inclusion, many of them have invested in pay equity audits to ensure their workers are being paid fairly (and to protect against potential legal claims). According to the Society for Human Resource Management, three in four employers now conduct regular pay equity audits.
    "Numbers don't lie," says Melanie Naranjo, the chief people officer at HR compliance platform Ethena. "If you think you've got fair and equitable processes in place that make decisions based solely on merit, but your numbers show that women make less money than their male counterparts for the same role, the fact of the matter is: You've got an equity issue, and your company isn't as merit-based as you thought."
    Given the current political climate, however, it's possible the anti-DEI measures that have been championed by conservative activists, and now the federal government, could set back pay equity efforts in the workplace. Trump's executive orders have ushered in consequential changes to the level of oversight the government has historically exercised over federal contractors, which were required to conduct an annual pay analysis to ensure compliance with antidiscrimination laws.
    By revoking a 1965-era executive order originally intended to prevent discrimination in federal contracting, Trump has effectively undone those reporting requirements for a broad swath of companies that do business with the government. According to the Center for American Progress, that means 36 million workers could lose out on protections against employment discrimination.
    The impact on pay equity efforts
    For large employers that operate across many states and must comply with a range of pay-related laws, these changes may not carry as much weight. But experts caution that they could give cover to employers that were only conducting audits to comply with the law, or to those looking for an excuse to stop investing in pay equity efforts. "What we are worried about is people pausing on pay equity work, because pay equity work is something you should do anyway," says Rob Porcarelli, the chief legal officer at pay transparency solutions company Syndio. "It's quintessentially anti-discriminatory."
    Beyond the impact on federal contractors, some fear that the growing DEI backlash, and the emphasis on merit as the sole consideration in hiring decisions and career progression, could discourage companies from taking a deeper look at how and why they might be perpetuating pay discrepancies. "Companies may very well stop auditing for problematic trends across their employee demographics," Naranjo says. "When problematic trends inadvertently get surfaced to them, instead of trying to root out the underlying issues, there's a very real risk that companies will dismiss them under the guise of building a meritocracy."
    Porcarelli adds that pay equity can be perceived as something that largely benefits women, and while it's true that women are more likely to be underpaid relative to their male peers, these programs are intended to address disparities that can affect all kinds of employees, particularly those who are underrepresented. "Some frame pay equity as a women's issue, and so it gets swept under the umbrella of DEI," he says.
    What employers should do
    While the Trump administration does not necessarily have the legal authority to curb DEI efforts in the private sector, federal contractors are in a more vulnerable position, especially given the vague language of the executive orders. "What is illegal DEI activity?" Porcarelli says. "It's not defined." As Fast Company has reported, some companies have responded by pausing or reevaluating DEI programs that might be considered a violation of the executive orders, but according to Porcarelli, employers are largely finding that there's little legal risk in pursuing pay equity initiatives. "Most are concluding [that] conducting pay equity analysis is not problematic because you're not analyzing pay only in favor of one group," he says. "You're ensuring that pay is not affected by gender or race."
    Some employers, on the other hand, might arrive at a different conclusion: that they're less likely to be targeted right now for falling short of equal pay laws. But as DEI experts have pointed out, that approach can open companies up to plenty of other legal challenges and discrimination claims, some of which have been kept at bay through diversity initiatives.
    "Aside from the expensive disruptions to productivity when employees realize they're being paid less than their counterparts for the same exact work, [there are] the costs of having to replace high performers who quit, delayed projects as the leadership team and HR lose time running damage control, and losing actual business deals as company optics take a turn for the worse," Naranjo says. "There's also the incredible cost of managing lawsuits and internal allegations when employees inevitably file discrimination claims."
    Kara Govro, principal legal analyst at compliance platform Mitratech, argues that given the long-standing precedent set by both federal law and legislation at the state level, pay equity should not really be considered part of the DEI concept that is experiencing backlash. There are, of course, stronger protections in place to prevent gender-based pay disparities, between the Equal Pay Act and state laws that have explicitly secured those protections; for workers, that also means it is easier to bring a pay discrimination claim on the basis of gender.
    Still, Govro believes the trend of pay transparency is a crucial tool to hold companies accountable for pay discrepancies in their workforce, given that the states with laws on the books are also home to some of the largest employers in the country. At a minimum, companies have to do some pay analysis in order to share accurate compensation details in their job listings. "It almost serves as a mini audit, just to be forced to make that pay range [public]," she says.
    "We've just got these layers of laws that aren't going anywhere," she adds. "Trump's opinion on DEI does not impact the ability to bring a claim under Title VII or the Equal Pay Act for pay disparities. If anything, I would say that employers should be leaning into pay equity audits right now. If you're concerned that lawsuits are going to start coming from new directions, then now is the time to do that pay equity audit and make sure you've got it right."
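    To give a sense of what a pay equity audit involves in practice, here is a minimal, illustrative sketch: it estimates an adjusted gender pay gap by regressing log salary on a gender indicator plus job-related controls, which is roughly the kind of "numbers don't lie" signal Naranjo describes. The data, column choices and helper function are assumptions for illustration only, not drawn from the article or any company mentioned in it.

```python
import numpy as np

def adjusted_pay_gap(log_salary, is_woman, controls):
    """Estimate the adjusted gender pay gap via ordinary least squares.

    log_salary : (n,) array of log annual salaries
    is_woman   : (n,) 0/1 indicator
    controls   : (n, k) array of job-related covariates (level, tenure, ...)

    Returns the coefficient on the gender indicator: roughly the percentage
    pay difference for women, holding the controls fixed.
    """
    n = len(log_salary)
    X = np.column_stack([np.ones(n), is_woman, controls])
    coefs, *_ = np.linalg.lstsq(X, log_salary, rcond=None)
    return coefs[1]

# Illustrative synthetic data with a 5% unexplained gap baked in.
rng = np.random.default_rng(1)
n = 500
level = rng.integers(1, 6, n)            # job level 1-5
tenure = rng.uniform(0, 15, n)           # years at the company
is_woman = rng.integers(0, 2, n)
log_salary = (10.5 + 0.12 * level + 0.01 * tenure
              - 0.05 * is_woman + rng.normal(0, 0.05, n))

gap = adjusted_pay_gap(log_salary, is_woman, np.column_stack([level, tenure]))
print(f"Adjusted gap for women: {gap:+.1%}")   # expect roughly -5%
```

    A persistent negative coefficient after controlling for role-related factors is the sort of finding an audit would flag for further review; real audits add more controls, statistical significance checks and legal review on top of this basic calculation.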
  • How machine learning can be used to identify microplastics
    www.fastcompany.com
    Microplastics, the tiny particles of plastic shed when litter breaks down, are everywhere, from the deep sea to Mount Everest, and many researchers worry that they could harm human health.
    I am a machine learning researcher. With a team of scientists, I have developed a tool to make identification of microplastics using their unique chemical fingerprint more reliable. We hope that this work will help us learn about the types of microplastics floating through the air in our study area, Michigan.
    Microplastics: a global problem
    The term plastic refers to a wide variety of artificially created polymers. Polyethylene terephthalate, or PET, is used for making bottles; polypropylene, or PP, is used in food containers; and polyvinyl chloride, or PVC, is used in pipes and tubes.
    Microplastics are small plastic particles that range in size from 1 micrometer to 5 millimeters. The width of a human hair, for comparison, ranges from 20 to 200 micrometers.
    Most scientific studies focus on microplastics in water. However, microplastics are also found in the air, and scientists know much less about microplastics in the atmosphere. When scientists collect samples from the environment to study microplastics, they usually want to know more about the chemical identities of the microplastic particles found in the samples.
    Fingerprinting microplastics
    Just as fingerprinting uniquely identifies a person, scientists use spectroscopy to determine the chemical identity of microplastics. In spectroscopy, a substance either absorbs or scatters light, depending on how its molecules vibrate. The absorbed or scattered light creates a unique pattern called the spectrum, which is effectively the substance's fingerprint.
    Just as a forensic analyst can match an unknown fingerprint against a fingerprint database to identify the person, researchers can match the spectrum of an unknown microplastic particle against a database of known spectra.
    However, forensic analysts can get false matches in fingerprint matching, and spectral matching against a database isn't foolproof either. Many plastic polymers have similar structures, so two different polymers can have similar spectra. This overlap can lead to ambiguity in the identification process.
    So an identification method for polymers should provide a measure of uncertainty in its output. That way, the user can know how much to trust the polymer fingerprint match. Unfortunately, current methods don't usually provide an uncertainty measure. Data from microplastic analyses can inform health recommendations and policy decisions, so it's important for the people making those calls to know how reliable the analysis is.
    Conformal prediction
    Machine learning is one tool researchers have started using for microplastic identification. First, researchers collect a large dataset of spectra whose identities are known. Then they use this dataset to train a machine learning algorithm that learns to predict a substance's chemical identity from its spectrum. These predictions are made by sophisticated algorithms whose inner workings can be opaque, so the lack of an uncertainty measure becomes an even greater problem when machine learning is involved.
    Our recent work addresses this issue by creating a tool with an uncertainty quantification for microplastic identification. We use a machine learning technique called conformal prediction.
    Conformal prediction is like a wrapper around an existing, already trained machine learning algorithm that adds an uncertainty quantification. It does not require the user of the machine learning algorithm to have any detailed knowledge of the algorithm or its training data. The user just needs to be able to run the prediction algorithm on a new set of spectra.
    To set up conformal prediction, researchers collect a calibration set containing spectra and their true identities. The calibration set is often much smaller than the training data required for training machine learning algorithms; usually just a few hundred spectra are enough for calibration.
    Then, conformal prediction analyzes the discrepancies between the predictions and the correct answers in the calibration set. Using this analysis, it adds other plausible identities to the algorithm's single output on a particular particle's spectrum. Instead of outputting one, possibly incorrect, prediction like "this particle is polyethylene," it now outputs a set of predictions, for example, "this particle could be polyethylene or polypropylene."
    The prediction sets contain the true identity with a level of confidence that users can set themselves, say, 90%. Users can then rerun the conformal prediction with a higher confidence, say, 95%. But the higher the confidence level, the more polymer predictions the model includes in the output (see the code sketch below).
    It might seem that a method that outputs a set rather than a single identity isn't as useful. But the size of the set serves as a way to assess uncertainty: a small set indicates less uncertainty, while a prediction that the sample could be many different polymers signals substantial uncertainty. In that case, you could bring in a human expert to examine the polymer closely.
    Testing the tool
    To run our conformal prediction, my team used libraries of microplastic spectra from the Rochman Lab at the University of Toronto as the calibration set. Once calibrated, we collected samples from a parking lot in Brighton, Michigan, obtained their spectra, and ran them through the algorithm. We also asked an expert to manually label the spectra with the correct polymer identities. We found that conformal prediction did produce sets that included the labels the human expert gave.
    Microplastics are an emerging concern worldwide. Some places, such as California, have begun to gather evidence for future legislation to help curb microplastic pollution. Evidence-based science can help researchers and policymakers fully understand the extent of microplastic pollution and the threats it poses to human welfare. Building and openly sharing machine learning-based tools is one way to help make that happen.
    Ambuj Tewari is a professor of statistics at the University of Michigan. This article is republished from The Conversation under a Creative Commons license. Read the original article.
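    To make the calibration step concrete, here is a minimal sketch of split conformal prediction for spectrum classification. It assumes you already have a trained classifier that outputs class probabilities and a labelled calibration set; the polymer names, the 90% confidence level and the stand-in random data are illustrative, not taken from the authors' tool.

```python
import numpy as np

def conformal_calibrate(cal_probs, cal_labels, alpha=0.10):
    """Compute the conformal threshold from a calibration set.

    cal_probs  : (n, k) array of predicted class probabilities for n spectra
    cal_labels : (n,) array of integer indices of the true polymer classes
    alpha      : miscoverage rate (0.10 -> roughly 90% coverage)
    """
    n = len(cal_labels)
    # Nonconformity score: 1 minus the probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(q_level, 1.0), method="higher")

def conformal_predict_set(probs, q_hat, class_names):
    """Return every polymer whose nonconformity score falls under the threshold."""
    return [name for name, p in zip(class_names, probs) if 1.0 - p <= q_hat]

# Illustrative usage with made-up numbers standing in for real spectra.
class_names = ["polyethylene", "polypropylene", "PVC", "PET"]
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(4), size=300)   # stand-in for model outputs
cal_labels = cal_probs.argmax(axis=1)             # stand-in for expert labels
q_hat = conformal_calibrate(cal_probs, cal_labels, alpha=0.10)

new_probs = np.array([0.55, 0.35, 0.07, 0.03])    # one unknown particle's spectrum
print(conformal_predict_set(new_probs, q_hat, class_names))
# A small set signals low uncertainty; a large set signals the particle
# should be escalated to a human expert.
```

    Raising the confidence level (lowering alpha) raises the threshold q_hat, so more polymers clear it and the prediction sets grow, which mirrors the trade-off described above.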
  • What's Driving Tesla's Woes?
    www.wired.com
    As Tesla faces a global sales slump, and with shares down for the seventh consecutive week, could Elon Musk's antics really be to blame?
  • Starship Explosions Show SpaceX No Longer Defying Gravity
    www.nytimes.com
    Consecutive losses of the Starship rocket suggest that the company's engineers are not as infallible as its fans may think.
  • CMU Researchers Introduce PAPRIKA: A Fine-Tuning Approach that Enables Language Models to Develop General Decision-Making Capabilities Not Confined to Particular Environments
    www.marktechpost.com
    In today's rapidly evolving AI landscape, one persistent challenge is equipping language models with robust decision-making abilities that extend beyond single-turn interactions. Traditional large language models (LLMs) excel at generating coherent responses but often struggle with multi-step problem solving or interacting with dynamic environments. This shortfall largely stems from the nature of the training data, which rarely reflects the structured, interactive experiences that real-world scenarios demand. Moreover, directly deploying models to gather real-world interaction data can be both costly and risky. Hence, there is a clear need for methodologies that teach LLMs to explore, gather relevant information, and make thoughtful, sequential decisions in a safe and controlled manner.
    In response to these challenges, researchers from Carnegie Mellon University have developed an approach known as PAPRIKA. This method is designed to endow language models with general decision-making capabilities that are not limited to any single environment. Rather than relying on traditional training data, PAPRIKA leverages synthetic interaction data generated across a diverse set of tasks. These tasks range from classic guessing games like twenty questions to puzzles such as Mastermind and even scenarios simulating customer service interactions. By training on these varied trajectories, the model learns to adjust its behavior based on contextual feedback from its environment, without the need for additional gradient updates. This approach encourages the model to adopt a more flexible, in-context learning strategy that can be applied to a range of new tasks.
    Technical Details and Benefits
    PAPRIKA's methodology is built on a two-stage fine-tuning process. The first stage exposes the LLM to a large set of synthetic trajectories generated using Min-p sampling, which ensures that the training data is both diverse and coherent. This step allows the model to experience a wide spectrum of interaction strategies, including both successful and less effective decision-making behaviors. The second stage refines the model using a blend of supervised fine-tuning (SFT) and a direct preference optimization (DPO) objective. In this setup, pairs of trajectories are compared, with the model gradually learning to favor those that lead more directly to task success (see the sketch below).
    Recognizing that not all tasks are equally challenging, PAPRIKA also integrates a curriculum learning strategy. This component dynamically selects tasks based on their potential to offer meaningful learning experiences. By prioritizing tasks that yield richer learning signals, the approach improves data efficiency and helps the model generalize its decision-making strategies. The combination of these methods results in a refined model that is adept at sequential decision making across various contexts.
    Results and Insights
    The practical benefits of the PAPRIKA method are evident in its empirical results. In one illustrative example, the approach was applied to a bandit best-arm selection task, a scenario that requires careful allocation of a limited sampling budget to identify the most promising option. Here, PAPRIKA increased the average success rate notably, demonstrating a marked improvement in strategic decision making. More broadly, when the model was trained on trajectories from a set of ten diverse task groups, its overall performance improved by approximately 47% compared to the baseline model, achieved with roughly 22,500 training trajectories.
    Further experiments using a leave-one-out evaluation demonstrated that the decision-making strategies learned through PAPRIKA could generalize to previously unseen tasks. For example, when the model was trained on all but one group of tasks, it still performed competitively on the omitted group. This finding suggests that the strategies developed through this fine-tuning method are not narrowly tailored to specific tasks but can be transferred across different decision-making scenarios. Moreover, a study involving curriculum learning showed that selectively sampling training tasks according to their difficulty could yield additional improvements, reinforcing the value of a tailored, data-driven approach to task selection.
    Conclusion
    In summary, PAPRIKA represents a thoughtful and measured approach to bridging the gap between static language understanding and dynamic, sequential decision making. By harnessing synthetic interaction data and employing a carefully designed two-stage fine-tuning process augmented with curriculum learning, CMU researchers have demonstrated that LLMs can be refined into more adaptable decision makers. This method, rather than resorting to task-specific tuning, prepares models to engage in new challenges with minimal additional training.
    The capability to interact with external environments, collect pertinent information, and adjust decisions based on feedback is essential for any system designed to operate autonomously. While challenges remain, such as ensuring a solid starting model and managing the computational costs of synthetic data generation, PAPRIKA offers a promising avenue toward developing more versatile AI systems. Ultimately, as models continue to advance, approaches like PAPRIKA will be important for creating tools that are not only proficient in language understanding but also capable of navigating complex, real-world decision-making tasks.
    Check out the Paper, GitHub Page and Model on Hugging Face. All credit for this research goes to the researchers of this project.
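    As a rough illustration of the second stage, the sketch below shows a pairwise preference (DPO-style) loss over trajectory pairs in plain PyTorch. The function name, tensor shapes and beta value are assumptions for illustration; the actual PAPRIKA training code (see the linked GitHub page) combines this objective with supervised fine-tuning and curriculum-based task sampling.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Direct preference optimization loss over trajectory pairs.

    Each argument is a (batch,) tensor of summed token log-probabilities for a
    whole interaction trajectory: 'chosen' trajectories solved the task more
    directly, 'rejected' ones did not. beta controls how far the policy may
    drift from the reference (SFT) model.
    """
    # Implicit reward: how much more the policy prefers each trajectory
    # than the frozen reference model does.
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    # Maximize the margin between chosen and rejected implicit rewards.
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()

# Illustrative call with random tensors standing in for real log-probabilities.
torch.manual_seed(0)
loss = dpo_loss(torch.randn(8), torch.randn(8), torch.randn(8), torch.randn(8))
print(float(loss))
```

    In a full training loop, this term would sit alongside a supervised loss on the successful trajectories, and the task groups feeding each batch would be sampled according to how much learning signal they provide rather than uniformly, which is the curriculum component described above.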