• WWW.WIRED.COM
    The Best N95, KF94, and KN95 Face Masks (2025)
    Wildfire season is coming. Here are the best disposable face coverings we’ve tested—and where you can find them.
  • GAMINGBOLT.COM
    Rematch Open Beta Surpasses 118,000 Concurrent Players on Steam
    Sifu developer Sloclap’s upcoming multiplayer football action title Rematch has generated plenty of buzz since its announcement in December, and that’s being reflected in the player numbers it is attracting. An open beta of the multiplayer title recently went live on Steam, and players are flocking to it in droves. According to SteamDB, on Friday, the day the open beta went live, it saw a peak of 118,739 concurrent players on Steam. Clearly, the promise of its Rocket League-style football multiplayer action has turned heads, and the fact that it’s coming from the studio behind fan favourite Sifu only helps. It should be interesting to see how it performs over the remainder of its open beta period. Rematch is launching for PS5, Xbox Series X/S, and PC on June 19, and will also be available day and date via Game Pass. The game will be priced at $29.99. Head on over here for creative director Pierre Tarno’s words on why Sloclap didn’t consider a free-to-play model.
  • WWW.MARKTECHPOST.COM
    LLMs Can Now Solve Challenging Math Problems with Minimal Data: Researchers from UC Berkeley and Ai2 Unveil a Fine-Tuning Recipe That Unlocks Mathematical Reasoning Across Difficulty Levels
    Language models have made significant strides in tackling reasoning tasks, with even small-scale supervised fine-tuning (SFT) approaches such as LIMO and s1 demonstrating remarkable improvements in mathematical problem-solving capabilities. However, fundamental questions remain about these advancements: do these models genuinely generalise beyond their training data, or are they merely overfitting to test sets? The research community faces challenges in understanding which capabilities are enhanced through small-scale SFT and which limitations persist despite these improvements. Despite impressive performance on popular benchmarks, there is an incomplete understanding of these fine-tuned models’ specific strengths and weaknesses, creating a critical gap in knowledge about their true reasoning abilities and practical limitations.
    Various attempts have been made to understand the effects of reasoning-based supervised fine-tuning beyond simple benchmark scores. Researchers have questioned whether SFT merely improves performance on previously seen problem types or genuinely enables models to transfer problem-solving strategies to new contexts, such as applying coordinate-based techniques in geometry. Existing methods focus on factors like correctness, solution length, and response diversity, which initial studies suggest play significant roles in model improvement through SFT. However, these approaches lack the granularity needed to determine exactly which types of previously unsolvable questions become solvable after fine-tuning, and which problem categories remain resistant to improvement despite extensive training. The research community still struggles to establish whether observed improvements reflect deeper learning or simply memorisation of training trajectories, highlighting the need for more sophisticated analysis methods.
    The researchers from the University of California, Berkeley and the Allen Institute for AI propose a tiered analysis framework to investigate how supervised fine-tuning affects reasoning capabilities in language models. This approach utilises the AIME24 dataset, chosen for its complexity and widespread use in reasoning research, which exhibits a ladder-like structure where models solving higher-tier questions typically succeed on lower-tier ones. By categorising questions into four difficulty tiers (Easy, Medium, Hard, and Extremely Hard, or Exh), the study systematically examines the specific requirements for advancing between tiers. The analysis reveals that progression from Easy to Medium primarily requires adopting an R1 reasoning style with a long inference context, while Hard-level questions demand greater computational stability during deep exploration. Exh-level questions present a fundamentally different challenge, requiring unconventional problem-solving strategies that current models uniformly struggle with. The research also identifies four key insights: the performance gap between potential and stability in small-scale SFT models, minimal benefits from careful dataset curation, diminishing returns from scaling SFT datasets, and potential intelligence barriers that may not be overcome through SFT alone.
    The methodology employs a comprehensive tiered analysis using the AIME24 dataset as the primary test benchmark.
    This choice stems from three key attributes: the dataset’s hierarchical difficulty that challenges even state-of-the-art models, its diverse coverage of mathematical domains, and its focus on high school mathematics, which isolates pure reasoning ability from domain-specific knowledge. Qwen2.5-32B-Instruct serves as the base model due to its widespread adoption and inherent cognitive behaviours, including verification, backtracking, and subgoal setting. The fine-tuning data consists of question-response pairs from the OpenR1-Math-220k dataset, specifically using CoT trajectories generated by DeepSeek R1 for problems from NuminaMath1.5, with incorrect solutions filtered out. The training configuration mirrors prior studies, with a learning rate of 1 × 10⁻⁵, weight decay of 1 × 10⁻⁴, a batch size of 32, and 5 epochs. Performance evaluation employs avg@n (average pass rate over multiple attempts) and cov@n (coverage: whether a question is solved at least once in n attempts) metrics, with questions categorised into four difficulty levels (Easy, Medium, Hard, and Extremely Hard) based on model performance patterns.
    Research results reveal that effective progression from Easy- to Medium-level mathematical problem-solving requires minimal but specific conditions. The study systematically examined multiple training variables, including foundational knowledge across diverse mathematical categories, dataset size variations (100–1,000 examples per category), trajectory length (short, normal, or long), and trajectory style (comparing DeepSeek-R1 with Gemini-flash). Through comprehensive ablation studies, researchers isolated the impact of each dimension on model performance, represented as P = f(C, N, L, S), where C represents category, N the number of trajectories, L their length, and S their style. The findings demonstrate that achieving ≥90% performance on Medium-level questions minimally requires at least 500 normal or long R1-style trajectories, regardless of the specific mathematical category. Models consistently fail to meet performance thresholds when trained with fewer trajectories, shorter trajectories, or Gemini-style trajectories. This indicates that reasoning trajectory length and quantity are critical factors in developing mathematical reasoning capabilities, while the specific subject matter of the trajectories proves less important than their structural characteristics.
    The research demonstrates that models with small-scale supervised fine-tuning can potentially solve as many questions as more sophisticated models like DeepSeek-R1, though significant challenges remain. The primary limitation identified is instability in mathematical reasoning rather than capability. Experimental results show that geometry-trained models can achieve a coverage score of 90, matching R1’s performance when given multiple attempts, yet their overall accuracy lags by more than 20%. This performance gap stems primarily from instability in deep exploration and computational limitations during complex problem-solving. While increasing the SFT dataset size offers one solution path, performance enhancement follows a logarithmic scaling trend with diminishing returns. Notably, the study challenges recent assertions about the importance of careful dataset curation, revealing that performance across various mathematical categories remains consistent within a narrow range of 55±4%, with only marginal differences between specifically constructed similar datasets and randomly constructed ones. This conclusion suggests that the quantity and quality of reasoning trajectories matter more than subject-specific content for developing robust mathematical reasoning capabilities.
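    To make the avg@n and cov@n metrics mentioned above concrete, here is a minimal illustrative Python sketch (not from the paper; the function names and the per-question attempts structure are invented for illustration):

def avg_at_n(attempts: dict[str, list[bool]]) -> float:
    """avg@n: mean per-question pass rate over the n sampled attempts."""
    rates = [sum(runs) / len(runs) for runs in attempts.values()]
    return sum(rates) / len(rates)

def cov_at_n(attempts: dict[str, list[bool]]) -> float:
    """cov@n: fraction of questions solved at least once within n attempts."""
    solved = [any(runs) for runs in attempts.values()]
    return sum(solved) / len(solved)

# Example usage with mock results for three questions, n = 4 attempts each
attempts = {
    "q1": [True, True, False, True],
    "q2": [False, False, False, True],
    "q3": [False, False, False, False],
}
print(f"avg@4 = {avg_at_n(attempts):.2f}")  # (0.75 + 0.25 + 0.00) / 3 ≈ 0.33
print(f"cov@4 = {cov_at_n(attempts):.2f}")  # 2 of 3 questions solved at least once ≈ 0.67

    The gap between cov@n and avg@n is what the authors describe as the difference between a model’s potential (coverage) and its stability (accuracy).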
    Here is the Paper and GitHub Page.
  • TOWARDSAI.NET
    PPO Explained and Its Constraints: Introducing PDPPO as an Alternative
    Author(s): Leonardo Kanashiro Felizardo
    Originally published on Towards AI.

    What is PPO, and Why is it Popular?
    Proximal Policy Optimization (PPO) has rapidly emerged as a leading model-free reinforcement learning (RL) method due to its simplicity and strong performance across various domains. PPO combines ideas from trust-region policy optimization with a clipped surrogate objective to ensure stable and efficient policy updates.

    Explanation of PPO
    PPO addresses the limitations of earlier RL methods such as vanilla policy gradient and TRPO (Trust Region Policy Optimization) by balancing exploration and exploitation through controlled policy updates. Specifically, PPO aims to stabilize training by preventing overly large policy updates, which can lead to catastrophic forgetting or divergence.

    Actor-Critic and the Role of Advantage Estimation
    PPO belongs to the family of actor-critic algorithms, where two models work together:
    - The actor updates the policy π_θ(a|s) by selecting actions based on states.
    - The critic evaluates the actor’s decisions by estimating the value function V^π(s).
    This architecture was first formalized by Konda and Tsitsiklis in their seminal work Actor-Critic Algorithms [1], where they demonstrated convergence properties and laid the mathematical foundation for combining policy gradient methods with value function estimation. The advantage function is a critical concept in this setting, defined as:

    A(s, a) = Q(s, a) − V(s)

    This is a minimal and clean example of how to implement an Actor-Critic architecture in PyTorch:

import torch
import torch.nn as nn
import torch.optim as optim

class ActorCritic(nn.Module):
    def __init__(self, state_dim, action_dim):
        super().__init__()
        # Shared feature extractor feeding both heads
        self.shared = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU())
        self.actor = nn.Linear(128, action_dim)   # policy head (action logits)
        self.critic = nn.Linear(128, 1)           # value head V(s)

    def forward(self, x):
        x = self.shared(x)
        return self.actor(x), self.critic(x)

# Example usage
state_dim = 4
action_dim = 2
model = ActorCritic(state_dim, action_dim)
optimizer = optim.Adam(model.parameters(), lr=3e-4)

state = torch.rand((1, state_dim))
logits, value = model(state)
dist = torch.distributions.Categorical(logits=logits)
action = dist.sample()
log_prob = dist.log_prob(action)

# Mock advantage and return
advantage = torch.tensor([1.0])
return_ = torch.tensor([[1.5]])

# Actor-Critic loss
actor_loss = -log_prob * advantage
critic_loss = (value - return_).pow(2).mean()
loss = actor_loss + critic_loss

# Backpropagation
optimizer.zero_grad()
loss.backward()
optimizer.step()

    PPO Objective and Mathematics
    The core idea behind PPO is the optimization of the policy network through a clipped objective function:

    L_CLIP(θ) = E_t[ min( r_t(θ) A_t, clip(r_t(θ), 1 − ε, 1 + ε) A_t ) ]

    Here:
    - θ represents the parameters of the policy.
    - ε is a small hyperparameter (e.g., 0.2) controlling how much the policy can change at each step.
    - A is the advantage function, indicating the relative improvement of taking a specific action compared to the average action.
    The probability ratio is defined as:

    r_t(θ) = π_θ(a_t | s_t) / π_θ_old(a_t | s_t)

    This ratio quantifies how much the probability of selecting an action has changed from the old policy to the new one.
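    Before the full training-step listing below, a tiny numerical sketch (hypothetical numbers, not from the article) shows how the clipped term caps the incentive once the ratio drifts outside [1 − ε, 1 + ε]:

import torch

# Illustrative only: one action with positive advantage and a ratio
# that has grown well above 1 + epsilon.
epsilon = 0.2
advantage = torch.tensor(2.0)
ratio = torch.tensor(1.5)  # new policy made the action 50% more likely

unclipped = ratio * advantage                                        # 3.0
clipped = torch.clamp(ratio, 1 - epsilon, 1 + epsilon) * advantage   # 1.2 * 2.0 = 2.4
objective = torch.min(unclipped, clipped)                            # 2.4

print(objective.item())  # ≈ 2.4: probability mass beyond 1 + epsilon earns no extra objective

    Because the minimum is taken, pushing the ratio further past 1 + ε yields no additional objective value, which is exactly what keeps each policy update small.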
    PyTorch Code Example: PPO Core

import torch
import torch.nn as nn
import torch.optim as optim

# Assume we already have: states, actions, old_log_probs, returns, values
# and a model with .actor and .critic modules
clip_epsilon = 0.2
gamma = 0.99  # discount factor (not used directly in this snippet)

# Compute and normalize advantages
advantages = returns - values
discounted_advantages = (advantages - advantages.mean()) / (advantages.std() + 1e-8)

# Get new log probabilities and state values
log_probs = model.actor.get_log_probs(states, actions)
ratios = torch.exp(log_probs - old_log_probs.detach())

# Clipped surrogate objective
surr1 = ratios * discounted_advantages
surr2 = torch.clamp(ratios, 1.0 - clip_epsilon, 1.0 + clip_epsilon) * discounted_advantages
policy_loss = -torch.min(surr1, surr2).mean()

# Critic loss (value function)
value_estimates = model.critic(states)
critic_loss = nn.MSELoss()(value_estimates, returns)

# Total loss
total_loss = policy_loss + 0.5 * critic_loss

# Backpropagation
optimizer.zero_grad()
total_loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.5)
optimizer.step()

    PPO’s Advantages and Popularity
    PPO’s popularity stems from its:
    - Simplicity: easier to implement and tune than more sophisticated methods like TRPO.
    - Efficiency: faster convergence thanks to the clipped surrogate objective, reducing the need for careful hyperparameter tuning.
    - Versatility: robust performance across a wide range of tasks, including robotics, games, and operational management problems.

    Flaws and Limitations of PPO
    Despite PPO’s successes, it faces several limitations:
    - High Variance and Instability: PPO’s reliance on sample-based estimates can cause significant variance in policy updates, especially in environments with sparse rewards or long horizons.
    - Exploration Inefficiency: PPO typically relies on Gaussian noise for exploration, which can be insufficient in complex, high-dimensional state spaces.
    - Sensitivity to Initialization: PPO’s effectiveness can vary greatly depending on initial conditions, causing inconsistent results across training runs.

    Enter PDPPO: A Novel Improvement
    To overcome these limitations, Post-Decision Proximal Policy Optimization (PDPPO) introduces a novel approach using dual critic networks and post-decision states.

    Understanding Post-Decision States
    Post-decision states, introduced by Warren B. Powell [2], provide a powerful abstraction in reinforcement learning. A post-decision state represents the environment immediately after an agent has taken an action but before the environment’s stochastic response occurs. This allows the learning algorithm to decompose the transition dynamics into two parts:
    - Deterministic step (decision): sˣ = f(s, a), the state right after the deterministic effects of the action take place.
    - Stochastic step (nature’s response): s′ = g(sˣ, η), applied once the deterministic effects are observed and the stochastic variables change the state.
    Here:
    - f is the deterministic function mapping the current state and action to the post-decision state sˣ.
    - η is a random variable capturing the environment’s stochasticity.
    - g defines how this stochastic component affects the next state.
    - s′ is the next state.
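    As a concrete illustration, here is a minimal sketch of that decomposition with an invented slippery-grid step; the functions f, g, and step below are illustrative, not code from the article:

import random

# Hypothetical slippery-grid transition, split into the two parts described above.
# A state is an (x, y) tile; an action is a unit move such as (1, 0) for "right".

def f(state, action):
    """Deterministic part: the post-decision state s_x = f(s, a), i.e. the intended move."""
    x, y = state
    dx, dy = action
    return (x + dx, y + dy)

def g(post_state, eta):
    """Stochastic part: s' = g(s_x, eta), nature's response (slippage)."""
    x, y = post_state
    slip_dx, slip_dy = eta
    return (x + slip_dx, y + slip_dy)

def step(state, action, rng=random):
    post_state = f(state, action)                        # decision: deterministic
    eta = rng.choice([(0, 0), (0, 0), (1, 0), (0, 1)])   # mostly no slip, sometimes a slide
    next_state = g(post_state, eta)                      # nature: stochastic
    return post_state, next_state

# Example: attempt to move right from (2, 3)
post, nxt = step((2, 3), (1, 0))
print(post, nxt)  # post-decision state is always (3, 3); the next state may differ due to slippage

    A critic trained on post_state sees the deterministic consequence of the action in isolation, while the ordinary critic sees the state after the noise, which is the separation PDPPO’s dual critics exploit.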
    Example: Frozen Lake
    Imagine the Frozen Lake environment. The agent chooses to move right from a given tile. The action is deterministic — the intention to move right is clear. This gives us the post-decision state sˣ: “attempted to move right.” However, because the ice is slippery, the agent may not land on the intended tile. It might slide right, down, or stay in place, with a certain probability for each. That final position — determined after the slippage — is the true next state s′.

    This decomposition allows value functions to be better estimated:
    - Pre-decision value function: V(s), estimated from the state before the action’s outcome is known.
    - Post-decision value function: Vˣ(sˣ), estimated from the state right after the deterministic effect of the action.
    This formulation helps decouple the decision from stochastic effects, reducing variance in value estimation and improving sample efficiency.

    Post-Decision Advantage Calculation
    Given both critics, PDPPO computes the advantage from each:

    A_pre = R − V(s) and A_post = Rˣ − Vˣ(sˣ)

    And selects the most informative advantage at each step:

    A = max(A_pre, A_post)

    This “maximum advantage” strategy allows the actor to favor the most promising value estimate during learning.

    Updating the Critics and Policy
    Critic loss functions: the pre-decision critic minimizes the mean squared error between V(s) and the observed returns, and the post-decision critic does the same for Vˣ(sˣ) against the post-decision returns. The combined actor-critic loss adds the clipped policy loss to the weighted sum of the two critic losses, as in the code below. This architecture, with separate value estimators for deterministic and stochastic effects, enables more stable learning in environments with complex uncertainty.

    Dual Critic Networks
    PDPPO employs two critics:
    - State Critic: estimates the value function based on pre-decision states.
    - Post-Decision Critic: estimates the value function based on post-decision states.
    The dual-critic approach improves value estimation accuracy by capturing deterministic and stochastic dynamics separately.

    PyTorch Code Example: PDPPO Core

import torch
import torch.nn as nn
import torch.optim as optim

# Assume: states, post_states, actions, old_log_probs, returns, post_returns,
# and a model with actor, critic, post_decision_critic
clip_epsilon = 0.2

# --- 1. Compute advantages from both critics ---
values = model.critic(states)
post_values = model.post_decision_critic(post_states)
adv_pre = returns - values
adv_post = post_returns - post_values

# Use the max advantage (PDPPO twist), then normalize
advantages = torch.max(adv_pre, adv_post)
advantages = (advantages - advantages.mean()) / (advantages.std() + 1e-8)

# --- 2. Policy loss: same PPO-style clip ---
log_probs = model.actor.get_log_probs(states, actions)
ratios = torch.exp(log_probs - old_log_probs.detach())
surr1 = ratios * advantages
surr2 = torch.clamp(ratios, 1.0 - clip_epsilon, 1.0 + clip_epsilon) * advantages
policy_loss = -torch.min(surr1, surr2).mean()

# --- 3. Dual critic loss ---
critic_loss = nn.MSELoss()(values, returns)
post_critic_loss = nn.MSELoss()(post_values, post_returns)

# Total loss with dual critic
total_loss = policy_loss + 0.5 * (critic_loss + post_critic_loss)

# --- 4. Backpropagation ---
optimizer.zero_grad()
total_loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.5)
optimizer.step()

    PDPPO vs PPO in Practice
    Tests on environments such as Frozen Lake and Stochastic Lot-Sizing highlight PDPPO’s significant performance improvements, as reported in Felizardo et al. [3]:
    - Improved Stability Across Seeds: PDPPO showed lower variance in both cumulative and maximum rewards across different random seeds, particularly in stochastic environments like Frozen Lake. This indicates greater robustness to initialization compared to PPO, which often suffers from unstable learning in such settings.
    - Faster and Smoother Convergence: the learning curves of PDPPO are notably smoother and consistently trend upward, while PPO’s often stagnate or oscillate. This suggests that PDPPO’s dual-critic structure provides more accurate value estimates, enabling more reliable policy updates.
    - Better Scaling with Dimensionality: in the Stochastic Lot-Sizing tasks, PDPPO’s performance gap widened as the problem dimensionality increased (e.g., 25 items and 15 machines).
    This demonstrates that PDPPO scales better in complex settings, benefiting from its decomposition of dynamics into deterministic and stochastic parts.
    - More Informative Advantage Estimates: by using the maximum of pre- and post-decision advantages, PDPPO effectively captures the most optimistic learning signal at each step — leading to better exploitation of promising strategies without ignoring the stochastic nature of the environment.
    - Better Sample Efficiency: empirical results showed that PDPPO achieved higher rewards using fewer training episodes, making it more sample-efficient — an essential trait for real-world applications where data collection is expensive.

    Empirical Comparison (20–30 Runs)
    PDPPO significantly outperforms PPO across three environment configurations of the Stochastic Lot-Sizing Problem (the shaded areas in the original figures represent 95% confidence intervals), showing:
    - Faster convergence,
    - Higher peak performance, and
    - Tighter variance bands for PDPPO.

    A Few Other Alternatives
    A few other alternatives to address the limitations of PPO include:
    - Intrinsic Exploration Module (IEM): proposed by Zhang et al. [8], this approach enhances exploration by incorporating uncertainty estimation into PPO. It addresses PPO’s weak exploration signal by rewarding novelty, which is especially useful in sparse-reward settings.
    - Uncertainty-Aware TRPO (UA-TRPO): introduced by Queeney et al. [7], UA-TRPO aims to stabilize policy updates in the presence of finite-sample estimation errors by accounting for uncertainty in the policy gradients — offering a more robust learning process than standard PPO.
    - Dual-Critic Variants: previous methods, like SAC [4] and TD3 [5], use dual critics mainly for continuous action spaces to reduce overestimation bias. However, they typically do not incorporate post-decision states, nor are they designed for environments with both deterministic and stochastic dynamics.
    - Post-Decision Architectures in OR: earlier work in operations research (e.g., Powell [2], Hull [6]) used post-decision states to manage the curse of dimensionality in approximate dynamic programming. PDPPO brings this insight into deep RL by using post-decision value functions directly in the learning process.
    Each of these methods has its trade-offs, and PDPPO stands out by directly tackling the challenge of stochastic transitions via decomposition and dual critics — making it particularly effective in noisy, real-world-like settings.

    Citations
    [1] Konda, V. R., & Tsitsiklis, J. N. (2000). Actor-Critic Algorithms. In S. A. Solla, T. K. Leen, & K.-R. Müller (Eds.), Advances in Neural Information Processing Systems, Vol. 12. MIT Press.
    [2] Powell, W. B. (2007). Approximate Dynamic Programming: Solving the Curses of Dimensionality (2nd ed.). John Wiley & Sons.
    [3] Felizardo, L. K., Fadda, E., Nascimento, M. C. V., Brandimarte, P., & Del-Moral-Hernandez, E. (2024). A Reinforcement Learning Method for Environments with Stochastic Variables: Post-Decision Proximal Policy Optimization with Dual Critic Networks. arXiv preprint arXiv:2504.05150. https://arxiv.org/pdf/2504.05150
    [4] Haarnoja, T., Zhou, A., Abbeel, P., & Levine, S. (2018). Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor. In Proceedings of the 35th International Conference on Machine Learning (ICML).
    [5] Fujimoto, S., van Hoof, H., & Meger, D. (2018). Addressing Function Approximation Error in Actor-Critic Methods. In Proceedings of the 35th International Conference on Machine Learning (ICML).
    [6] Hull, I. (2015).
    Approximate Dynamic Programming with Post-Decision States as a Solution Method for Dynamic Economic Models. Journal of Economic Dynamics and Control, 55, 57–70.
    [7] Queeney, J., Paschalidis, I. C., & Cassandras, C. G. (2021). Uncertainty-Aware Policy Optimization: A Robust, Adaptive Trust Region Approach. In Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 9377–9385.
    [8] Zhang, J., Zhang, Z., Han, S., & Lü, S. (2022). Proximal Policy Optimization via Enhanced Exploration Efficiency. Information Sciences, 609, 750–765.

    Published via Towards AI
  • WWW.IGN.COM
    Hayden Christensen Is Officially Returning as Anakin Skywalker in Ahsoka Season 2 - Star Wars Celebration
    It was just revealed at Star Wars Celebration that Hayden Christensen will officially return as Anakin Skywalker in Season 2 of Ahsoka. While we didn't learn much about what role Anakin will play in Season 2, it will undoubtedly be exciting news for fans that Ahsoka's time with her former master has not yet come to an end. Christensen stopped by the Ahsoka panel at Star Wars Celebration and shared what it was like returning to the character. "It was a dream to get to do," Christensen said. "The way they conceived how to do it was brilliant in getting to explore the World Between Worlds. I thought it was all really exciting." Ahsoka series creator Dave Filoni then joked that he had to find a way to work with Christensen/Anakin again and had to "invent entire dimensions to make it happen." Christensen also shared that he and the team had many conversations about what else Anakin was up to during the Clone Wars. "All of this had been presented well in the animated world, but I was really excited to do that in live action," Christensen said. "As much as I love the traditional Jedi robes I wore during the prequels, it was exciting to get to see Anakin with a new look." Later in the panel, Filoni talked about how his and Christensen's shared work experience with George Lucas helped form a bond when determining how to bring back Anakin, letting them fill in each other's gaps in knowledge and create a fuller interpretation of the character. "I always have George's voice in the back of my head saying, 'faster, more intense!'" Christensen added. Adam Bankhurst is a writer for IGN. You can follow him on X/Twitter @AdamBankhurst and on TikTok.
  • 9TO5MAC.COM
    Apple TV+ just pulled a George Lucas with Mythic Quest
    Apple TV+ recently announced that its longest-running comedy, Mythic Quest, has been canceled. But there was some good news thrown in with the bad. Apple gave its blessing for the creative team to revise the finale for a more fitting ending, but that has introduced a George Lucas-esque problem. Revised Mythic Quest finale available now, while the original goes missing. George Lucas, creator of Star Wars, has famously (infamously?) enjoyed revising his films over the years. It's been a while since this happened, but the original Star Wars trilogy received multiple changes over time. The existence of these changes may not be controversial on its own. Fans have generally welcomed Lucas' creative efforts to a degree. The problem is, when Lucas makes a Star Wars revision, he makes it very difficult to get access to older versions. And Apple, it seems, has followed that same trend with Mythic Quest. The revised season 4 finale (nay, series finale) went live today. From what I can tell, the changes aren't significant—mainly, the ending wraps up a little differently so it's less open-ended. I can't definitively comment on the changes, though, because there's no way to go back and rewatch the original. In George Lucas fashion, now that the finale has been updated, Apple TV+ no longer offers a way to watch the original episode. Will this cause Star Wars levels of disappointment from fans? Unlikely. But it's worth noting that the Mythic Quest finale has been changed, and its first version is gone. I wish it were still available so fans could experience both versions. But at least as of now, that's not happening.
  • FUTURISM.COM
    The Cybertruck Is Turning Into a Complete Disaster
    Tesla is reportedly pulling back production of its much-maligned Cybertruck. According to Business Insider, the EV maker has dropped production targets for several Cybertruck lines over the last few months – and is going as far as to move workers from the line entirely at its manufacturing facility in Texas. "It feels a lot like they're filtering people out," a worker with knowledge of the situation told the publication. "The parking lot keeps getting emptier." It's yet another sign Tesla's highly divisive pickup truck is turning out to be a major sore point for the carmaker. According to an eighth recall issued last month, Tesla has sold a mere 50,000 Cybertrucks since it went on sale in late 2023. That's well short of the "quarter million Cybertrucks a year" that Tesla CEO Elon Musk promised the company would "reach sometime in 2025" during its Q3 earnings call in 2023. As Electrek pointed out last week, 2,400 unsold Cybertrucks, worth roughly $200 million, are piling up. Even Tesla is no longer accepting its own trucks as trade-ins, due to the massive disparity in demand. The carmaker is facing plummeting sales worldwide across the board, with fuming investors accusing Musk of having abandoned the company in favor of gutting the United States' government. A number of high-profile execs have left the company over the last couple of weeks, including longtime Tesla software VP David Lau, continuing a major leadership exodus that kicked up late last year. Tesla shares are down almost 50 percent since hitting all-time highs in December, shortly after president Donald Trump was elected. Meanwhile, the Cybertruck has turned into a major target of a growing anti-Musk movement, dubbed Tesla Takedown. Whether a recently released entry-level Cybertruck, which still goes for an eye-watering $69,990 before incentives, will reinvigorate interest in the issue-ridden vehicle remains to be seen. Tesla has since slashed prices in a last-ditch effort to inflate demand and offload "Foundation Series" trucks, which the company stopped building in October 2024, per Electrek. By many accounts, the rollout of what was once Musk's "pet project" has been a disaster, from delivering lemons to giant trim pieces becoming delaminated. The timing couldn't be worse, as Tesla's brand crisis continues to deepen. The longer resentment for the company's mercurial CEO continues to simmer, the harder it will get for Tesla to breathe new life into lagging demand. And the competition keeps on growing, with Chinese EV maker BYD outpacing Tesla's revenue numbers for the first time last year. Next week, Tesla is expected to hold a key Q1 earnings call — and given its current predicament, investors will likely have some burning questions for Musk. More on the Cybertruck: Tesla Shows Off Cheaper and Slower Cybertruck That's an Even Worse Deal
  • SCREENCRUSH.COM
    10 Classic Songs You Didn’t Know Were Written for Movies
    There’s an art to the perfect movie soundtrack single. It’s gotta be catchy, it’s gotta be thematically appropriate, and it’s gotta be something people will remember long after the release of the movie it was written for. This trend goes in and out of fashion. It was popular in the 1970s and ’80s, when radio play was just as important as box office numbers, and has had a bit of a resurgence in recent years (even the Avatar movies have had original songs that play over the credits). Some original soundtrack songs are so synonymous with their movies it would be impossible to separate one from the other. You can’t listen to “Over the Rainbow” from The Wizard of Oz, “Lose Yourself” from Eminem’s 8 Mile, “Take My Breath Away” from Top Gun, or “My Heart Will Go On” from Titanic without also considering the context. It’s also a bit of an easy bid for an Academy Award — if you don’t think the movie itself is a shoo-in for Best Picture, why not tack a tune onto the end credits to get that Best Original Song nomination? Sometimes, though, the songs are so good and get so popular that the movies they were originally written for are left behind, and people forget that one of their favorite rap tracks, for example, was written for a movie where Michelle Pfeiffer plays an inner-city teacher. Let this list serve as a reminder, then, of the origins of some of your favorite summer jams and ballads whose runaway fame has divorced them from their humble origins as movie soundtrack singles. These songs run the gamut from pop hits to atmospheric themes to stadium rock hymns and everything in between, including one impudent number about the infidelity of someone’s girlfriend. You know the songs; now it’s time to watch (or rewatch) the movies.
    Great Songs You Forgot Were Originally Written for Movies: These songs ended up being even more popular than the movies they were written for. Gallery Credit: Emma Stefansky
    READ MORE: The Worst Movies Where Actors Play Distracting Double Roles
  • WEWORKREMOTELY.COM
    EF LAW FIRM: Remote Paralegal – VIDEO INTERVIEW REQUIRED (50–60 hrs/week | $1,200–$1,800/mo)
    🚨 READ FIRST – DO NOT SKIP
    YOU MUST COMPLETE YOUR VIDEO INTERVIEW TO APPLY. Messages and emails will NOT be responded to if you do not read this post and complete the interview below.

    About Us
    We’re EF LAW FIRM, a U.S.-based estate planning law firm helping families protect their loved ones through customized wills, trusts, and legal plans. We work 100% remotely. We’re systems-first, fast-moving, and client-focused. Every week, we help dozens of clients get legally solid estate plans finalized—quickly and accurately.

    About the Role
    We’re hiring a Remote Paralegal / Legal Drafter to join our legal ops team. You’ll take structured info from client forms and turn it into accurate, polished legal drafts. You won’t be writing from scratch—you’ll be assembling and formatting using templates. You’ll get full training and systems to follow.

    ✅ This Role Is FOR YOU If:
    - You speak and write excellent English (fluency will be tested)
    - You love checklists, templates, and accurate work
    - You take ownership of your results
    - You’re ready to work 50–60 hours/week, full-time
    - You have fast internet, a quiet workspace, and a modern computer
    - You improve with repetition and enjoy becoming a master of your process
    - You’re reliable, responsive, and take pride in consistency

    🚫 DO NOT Apply If:
    - You want part-time, freelance, or “side work”
    - You’re only available 20–30 hours/week
    - You hate repetitive work
    - You skim job posts and ignore instructions
    - You resist feedback or go dark under pressure
    - You try to bypass systems or ignore chain of command

    💸 Compensation
    - $1,200–$1,800/month (fixed salary)
    - Paid training + performance bonuses
    - Long-term, full-time growth opportunity

    🔧 Tools & Tech Requirements (You MUST Have):
    - Mac or PC with 8GB RAM, Intel i5 / M1 chip or better
    - Stable internet: 50 Mbps download / 10 Mbps upload minimum
    - Noise-canceling headset
    - Webcam: 720p minimum, 1080p preferred
    - Quiet, distraction-free workspace
    - Ability to remotely log into a U.S. workstation with low lag

    🧠 Software We Use:
    - Google Drive
    - Microsoft Word
    - Slack
    - Zoom
    - PDF editors
    - Remote desktop access tools

    📝 How to Apply:
    Step 1: Complete your video interview: 👉 https://careers.interviewer.ai/EF-Law-Firm/f495cde6-d58f-43b9-a1aa-49fb7878d017
    Step 2: Upload your resume inside the portal
    Step 3: Be ready to start immediately if selected

    ⚡ Final Word
    This is a serious, high-performance role. If you follow instructions, work hard, and care about doing things right—you’ll thrive here. We don’t micromanage. We don’t babysit. But we do reward people who deliver. If that’s you, let’s go.
  • WWW.CNET.COM
    Today's NYT Mini Crossword Answers for Saturday, April 19
    Here are the answers for The New York Times Mini Crossword for April 19.