• You Should Buy Your Tech Now Before You Can't Possibly Afford It
    gizmodo.com
    All your favorite tech is soon going to hit the wall of tariffs implemented by President Donald Trump. Already, the tariffs are forcing Nintendo to revise its Switch 2 preorder timeline, potentially making an already expensive $450 handheld console even more expensive. If you want to buy a new laptop, phone, light, or practically any electronic, you should do it now, or else hold off buying anything new until the next harebrained scheme snails its way out of the Trump White House. Trump's White House implemented its tariff scheme on Wednesday, but "Liberation Day," as the president called it, was met with confusion from the get-go. Eventually, reporters managed to confirm a 10% baseline tariff on practically every country or landmass the world over, including the uninhabited Heard and McDonald Islands. The tariffs, epitomized by a 54% tariff on Chinese goods, are based on an obtuse mathematical formula that doesn't make a lot of sense but does make gadgets a lot pricier. Neighboring countries like Vietnam (46% tariff), Cambodia (49%), and Taiwan (32%) were among the hardest hit by Trump's decree. Unfortunately, these are some of the countries that tech companies rely on most for manufacturing. Big tech firms have invested heavily in Vietnamese manufacturing since the trade war between the U.S. and China started heating up the last time Trump was in office. U.S. Census Bureau data shows the country counts as the sixth top source of imported goods to the U.S. Tech firms both big and small each have to determine how much of the cost they need to pass on to the consumer versus how much of the difference they want to eat themselves. Jason Miller, a professor of supply chain management at Michigan State University, told The Verge that companies are going to be incentivized to lay a good share of the cost of tariffs onto downstream consumers. The only saving grace is that consumers have a small, shrinking window in which they can buy some electronics before the price increases start in earnest.
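    The "obtuse mathematical formula" has been reverse-engineered by reporters as roughly the bilateral goods-trade deficit divided by imports, halved, with a 10% floor. A minimal sketch, assuming that reconstruction and approximate 2024 trade figures (the announced rates, like Vietnam's 46%, appear to be rounded up from this arithmetic):

```python
def reciprocal_tariff(deficit_bn, imports_bn):
    # Reconstruction reported by journalists: max(10%, (deficit / imports) / 2)
    return max(0.10, (deficit_bn / imports_bn) / 2)

# Approximate 2024 U.S. goods-trade figures, assumed for illustration ($ billions):
vietnam = reciprocal_tariff(123.5, 136.6)     # works out to roughly 45%
floor_case = reciprocal_tariff(-10.0, 50.0)   # a surplus partner still gets the 10% floor
print(f"Vietnam: {vietnam:.1%}, floor: {floor_case:.0%}")
```

    Note how the formula has nothing to do with foreign tariff rates: any country the U.S. runs a goods deficit with gets hit in proportion to that deficit, and everyone else gets 10%.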
    Miller told The Verge that companies have imported 70% more products into the U.S. over the past few months than in the same period in 2023. So for now, you may still find all your tech at the pre-tariff price. But don't expect the current prices to last. It's better to assume companies will raise prices sooner rather than later. In February, major PC and device maker Acer announced it was raising prices by around 10% due to tariffs. It's only a matter of time before they go even higher. If you want another scary example of how tech could get much more expensive in the months ahead, look at Apple. Apple relies on Vietnam for much of its manufacturing. A report from Reuters based on industry analyst predictions proposed a $1,142 price tag for the iPhone 17 under these new tariffs. That's close to $350 more expensive than an $800 iPhone 16. The highest-end iPhone Pro Max with a maxed-out 1 TB of storage might cost close to $2,300 using the same math. The Cheeto-in-chief suggested on Friday he could impose tariffs specifically targeting semiconductors very soon. This would take already expensive tech and make it untenable for most consumers in the U.S. The idea is that these tariffs are supposed to bring manufacturing back to the U.S., but it won't happen overnight. It takes time to build up a manufacturing apparatus, especially for something as complicated as chips. Intel is set to receive $7.9 billion from the U.S. through the CHIPS Act enacted under the tenure of President Joe Biden. The company was building an Ohio chip foundry, but that project has been delayed to an expected opening date of 2030. Trump recently pushed an executive order that would punish companies by withholding CHIPS Act funding if they didn't invest in U.S. manufacturing.
    So yes, your phones, tablets, and laptops with all their silicon could get way, way more expensive if Trump follows through on additional tariffs, but that's not all. As noted by CNN, modern cars rely on more than 1,000 semiconductor chips, with some having over 3,000. There's no upside here. The tariffs are already incentivizing layoffs at companies like Stellantis. So while economic uncertainty may be pushing you to hold off on any spending sprees, wouldn't you rather be unemployed with a Steam Deck than without a Steam Deck?
  • Candy Crush Style Heart Burst Effect in UE5 Niagara
    www.youtube.com
    Join this channel to get access to perks: https://www.youtube.com/@cghow/join Candy Crush Style Heart Burst Effect in UE5 Niagara. Full Video - https://youtu.be/UI7RQ5RJdfQ FAB - https://www.fab.com/sellers/CGHOW Whatsapp - https://bit.ly/3LYvxjK Patreon - https://www.patreon.com/Ashif NFT - https://opensea.io/CGHOW Twitter - https://twitter.com/cghow_ If you Liked it - http://bit.ly/2UZmiZ4 Channel Ashif - http://bit.ly/3aYaniw Support me on - paypal.me/9953280644 #cghow #UE5 #UE4Niagara #gamefx #ue5niagara #ue4vfx #niagara #unrealengineniagara #realtimevfx Visit - https://cghow.com/ Unreal Engine Marketplace - https://bit.ly/3aojvAa Artstation Store - https://www.artstation.com/ashif/store Gumroad - https://cghow.gumroad.com/
  • Why do kangaroos have 3 vaginas?
    www.livescience.com
    Female kangaroos have one tail, two feet and three vaginas when they're giving birth.
  • Congrats @SpaceX Dragon team!
    x.com
    Congrats @SpaceX Dragon team! SpaceX: Fram2 marked our 50th Dragon mission overall and the official return of Dragon recovery operations to the West Coast.
  • Musk-Altman Fight Over OpenAI Overhaul Set for March Trial
    www.gadgets360.com
    Photo Credit: Bloomberg. Elon Musk and Sam Altman worked together to found OpenAI in 2015.
    Highlights:
    - Elon Musk claims OpenAI retreated from its founding purpose as a charity
    - The billionaire launched his own AI startup in 2023
    - The Altman-Musk trial is scheduled to begin in less than a year
    A federal judge set a March trial in Elon Musk's challenge to Sam Altman's plans to overhaul OpenAI's business structure, setting the stage for a high-stakes clash between the two billionaires. US District Judge Yvonne Gonzalez Rogers in Oakland, California, put the trial on her calendar for March 16 during a hearing Friday, after previously pledging to fast-track it rather than let it linger until 2027. Gonzalez Rogers last month rejected Musk's request to temporarily pause the ChatGPT maker's transformation from a nonprofit to a more conventional, public benefit for-profit company. But she called for an expedited trial on the core claim from Musk's 2024 lawsuit: that OpenAI's restructuring plan is unlawful. Having a firm trial date could impact decisions made by OpenAI's board on choosing a strategy to shift to a for-profit business model. It's also possible that the trial will begin after OpenAI has completed this shift: the startup is already in talks with officials in Delaware and California over its restructuring plans, and the size of its latest funding round is partly dependent on completing its restructuring process by the end of 2025. Musk and Altman worked together to found OpenAI in 2015. Musk now claims that OpenAI retreated from its founding purpose as a charity when it accepted billions of dollars in backing from Microsoft Corp. starting in 2019, the year after he left OpenAI's board. The world's richest person launched his own artificial intelligence startup in 2023, and in late March xAI acquired the X social media platform, which he also controls, giving the new combined entity, called XAI Holdings, a value of more than $100 billion (roughly Rs. 8,55,000 crore), Bloomberg News reported. OpenAI has denied Musk's legal claims and has argued that his real agenda in the court fight is to advance xAI. It has asked the judge to dismiss Musk's suit as a bid to undermine a successful competitor after he was unable to seize control of OpenAI. OpenAI said earlier this week that it finalized a $40 billion (roughly Rs. 3,42,000 crore) funding round led by SoftBank Group Corp. The deal values the company at $300 billion (roughly Rs. 25,65,000 crore) including dollars raised, almost double the ChatGPT maker's previous valuation of $157 billion (roughly Rs. 13,42,350 crore) from when it raised money in October. If OpenAI doesn't complete its restructuring by the end of 2025, however, SoftBank would be able to reduce the amount of funding it's contributing to the round from $30 billion (roughly Rs. 2,56,500 crore) to $20 billion (roughly Rs. 1,71,000 crore), as Bloomberg has reported, while OpenAI would have the option to find other investors. © 2025 Bloomberg L.P.
  • Transformers Unleashed, Part 4: Fine-Tuning for Custom Text Classification
    medium.com
    Transformers Unleashed, Part 4: Fine-Tuning for Custom Text Classification

    Welcome back! In the previous parts, we explored how to use pre-trained Transformer models directly via the Hugging Face pipeline for tasks like classification, NER, and QA. This is powerful, but what if you have a specific dataset or a niche task where the general-purpose models don't perform optimally?

    This is where fine-tuning comes in! It's arguably one of the most important techniques in the modern NLP toolkit. In this post, we'll learn:

    - What fine-tuning is and why it's so effective.
    - The step-by-step workflow for fine-tuning a Transformer.
    - How to use the Hugging Face datasets, transformers (specifically the Trainer API), and evaluate libraries to fine-tune a model for text classification.

    What is Fine-Tuning?

    Imagine you have a highly skilled linguist (the pre-trained Transformer) who understands grammar, semantics, and general world knowledge from reading billions of sentences online. Now, you want this linguist to become an expert specifically in classifying customer reviews for your product. Instead of teaching them language from scratch (which would take immense data and time), you fine-tune them: you take their existing knowledge and train them a little more, but only on examples relevant to your specific task (your customer reviews).
    During fine-tuning, the model's parameters (weights) are adjusted slightly to specialize in the nuances of your dataset.

    Why Fine-Tune?

    - Leverages pre-training: it builds upon the vast knowledge already captured by large language models.
    - Data efficiency: it requires significantly less task-specific data compared to training from scratch.
    - Performance: it often leads to state-of-the-art results on specific downstream tasks.
    - Adaptability: it allows adapting powerful general models to specialized domains or datasets.

    The Fine-Tuning Workflow

    Fine-tuning typically involves these steps:

    1. Load dataset: get your task-specific labeled data (e.g., text and corresponding labels).
    2. Load pre-trained model & tokenizer: choose a base Transformer model (like BERT, DistilBERT, RoBERTa) suitable for your task and load it along with its tokenizer.
    3. Preprocess/tokenize dataset: convert your text data into the numerical format the model expects using the tokenizer.
    4. Define training arguments: specify parameters for the training process (learning rate, batch size, epochs, output directory, etc.).
    5. Define evaluation metrics: choose how to measure your model's performance (e.g., accuracy, F1-score).
    6. Instantiate the Trainer: use the Hugging Face Trainer class, which handles the training loop, evaluation, and logging.
    7. Train (fine-tune): start the fine-tuning process.
    8. Evaluate: assess the performance of your fine-tuned model on a separate test set.

    Step-by-Step Implementation

    Let's fine-tune a model for sentiment classification. We'll use the datasets library to load data, transformers for the model and trainer, and evaluate for metrics.

    (Note: Fine-tuning requires computational resources. Running this code might take time, and using a GPU (available on platforms like Google Colab) is highly recommended.)

    ```python
    # --- 1. Setup & Imports ---
    # Make sure necessary libraries are installed:
    # !pip install transformers datasets evaluate torch -q
    # !pip install 'accelerate>=0.26.0' -q
    # 'accelerate' helps manage training on different hardware (CPU/GPU/TPU)
    import datasets
    import transformers
    import evaluate
    import torch
    import numpy as np
    from transformers import pipeline

    print(f"Using Datasets version: {datasets.__version__}")
    print(f"Using Transformers version: {transformers.__version__}")
    print(f"Using Evaluate version: {evaluate.__version__}")
    print(f"Using PyTorch version: {torch.__version__}")

    # Check if GPU is available
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    print(f"Using device: {device}")

    # --- 2. Load Dataset ---
    # We'll use a small subset of the IMDB dataset for faster demonstration.
    # You can replace 'imdb' with other datasets like 'sst2' from GLUE, or load your own CSV/JSON.
    print("\nLoading dataset...")
    # Load only 1000 examples for training and 1000 for testing to speed things up
    train_ds = datasets.load_dataset("imdb", split="train[:1000]")
    eval_ds = datasets.load_dataset("imdb", split="test[:1000]")

    # Inspect the dataset
    print("\nDataset structure:")
    print(train_ds)
    print("\nExample entry:")
    print(train_ds[0])

    # --- 3. Load Pre-trained Model & Tokenizer ---
    # Choose a base model. DistilBERT is smaller and faster than BERT.
    model_name = "distilbert-base-uncased"
    print(f"\nLoading tokenizer and model for: {model_name}")
    tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
    # Load the model for sequence classification.
    # num_labels should match the number of unique labels in our dataset (positive/negative -> 2)
    model = transformers.AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
    model.to(device)  # Move model to the GPU if available

    # --- 4. Preprocess/Tokenize Dataset ---
    print("\nTokenizing dataset...")

    # Create a function to tokenize the text
    def tokenize_function(examples):
        # padding='max_length' pads to the model's max input size
        # truncation=True cuts sequences longer than max length
        return tokenizer(examples["text"], padding="max_length", truncation=True)

    # Apply the tokenization function to the entire dataset using .map()
    # batched=True processes multiple examples at once for speed
    tokenized_train_ds = train_ds.map(tokenize_function, batched=True)
    tokenized_eval_ds = eval_ds.map(tokenize_function, batched=True)

    # Remove the original 'text' column as it's no longer needed
    tokenized_train_ds = tokenized_train_ds.remove_columns(["text"])
    tokenized_eval_ds = tokenized_eval_ds.remove_columns(["text"])

    # Rename the 'label' column to 'labels' (expected by the Trainer)
    tokenized_train_ds = tokenized_train_ds.rename_column("label", "labels")
    tokenized_eval_ds = tokenized_eval_ds.rename_column("label", "labels")

    # Set the format to PyTorch tensors
    tokenized_train_ds.set_format("torch")
    tokenized_eval_ds.set_format("torch")

    print("\nTokenized dataset example:")
    print(tokenized_train_ds[0])

    # --- 5. Define Evaluation Metrics ---
    print("\nDefining evaluation metric (accuracy)...")
    metric = evaluate.load("accuracy")

    def compute_metrics(eval_pred):
        logits, labels = eval_pred
        # Get the predictions by finding the index with the highest logit
        predictions = np.argmax(logits, axis=-1)
        # Compute the accuracy
        return metric.compute(predictions=predictions, references=labels)

    # --- 6. Define Training Arguments ---
    print("\nDefining training arguments...")
    training_args = transformers.TrainingArguments(
        output_dir="./results",            # Directory to save the model and logs
        num_train_epochs=1,                # Reduced epochs for a faster demo (usually 3-5)
        per_device_train_batch_size=8,     # Reduce batch size if memory is limited
        per_device_eval_batch_size=8,
        warmup_steps=100,                  # Number of steps for learning rate warmup
        weight_decay=0.01,                 # Strength of weight decay regularization
        logging_dir="./logs",              # Directory for storing logs
        logging_steps=10,                  # Log training info every 10 steps
        eval_strategy="epoch",             # Renamed from evaluation_strategy in recent transformers
        save_strategy="epoch",             # Save model checkpoint at the end of each epoch
        load_best_model_at_end=True,       # Load the best model found during training
        metric_for_best_model="accuracy",  # Use accuracy to determine the best model
        # push_to_hub=False,               # Set to True to upload the model to Hugging Face Hub
    )

    # --- 7. Instantiate the Trainer ---
    print("\nInstantiating Trainer...")
    trainer = transformers.Trainer(
        model=model,                        # The model to train
        args=training_args,                 # Training arguments
        train_dataset=tokenized_train_ds,   # Training dataset
        eval_dataset=tokenized_eval_ds,     # Evaluation dataset
        compute_metrics=compute_metrics,    # Function to compute metrics
        tokenizer=tokenizer,                # Tokenizer (needed for padding/batching)
    )

    # --- 8. Train (Fine-tune) ---
    print("\nStarting fine-tuning...")
    try:
        train_result = trainer.train()
        print("\nFine-tuning finished.")
        # Log some training metrics
        metrics = train_result.metrics
        trainer.log_metrics("train", metrics)
        trainer.save_metrics("train", metrics)
        trainer.save_state()  # Saves trainer state (important for resuming)
        trainer.save_model("./results/best_model")  # Explicitly save the best model
    except Exception as e:
        print(f"\nAn error occurred during training: {e}")
        print("Fine-tuning might require significant resources (GPU recommended).")

    # --- 9. Evaluate ---
    print("\nEvaluating the fine-tuned model...")
    try:
        eval_metrics = trainer.evaluate()
        print("\nEvaluation results:")
        print(eval_metrics)
        trainer.log_metrics("eval", eval_metrics)
        trainer.save_metrics("eval", eval_metrics)
    except Exception as e:
        print(f"\nAn error occurred during evaluation: {e}")

    # --- 10. Using the Fine-Tuned Model (Example) ---
    print("\nUsing the fine-tuned model for inference...")
    try:
        # Load the fine-tuned model using pipeline for simplicity.
        # Make sure to specify the directory where the best model was saved.
        fine_tuned_pipeline = pipeline(
            "sentiment-analysis",
            model="./results/best_model",
            device=0 if torch.cuda.is_available() else -1,
        )
        test_text_positive = "This is the best movie I have seen in years!"
        test_text_negative = "What a waste of time, the plot was terrible."
        print(f"Test Positive: '{test_text_positive}' -> Prediction: {fine_tuned_pipeline(test_text_positive)}")
        print(f"Test Negative: '{test_text_negative}' -> Prediction: {fine_tuned_pipeline(test_text_negative)}")
    except Exception as e:
        print(f"\nCould not load or run inference with the fine-tuned model: {e}")
        print("Ensure the model was saved correctly in './results/best_model'.")
    ```

    Explanation:

    1. Setup: import the necessary libraries and check for GPU availability.
    2. Load dataset: we use load_dataset to fetch a small portion of the IMDB sentiment dataset.
    3. Load model/tokenizer: we load distilbert-base-uncased using AutoTokenizer and AutoModelForSequenceClassification, specifying num_labels=2 for our binary (positive/negative) classification task.
    4. Tokenize: we define a function to tokenize the text and use .map() for efficient batch processing. We also rename/remove columns as needed by the Trainer.
    5. Metrics: we load the accuracy metric from evaluate and define a compute_metrics function that the Trainer will call during evaluation.
    6. Training arguments: TrainingArguments holds all hyperparameters and settings for the training run (output directories, batch sizes, epochs, learning rate, logging, saving strategies, etc.).
    7. Trainer: we instantiate the Trainer, bringing together the model, arguments, datasets, tokenizer, and metrics function.
    8. Train: trainer.train() kicks off the fine-tuning process. The Trainer handles the optimization loop, gradient updates, learning rate scheduling, and saving checkpoints.
    9. Evaluate: trainer.evaluate() runs the model on the evaluation dataset and computes the metrics using our compute_metrics function.
    10. Inference: we show how to load the fine-tuned model (saved by the Trainer in the output_dir) using pipeline and test it on new examples.

    Discussion

    Fine-tuning is a powerful technique for adapting large pre-trained models to your specific needs. The Hugging Face Trainer API simplifies this process significantly by abstracting away much of the boilerplate code involved in writing a training loop.

    - Resources: fine-tuning, even on small datasets/epochs, can be computationally intensive. Using a GPU is strongly recommended. Adjust batch sizes (per_device_train_batch_size) based on your available GPU memory.
    - Hyperparameters: finding the best TrainingArguments (learning rate, epochs, weight decay, etc.) often requires experimentation.
    - Dataset choice: the quality and size of your task-specific dataset heavily influence the fine-tuning results.
    - Base model choice: different base models (BERT, DistilBERT, RoBERTa, etc.) have different sizes, speeds, and performance characteristics. Choose one appropriate for your needs and resources.

    What's Next?

    Congratulations! You've learned how to fine-tune a Transformer model, a cornerstone of practical NLP.
    In Part 5: Bridging Languages with Machine Translation, we'll shift gears back to using pre-trained models but explore a different architecture (Sequence-to-Sequence) designed for tasks like translation. Stay tuned!
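    The freeze-and-specialize intuition behind fine-tuning can be sketched without any Transformer at all: keep a "pre-trained" feature extractor frozen and train only a small head on task data. A toy NumPy sketch (the random projection, data, and names here are illustrative stand-ins, not part of the tutorial's pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained encoder": a frozen random projection standing in for the Transformer body.
W_frozen = rng.normal(size=(16, 8))

def encode(x):
    return np.tanh(x @ W_frozen)  # frozen features: never updated during "fine-tuning"

# Tiny labeled "task dataset" (hypothetical stand-in for customer reviews)
X = rng.normal(size=(64, 16))
y = (X[:, 0] > 0).astype(float)

# Classification head: the only parameters we adjust
w = np.zeros(8)
b = 0.0
lr = 0.5

losses = []
for step in range(200):
    h = encode(X)                        # features from the frozen encoder
    p = 1 / (1 + np.exp(-(h @ w + b)))   # head's predicted probability
    losses.append(-np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9)))
    w -= lr * h.T @ (p - y) / len(y)     # gradient step on the head only
    b -= lr * np.mean(p - y)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

    Real fine-tuning with the Trainer usually updates all of the model's weights (just gently, at a small learning rate), but the economics are the same: the expensive general-purpose representation is reused, and only a little task-specific training is needed on top.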
  • Kirby Air Riders is Developed by Bandai Namco, Sakurai Reveals
    gamingbolt.com
    Kirby Air Riders was one of several first-party titles that Nintendo announced for the Switch 2 during its recent Direct presentation, though it revealed little about the game with a brief CG teaser. We did, however, get confirmation that Masahiro Sakurai, the Super Smash Bros. mastermind who also directed 2003's Kirby Air Ride, is returning 22 years on to direct the sequel. Now, Sakurai himself has revealed another crucial tidbit. Taking to Twitter, Sakurai recently revealed that Kirby Air Riders is going to be developed by Bandai Namco Studios. Of course, this isn't the first time that Bandai Namco will develop a Masahiro Sakurai-directed title, having led development on Super Smash Bros. for Wii U and 3DS, as well as Super Smash Bros. Ultimate. Bandai Namco has also worked on other first-party Nintendo titles in the form of Mario Kart 7, Mario Kart Tour, Mario Sports Superstars, and ARMS. In 2023, Bandai Namco formally announced that it had established Studio 2 and Studio S, a team dedicated to working on projects commissioned by Nintendo. Kirby Air Riders will launch exclusively for the Nintendo Switch 2 sometime this year.
  • Bossa Games' Upcoming Lost Skies Officially Due To Arrive In Early Access Later This Month
    wccftech.com
    Lost Skies, the upcoming survival adventure co-op game that involves flying from one airborne island to the next, has a date set for its Steam early access release: April 18, 2025. The news came from developer Bossa Games with a release date trailer showing off more gameplay and beauty shots for the game. A playable build of Lost Skies was available during February's Steam Next Fest, and according to a press release from Bossa Games, it was one of the more popular titles during the festival. An uplifting note for the studio, to be sure, after Bossa Games announced layoffs at the top of the year. It wasn't clear how many developers were laid off at the time, just that there were 40 team members left at Bossa Games to get Lost Skies across the finish line. It'll be interesting to see how the game's launch goes. Early access games aren't expected to be perfect at launch, but it certainly couldn't have been easy to prepare Lost Skies for April 18 with fewer people working on the game. At the time, Bossa Games co-chief executive officer Henrique Olifiers said that the layoff was so the team could focus on "the late-stage production of Lost Skies and its upcoming launch, ensuring the game is successfully released and evolved for its players for the foreseeable future." A new gameplay trailer had been showcased for Lost Skies only two weeks ahead of the layoff announcement. That said, its popularity during Next Fest adds to the proof that Lost Skies is striking a chord with players. For Bossa Games' sake, hopefully, the early access launch will continue the momentum the Next Fest demo kicked up. The team aims to launch version 1.0 before the end of the year, though of course, this will depend on player feedback.
  • Astro Bot PS5 Console Bundle Deals Still Available At Amazon
    www.gamespot.com
    PS5 Slim Astro Bot Console Bundle - $449 (Amazon, Walmart, Target, Best Buy, PS Direct)
    PS5 Slim Digital Astro Bot Console Bundle - $399 (Amazon, Walmart, Target, Best Buy, PS Direct)
    If you're in the market for a PlayStation 5 console, now might be the time to buy one. With Nintendo delaying Switch 2 preorders in the US to evaluate the impact of newly imposed tariffs on imported goods, future pricing for all products, including other video game hardware, could change in the coming months. PlayStation already raised the price of the PS5 in Japan last year, so a price increase in the US wouldn't be much of a surprise--even without factoring in the tariff situation. But there's another good reason to buy a PlayStation 5 right now: you can get a pretty awesome deal. PlayStation released PS5 Slim and PS5 Slim Digital Astro Bot bundles in March. These bundles are priced $50 lower than the console on its own, and you get a copy of one of the best PS5 exclusives. You're essentially saving $110 versus buying the console and Astro Bot ($60) separately. Amazon and Walmart are selling the bundles for $1 less than other retailers. These PS5 bundles will only be available for a limited time. Continue Reading at GameSpot
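    The $110 figure follows from two numbers in the article: the bundle is $50 below the standalone console's price, and Astro Bot sells for $60 on its own. A trivial check:

```python
bundle = 449
console_alone = bundle + 50      # bundle is "$50 lower than the console on its own"
savings = (console_alone + 60) - bundle  # console + Astro Bot ($60) vs. the bundle
print(savings)  # matches the article's $110
```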