• WWW.MARKTECHPOST.COM
    Researchers from Caltech, Meta FAIR, and NVIDIA AI Introduce Tensor-GaLore: A Novel Method for Efficient Training of Neural Networks with Higher-Order Tensor Weights
    Advancements in neural networks have driven significant progress across domains like natural language processing, computer vision, and scientific computing. Despite these successes, the computational cost of training such models remains a key challenge. Neural networks often employ higher-order tensor weights to capture complex relationships, but these introduce memory inefficiencies during training. In scientific computing especially, tensor-parameterized layers used for modeling multidimensional systems, such as solvers for partial differential equations (PDEs), require substantial memory for optimizer states. Flattening tensors into matrices for optimization can discard important multidimensional information, limiting both efficiency and performance. Addressing these issues requires solutions that maintain model accuracy.

    To address these challenges, researchers from Caltech, Meta FAIR, and NVIDIA AI developed Tensor-GaLore, a method for efficient neural network training with higher-order tensor weights. Tensor-GaLore operates directly in the high-order tensor space, using tensor factorization to compress gradients during training. Unlike earlier methods such as GaLore, which relied on matrix operations via singular value decomposition (SVD), Tensor-GaLore employs Tucker decomposition to project gradients into a low-rank subspace. By preserving the multidimensional structure of tensors, this approach improves memory efficiency and supports applications like Fourier Neural Operators (FNOs).

    FNOs are a class of models designed for solving PDEs. They leverage spectral convolution layers involving higher-order tensors to represent mappings between function spaces. Tensor-GaLore addresses the memory overhead caused by Fourier coefficients and optimizer states in FNOs, enabling efficient training for high-resolution tasks such as the Navier-Stokes and Darcy flow equations.

    Technical Details and Benefits of Tensor-GaLore
    Tensor-GaLore's core innovation is its use of Tucker decomposition on gradients during optimization. This decomposition breaks a tensor into a core tensor and orthogonal factor matrices along each mode. Key benefits of this approach include:
    - Memory efficiency: Tensor-GaLore projects gradients into low-rank subspaces, achieving memory savings of up to 75% for optimizer states.
    - Preservation of structure: Unlike matrix-based methods that collapse tensor dimensions, Tensor-GaLore retains the original tensor structure, preserving spatial, temporal, and channel-specific information.
    - Implicit regularization: The low-rank tensor approximation helps prevent overfitting and supports smoother optimization.
    - Scalability: Features like per-layer weight updates and activation checkpointing reduce peak memory usage, making it feasible to train large-scale models.
    Theoretical analysis establishes Tensor-GaLore's convergence and stability.
Its mode-specific rank adjustments provide flexibility and often outperform traditional low-rank approximation techniques.

Results and Insights
Tensor-GaLore has been tested on various PDE tasks, showing notable improvements in performance and memory efficiency:
- Navier-Stokes equations: At 1024x1024 resolution, Tensor-GaLore reduced optimizer memory usage by 76% while maintaining performance comparable to baseline methods.
- Darcy flow problem: Experiments showed a 48% improvement in test loss at a 0.25 rank ratio, alongside significant memory savings.
- Electromagnetic wave propagation: Tensor-GaLore improved test accuracy by 11% and reduced memory consumption, proving effective for complex multidimensional data.

Conclusion
Tensor-GaLore offers a practical solution for memory-efficient training of neural networks with higher-order tensor weights. By leveraging low-rank tensor projections and preserving multidimensional relationships, it addresses key limitations in scaling models for scientific computing and other domains. Its demonstrated success on PDEs, through memory savings and performance gains, makes it a valuable tool for advancing AI-driven scientific discovery. As computational demands grow, Tensor-GaLore provides a pathway to more efficient and accessible training of complex, high-dimensional models.

Check out the Paper. All credit for this research goes to the researchers of this project.
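For readers who want a concrete feel for the core operation, here is a minimal, illustrative sketch of Tucker-based gradient projection in Python using TensorLy. It is not the authors' released implementation; the function names and the rank_ratio knob are assumptions standing in for the paper's mode-specific rank choices.

import torch
import tensorly as tl
from tensorly.decomposition import tucker

tl.set_backend("pytorch")

def project_gradient(grad: torch.Tensor, rank_ratio: float = 0.25):
    # Pick per-mode ranks as a fraction of each dimension (rank_ratio is a
    # hypothetical knob standing in for mode-specific rank selection).
    ranks = [max(1, int(dim * rank_ratio)) for dim in grad.shape]
    core, factors = tucker(grad, rank=ranks)  # low-rank Tucker factorization
    return core, factors

def expand_update(core, factors) -> torch.Tensor:
    # Map the low-rank update back to the full parameter shape.
    return tl.tucker_to_tensor((core, factors))

# The memory savings come from keeping optimizer state (e.g., Adam moments)
# only for the small core and factor matrices rather than the full gradient.
grad = torch.randn(16, 16, 32, 32)  # e.g., a spectral-convolution weight gradient
core, factors = project_gradient(grad)
update = expand_update(core, factors)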
  • TOWARDSAI.NET
    How to Fine-Tune Language Models: First Principles to Scalable Performance
    Author(s): Ehssan
    Originally published on Towards AI.
    Image by Author

    In this article, we'll explore the process of fine-tuning language models for text classification. We'll do so in three levels: first, by manually adding a classification head in PyTorch* and training the model so you can see the full process; second, by using the Hugging Face* Transformers library to streamline the process; and third, by leveraging PyTorch Lightning* and accelerators to optimize training performance. By the end of this guide, you'll have a well-rounded understanding of the fine-tuning workflow.

    Introduction
    The idea of fine-tuning in Natural Language Processing (NLP) was borrowed from Computer Vision (CV). CV models were first trained on large datasets such as ImageNet to teach them the basic features of images, such as edges or colors. These pretrained models were then fine-tuned on a downstream task, such as classifying birds, with a relatively small number of labeled examples. Fine-tuned models typically achieved higher accuracy than supervised models trained from scratch on the same amount of labeled data.

    Despite the popularity and success of transfer learning in CV, for many years it wasn't clear what the analogous pretraining process was for NLP. Consequently, NLP applications required large amounts of labeled data to achieve high performance.

    How Is Fine-Tuning Different from Pretraining?
    With pretraining, language models gain a general understanding of language. During this process, they learn language patterns but typically are not capable of following instructions or answering questions. In the case of GPT models, this self-supervised learning involves predicting the next word (unidirectional) based on the training data, which is often web pages. In the case of BERT (Bidirectional Encoder Representations from Transformers), learning involves predicting randomly masked words (bidirectional) and next-sentence prediction. But how can we adapt language models for our own data or our own tasks?

    Fine-tuning continues training a pretrained model to increase its performance on specific tasks. For instance, through instruction fine-tuning you can teach a model to behave more like a chatbot. This is the process for specializing a general-purpose model like OpenAI* GPT-4 into an application like ChatGPT* or GitHub* Copilot. By fine-tuning your own language model, you can increase the reliability, performance, and privacy of your model while reducing the associated inference costs compared to subscription-based services, especially if you have a large volume of data or frequent requests.

    Fine-Tuning a Language Model for Text Classification

    Preprocessing and Preparing the DataLoader
    Feel free to skip this section if you're comfortable with preprocessing data. Throughout, we assume that our labeled data is saved in train, validation, and test CSV files, each with a text and a label column. For training, the labels should be numeric; if that's not the case, you can use a label_to_id dictionary such as {"negative": 0, "positive": 1} and do a mapping to get the desired format, as in the short sketch below.
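A minimal version of that mapping, assuming the raw labels are strings and the column is named label (adjust both to your data), could look like this:

import pandas as pd

label_to_id = {"negative": 0, "positive": 1}  # extend with all classes in your data

train_df = pd.read_csv("train.csv")
train_df["label"] = train_df["label"].map(label_to_id)  # map string labels to integer ids
train_df.to_csv("train.csv", index=False)  # repeat for val.csv and test.csv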
For concreteness, we will use BERT as the base model and set the number of classification labels to 4. After running the code below, you are encouraged to swap BERT for DistilBERT, which reduces the size of the BERT model by 40% and speeds up inference by 60%, while retaining 97% of BERT's language understanding capabilities.

A Quick Look at BERT
BERT was introduced by Google in 2018 and has since revolutionized the field of NLP. Unlike traditional models that process text in a unidirectional manner, BERT is designed to understand the context of a word in a sentence by looking at both its left and right surroundings. This bidirectional approach allows BERT to capture the nuances of language more effectively.

Key Features of BERT
- Pretraining: BERT is pretrained on a massive corpus of text, including the entire Wikipedia and BookCorpus. The pretraining involves two tasks: Masked Language Modeling (MLM) and Next Sentence Prediction (NSP).
- Architecture: BERT_BASE has 12 layers (transformer blocks), 768 hidden units, and 12 attention heads, totaling 110 million parameters.

You can run this tutorial on Intel Tiber AI Cloud, using an Intel Xeon CPU instance. This platform provides ample computing resources for smooth execution of our code.

import os
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import AutoTokenizer
import pandas as pd

# Parameters
model_ckpt = "bert-base-uncased"
num_labels = 4
batch_size = 16
num_workers = 6

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)

# Custom Dataset class
class TextDataset(Dataset):
    def __init__(self, dataframe, tokenizer, max_length=512):
        self.data = dataframe
        self.tokenizer = tokenizer
        self.max_length = max_length

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        row = self.data.iloc[idx]
        text = row["text"]  # Replace "text" with your actual column name for text
        label = row["label"]  # Replace "label" with your actual column name for labels

        # Tokenize the input text
        encoding = self.tokenizer(
            text,
            max_length=self.max_length,
            padding="max_length",
            truncation=True,
            return_tensors="pt",
        )

        return {
            "input_ids": encoding["input_ids"].squeeze(0),  # Remove batch dimension with squeeze
            "attention_mask": encoding["attention_mask"].squeeze(0),
            "label": torch.tensor(label, dtype=torch.long),
        }

os.environ["TOKENIZERS_PARALLELISM"] = "false"

# Load csv files
train_df = pd.read_csv("train.csv")
val_df = pd.read_csv("val.csv")
test_df = pd.read_csv("test.csv")

# Create Dataset objects
train_dataset = TextDataset(train_df, tokenizer)
val_dataset = TextDataset(val_df, tokenizer)
test_dataset = TextDataset(test_df, tokenizer)

# Create DataLoaders
train_loader = DataLoader(
    train_dataset, batch_size=batch_size, shuffle=True, num_workers=num_workers
)
val_loader = DataLoader(val_dataset, batch_size=batch_size, num_workers=num_workers)
test_loader = DataLoader(test_dataset, batch_size=batch_size, num_workers=num_workers)

The Classification Token [CLS]
The [CLS] token is typically added at the beginning of the input sequence in transformer-based models such as BERT and its variants. During fine-tuning, the model learns to assign meaningful information to the [CLS] token, which aggregates the input sequence's context. The last hidden state corresponding to the [CLS] token is then used as a representation of the entire input, which can be passed through a classifier layer for downstream tasks like sentiment analysis, topic categorization, or any task requiring a decision based on the entire sequence. This mechanism allows the model to focus on both the global understanding of the text and task-specific features for accurate predictions.

Unlike traditional models that may rely on static embeddings (like word2vec), transformers generate contextualized embeddings, so that the meaning of a token depends on the tokens around it.
The [CLS] token, as it passes through the layers, becomes increasingly aware of the entire sequence's meaning, which makes it a good summary representation for downstream tasks. For some tasks, especially those requiring finer-grained understanding, other strategies might be employed. For instance, for document classification, where every word contributes equally, some models use mean pooling over all token embeddings.

Level 1: PyTorch
In this section, we manually add a classification head to the base model and do the fine-tuning. We achieve this using the AutoModel class, which converts the tokens (or rather, token encodings) to embeddings and then feeds them through the encoder stack to return the hidden states. While AutoModel is helpful for understanding the idea behind what we're doing, to fine-tune for text classification it's better practice to work with AutoModelForSequenceClassification instead, as we discuss below.

import torch
from torch import nn
from transformers import AutoModel

# Load the base model with AutoModel and add a classifier
class CustomModel(nn.Module):
    def __init__(self, model_ckpt, num_labels):
        super(CustomModel, self).__init__()
        self.model = AutoModel.from_pretrained(model_ckpt)  # Base transformer model
        self.classifier = nn.Linear(
            self.model.config.hidden_size, num_labels
        )  # Classification head. The 1st parameter equals 768 for BERT as discussed above

    def forward(self, input_ids, attention_mask):
        # Forward pass through the transformer model
        outputs = self.model(input_ids=input_ids, attention_mask=attention_mask)
        # Use the [CLS] token (0-th token in the sequence) for classification
        cls_output = outputs.last_hidden_state[
            :, 0, :
        ]  # Shape: (batch_size, hidden_size)
        # Pass through the classifier head
        logits = self.classifier(cls_output)
        return logits

# Initialize the model
model = CustomModel(model_ckpt, num_labels)

# Loss function and optimizer
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)

# Training function
def train(model, optimizer, train_loader, loss_fn):
    model.train()
    total_loss = 0
    for batch in train_loader:
        optimizer.zero_grad()
        # Unpack the batch data
        input_ids = batch["input_ids"]
        attention_mask = batch["attention_mask"]
        label = batch["label"]
        # Forward pass
        output = model(input_ids, attention_mask)
        # Compute loss
        loss = loss_fn(output, label)
        loss.backward()
        # Update the model parameters
        optimizer.step()
        total_loss += loss.item()
    print(f"Train loss: {total_loss / len(train_loader):.2f}")

# Evaluation function
def evaluate(model, test_loader, loss_fn):
    model.eval()  # Set model to evaluation mode
    total_loss = 0
    total_acc = 0
    total_samples = 0
    with torch.no_grad():  # No gradient computation needed during evaluation
        for batch in test_loader:
            input_ids = batch["input_ids"]
            attention_mask = batch["attention_mask"]
            labels = batch["label"]
            # Forward pass
            output = model(input_ids, attention_mask)
            # Compute loss
            loss = loss_fn(output, labels)
            total_loss += loss.item()
            # Compute accuracy
            predictions = torch.argmax(output, dim=1)
            total_acc += torch.sum(predictions == labels).item()
            total_samples += labels.size(0)
    # Calculate average loss and accuracy
    avg_loss = total_loss / len(test_loader)
    avg_acc = total_acc / total_samples * 100
    print(f"Test loss: {avg_loss:.2f}, Test acc: {avg_acc:.2f}%")

Finally, we can train, evaluate, and save the model.

num_epochs = 3
for epoch in range(num_epochs):
    train(model, optimizer, train_loader, loss_fn)

evaluate(model, test_loader, loss_fn)

torch.save(model.state_dict(), "./fine-tuned-model.pt")
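Although the walkthrough stops at saving the weights, it may help to see how the fine-tuned Level 1 model would be used for a single prediction. The snippet below is an assumed usage example, not part of the original tutorial; the sample text is hypothetical.

# Illustrative inference step for the custom model defined above
model.eval()
sample_text = "This product exceeded my expectations."  # hypothetical input
encoding = tokenizer(
    sample_text,
    max_length=512,
    padding="max_length",
    truncation=True,
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(encoding["input_ids"], encoding["attention_mask"])
predicted_class = torch.argmax(logits, dim=1).item()
print(f"Predicted label id: {predicted_class}")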
Level 2: Hugging Face Transformers
Now, we use the convenience of the AutoModelForSequenceClassification class, which adds the classification head to the base model automatically. Compare this against what we did with the AutoModel class in the previous section! Also note that the Trainer class from Hugging Face's Transformers library can directly handle Dataset objects without needing a DataLoader, as it automatically handles batching and shuffling for you.

from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

model = AutoModelForSequenceClassification.from_pretrained(
    model_ckpt, num_labels=num_labels
)

training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    warmup_steps=500,
    weight_decay=0.01,
    logging_dir="./logs",
    logging_steps=10,  # Log every 10 steps
    evaluation_strategy="steps",
    save_steps=500,  # Save model checkpoint every 500 steps
    load_best_model_at_end=True,  # Load the best model at the end of training
    metric_for_best_model="accuracy",
)

# Train the model
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=test_dataset,
)
trainer.train()

trainer.evaluate(test_dataset)
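One detail worth flagging: TrainingArguments sets metric_for_best_model="accuracy", but the Trainer above never defines how accuracy is computed. In practice you would also pass a compute_metrics function; a minimal sketch, assuming a NumPy-based accuracy, looks like this:

import numpy as np

def compute_metrics(eval_pred):
    # The Trainer passes (logits, labels) for the evaluation set
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {"accuracy": (predictions == labels).mean()}

# Pass it when constructing the Trainer above:
# trainer = Trainer(..., compute_metrics=compute_metrics)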
Level 3: PyTorch Lightning
Lightning is, in the words of its documentation, "the deep learning framework with batteries included" for professional AI researchers and machine learning engineers who need maximal flexibility while super-charging performance at scale. As we shall see, with a bit of additional organizational code, the Lightning Trainer automates the following:
- Epoch and batch iteration
- optimizer.step(), loss.backward(), and optimizer.zero_grad() calls
- Calling model.eval(), and enabling and disabling gradients during evaluation
- Checkpoint saving and loading
- Logging
- Accelerator, multi-GPU, and TPU support (no .to(device) calls required)
- Mixed-precision training

You can accelerate training with Intel Gaudi processors, which allow you to conduct more deep learning training at a lower expense. You can try an Intel Gaudi instance for free on Intel Tiber AI Cloud.

import torchmetrics
import lightning as L
from lightning.pytorch.callbacks import ModelCheckpoint
from lightning.pytorch.loggers import TensorBoardLogger
from transformers import AutoModelForSequenceClassification

# A LightningModule is a torch.nn.Module with added functionality.
# It wraps around a regular PyTorch model.
class LightningModel(L.LightningModule):
    def __init__(self, model, learning_rate=5e-5):
        super().__init__()
        self.learning_rate = learning_rate
        self.model = model
        self.val_acc = torchmetrics.Accuracy(task="multiclass", num_classes=num_labels)
        self.test_acc = torchmetrics.Accuracy(task="multiclass", num_classes=num_labels)

    def forward(self, input_ids, attention_mask, labels):
        return self.model(input_ids, attention_mask=attention_mask, labels=labels)

    def _shared_step(self, batch, batch_idx):
        outputs = self(
            batch["input_ids"],
            attention_mask=batch["attention_mask"],
            labels=batch["label"],
        )
        return outputs

    def training_step(self, batch, batch_idx):
        outputs = self._shared_step(batch, batch_idx)
        self.log("train_loss", outputs["loss"])
        return outputs["loss"]

    def validation_step(self, batch, batch_idx):
        outputs = self._shared_step(batch, batch_idx)
        self.log("val_loss", outputs["loss"], prog_bar=True)
        logits = outputs["logits"]
        self.val_acc(logits, batch["label"])
        self.log("val_acc", self.val_acc, prog_bar=True)

    def test_step(self, batch, batch_idx):
        outputs = self._shared_step(batch, batch_idx)
        logits = outputs["logits"]
        self.test_acc(logits, batch["label"])
        self.log("accuracy", self.test_acc, prog_bar=True)

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=self.learning_rate)
        return optimizer

model = AutoModelForSequenceClassification.from_pretrained(
    model_ckpt, num_labels=num_labels
)
lightning_model = LightningModel(model)

callbacks = [
    ModelCheckpoint(save_top_k=1, mode="max", monitor="val_acc")  # Save top 1 model
]
logger = TensorBoardLogger(save_dir="./logs", name="fine-tuned-model")

trainer = L.Trainer(
    max_epochs=3,
    callbacks=callbacks,
    accelerator="hpu",
    precision="bf16-mixed",  # By default, HPU training uses 32-bit precision. To enable mixed precision, set the precision flag.
    devices="auto",
    logger=logger,
    log_every_n_steps=10,
)

trainer.fit(lightning_model, train_dataloaders=train_loader, val_dataloaders=val_loader)
trainer.test(lightning_model, train_loader, ckpt_path="best")
trainer.test(lightning_model, val_loader, ckpt_path="best")
trainer.test(lightning_model, test_loader, ckpt_path="best")

While the Transformers Trainer class supports distributed training, it doesn't offer the same level of integration and flexibility as Lightning when it comes to advanced features like custom callbacks, logging, and seamless scaling across multiple GPUs or nodes.

Practical Advice
Now that you're familiar with the fine-tuning process, you might wonder how you can apply it to your specific task. Here's some practical advice:
- Collect real data for your task, or generate synthetic data. See, for instance, Synthetic Data Generation with Language Models: A Practical Guide.
- Fine-tune a relatively small model.
- Evaluate your language model on your test set, and on a benchmark if one is available for your task.
- Increase the training dataset size, base model size, and, if necessary, task complexity.

Keep in mind that the standard or conventional fine-tuning of language models as described in this article can be expensive.
Rather than updating all the weights and biases, we could update only the last layer as follows:

# Freeze all layers
for param in model.parameters():
    param.requires_grad = False

# Unfreeze the classification head
# (pre_classifier exists on DistilBERT-style sequence-classification models;
# a plain BERT sequence-classification model only has the classifier layer)
for param in model.pre_classifier.parameters():
    param.requires_grad = True
for param in model.classifier.parameters():
    param.requires_grad = True

In future articles, we shall discuss more efficient fine-tuning techniques, so stay tuned! For more AI development how-to content, visit Intel AI Development.

Resources

Acknowledgments
The author thanks Jack Erickson for providing detailed feedback on an earlier draft of this work.

Suggested Reading

*Other names and brands may be claimed as the property of others.
  • WWW.IGN.COM
    AU Deals: $880 Off a Mega FPS Bundle, The Cheapest Copies of Sniper Elite Resistance, Kingdom Come II, and More!
    Hopefully you're still holidaying and if you are, you'll need games. Cheap ones. Along with the headliner deals that describe themselves, I'd like to highlight a pretty great preorder price for FF7 Rebirth on PC. I for one can't wait to see how brilliant it can look (and run) on some beefy hardware. That being said, let's move along and save a bunch more Gil on today's bargains below!In retro news, Im ensuring we all Dont Starve by baking a 10th birthday cake for said indie great. When I was famished for things to play on my newly acquired launch PS4, I was surprised when this out-of-nowhere roguelike nourished me for dozens of hours. Being a masochist, there was so much to love about thisthe complete lack of farming direction, the Burtonesque atmosphere of a storybook gone bad, and that increasingly bunghole-puckering stress of keeping shadow beasts and stomach grumbles at bay. Holds up quite well today, and you really ought to play this with a mate in the Dont Starve Together co-op variant.Sooo, in other words...do not starve? This Day in Gaming Aussie birthdays for notable games.- Don't Starve (PS4) 2014. GetTable of ContentsNice Savings for Nintendo SwitchSwitch OLED + SMB WonderThis looks and plays like the true next step for 2D Mario platformers. Wonder effects change each stage in both surprising and delightful ways, the Flower Kingdom makes for a vibrant and refreshing change of pace, and Elephant Mario steals the show. 9/10.Expiring Recent DealsNFS Hot Pursuit (-35%) - A$39Axiom Verge (-85%) - A$4.05Dragon's Dogma: Dark Arisen (-83%) - A$6.79Phoenix Wright Trilogy (-67%) - A$13.18Batman: Arkham Trilogy (-60%) - A$35.98Or gift a Nintendo eShop Card.Switch Console PricesHow much to Switch it up?Switch OLED + Mario Wonder: $539 $499 | Switch Original: $499 $448 | Switch OLED Black: $539 $479 | Switch OLED White: $539 $479 | Switch Lite: $329 $299 | Switch Lite Hyrule: $339 $299See itBack to topPurchase Cheap for PCHumble Choice Jan 2025One simple (and cancellable later) sign up fee to forever own the following: Against The Storm, Jagged Alliance 3, Blasphemous 2, Beneath Oresa, Fort Solis, Boxes: Lost Fragments, Dordogne, and The Pegasus Expedition.Borderlands Pandoras Box Col. (-94%) - A$52.28Chess Ultra (-70%) - A$5.55Road Redemption (-88%) - A$4.34FFVII Rebirth (-30%) - A$73.46Star Wars Bounty Hunter (-50%) - A$14.75Far Cry 5 (-85%) - A$13.49Far Cry 5 Gold (-85%) - A$20.24Far Cry 5 + New Dawn (-85%) - A$22.49Expiring Recent DealsGungrave G.O.R.E (-95%) - A$3.49Prince of Persia: TLC (-50%) - A$29.97Dragon Ball Z: Kakarot Del. (-77%) - A$27.58Far Cry 6 (-75%) - A$22.48LEGO Bricktales (-75%) - A$10.73Or just get a Steam Wallet CardPC Hardware PricesSlay your pile of shame.Official launch in NovSteam Deck 256GB LCD: $649 | Steam Deck 512GB OLED: $899 | Steam Deck 1TB OLED: $1,049See it at SteamBack to topExciting Bargains for XboxLords of the FallenLords of the Fallen is an awesome soulslike with a fantastic dual-realities premise. Its not perfect, but Id armour up for it at this price in a second. 8/10, Great.Expiring Recent DealsFC 25 (-65%) - A$39Powerwave Dual Charger (-27%) - A$29Tekken 8 (-61%) - A$47Hogwarts Legacy [XO](-64%) - A$36Batman Arkham Col. (-41%) - A$49.85Jojo's Bizarre Adventure: All Star (-25%) - A$55.73Assassins Creed Shadows (-19%) - A$89Or just invest in an Xbox Card.Xbox Console PricesHow many bucks for a 'Box? 
Series X: $799 $729 | Series S Black: $549 $545 | Series S White:$499 $471 | Series S Starter: N/ASee itBack to topPure Scores for PlayStationAstro BotFinally back in stock after a few weeks or so. Astro Bot made me smile from beginning to end. A collection of endlessly inventive levels and fantastically fun abilities, it delivers joy in spades, never once becoming even remotely dull or repetitive.Expiring Recent DealsPS+ Monthly FreebiesYours to keep from Jan 7 with this subscriptionSuicide Squad: KTJL [PS5]NFS Hot Pursuit Remastered [PS4]The Stanley Parable: Ultra [PS4/5]Or purchase a PS Store Card.What you'll pay to 'Station.PS5 Slim Disc:$799 $759 | PS5 Slim Digital:679 $678 | PS VR2: $899 | PS VR2 + Horizon: $899 | PS5 Pro $1,199 | PS Portal: $329See itBack to topLegit LEGO DealsAnimal Crossing: K.K.s ConcertComes with 3 minifigures (including K.K.) and a drivable toy camper van vehicle.Sonic: Knuckles Emerald Shrine (-42%) - A$35City: Fire Station (-34%) - A$99Botanicals Wreath (-29%) - A$119.95Expiring Recent DealsBack to top Adam Mathew is our Aussie deals wrangler. He plays practically everything, often on YouTube.
  • WWW.IGN.COM
    The Last of Us, Assassin's Creed Animation Support Studio Allegedly Harbored Crunch, Physical and Verbal Abuse
    Content Warning: The story contains details regarding physical and psychological abuse, as well as the death of a child.A new report from People Make Games has unearthed deeply disturbing allegations of workplace abuse at Brandoville Studios, an Indonesian animation support studio that has worked on games such as Assassin's Creed Shadows, Age of Empires 4, and The Last of Us: Part I Remake.Allegations of mistreatment at Brandoville Games were first reported by People Make Games back in 2021 as part of a broader story on how Western AAA studios effectively outsource crunch on major games to overseas developers, effectively sweeping it under the rug. However, People Make Games was urged to revisit its reporting on Brandoville more recently, after accusations went viral on Indonesia social media of continued crunch, as well as physical, verbal, and mental abuse.TheGamer reported on the allegations in September of last year, pointing at CEO Ken Lai's wife, Cherry Lai, as the instigator of much of the studio's worst issues. Numerous screenshots and videos verified by TheGamer as well as shared publicly across social media told a story of Lai's abusive behavior, which included sending threatening and insulting messages to staff, verbally berating and insulting them, and in at least one case, repeatedly physically assaulting an employee and ordering them to physically hurt themselves as "punishment" for poor performance. That employee, Christa Sydney, has shared much of her story publicly, including claiming Lai once slapped her head so hard it caused tinnitus, and at other times choking her, pushing her down the stairs, and forcing her to bang her head on the wall until she had a concussion.People Make Games' follow-up investigation goes further to verify and detail these claims, including sharing a video Sydney was allegedly forced to send to Lai of her slapping herself 100 times. In addition to Sydney, People Make Games' report details other accounts, including allegations that Lai pit employees against one another by verbally berating employees in the office or gossiping about them, forced employees to participate in Christian worship on a daily basis, and would insist on approving employee outfits every day before work. One former employee in the video claims Lai manipulated him into giving him a significant part of his salary.Another former employee, Syifana Afiati, tells a story of being overworked while pregnant to the point of being asked to work while she was in the hospital. Afiati's child was born prematurely, and she was asked to return to work just one month after giving birth, despite her maternity leave being three months and her son still being treated in intensive care. Her son died four months later. Three days after he passed away, Lai sent a message to company HR, strategizing on how to avoid supporting Afiati and deprive her of benefits while she was on leave.Ken Lai did not comment to People Make Games when asked, but Cherry Lai provided a statement to People Make Games: "To me, my part of the story is not important, as long as my team are good and safe now," but ghosted the outlet without much further commentary.Brandoville Studios was shut down last year. While Ken and Cherry Lai attempted to spin up a new company with some of the former Brandoville employees, LaiLai Studios it's unclear if the company is currently running or working on anything. As of September, Jakarta police were actively searching for Cherry Lai as part of an investigation into Brandoville. 
Lai suggested in an email to People Make Games that she may have fled to Hong Kong, but that has not yet been confirmed.IGN has reached out to Naughty Dog, Xbox, and Ubisoft for comment on their partnerships with Brandoville in light of this story and will update if a response is received.Rebekah Valentine is a senior reporter for IGN. You can find her posting on BlueSky @duckvalentine.bsky.social. Got a story tip? Send it to rvalentine@ign.com.
  • WWW.CNET.COM
    We Love These Ground-Breaking EV Solutions at CES 2025
    CES 2025 might be reinventing the wheel this year, and we saw, drove and towed the latest the EV industry has to offer. We've seen AI-enabled cars about to hit the production line, high-concept solar-powered electric vehicles and an EV motor that promises to fix one of the industry's most fundamental problems.While the lack of public charging, range-anxiety and EV's price point have been cited as reasons for slow adoption, many experts say electrified transportation is our future. At one of the biggest technology shows in the world, the automotive industry is showing off new and innovative ways electric vehicles could solve its adoption problems -- and maybe even circumvent the charging problem altogether.This list is sure to be updated as more car-tech is shown off, but here's everything a gearhead needs to know right now.The newest car-related techDonut Labs EV Motor Antuan Goodwin/CNETOne of the biggest obstacles to EV adoption is range-anxiety, or the perceived lack of mileage these vehicles can eke out. Donut Labs EV thinks its new motor can help with that.These donut-shaped in-wheel motors are completely hubless, taking up less space and weighing less than other motors. In fact, clocking in at 88 pounds, this motor weighs about one-third of what the usual EV motor does, which could be a massive boon for an EV's range.Donut Labs also claims that the new motor is 50% less expensive to manufacture than competitors' products, so maybe some of that savings will be passed on to the consumer.Pebble Flow Electric RV Antuan Goodwin/CNETWhile the Pebble Flow RV was on display last year, we found some new quality-of-life upgrades that bring it closer to realization than ever before. This electric RV cab promises to be the perfect pairing for both EVs and diesel guzzlers.The Flow still features a 45-kilowatt-hour battery pack capable of charging at DC charging stations, camp RV hookups, at home or through bidirectional charging with your EV. Now, it also has an optional Magic Pack that will enable you to better control your RV experience when paired with the app.The Magic Pack unlocks remote positioning for the unhitched cab, automatically hitching and unhitching once you've lined up the cab and car, and offering electric assist and regenerative braking when you're towing the RV around.Regenerative braking adds drag to your braking to charge your RV -- not great for EV drivers, as you'll be losing precious range, but perfect for non-EV drivers who are looking to get in on the glamping action. Watch this: First Drive Towing Pebble Flow Electric RV 06:06 New EVsAptera solar EV Antuan Goodwin/CNETThis high-concept vehicle doesn't depend on chargers -- at least, not entirely. A set of solar panels embedded in the hood, roof and rear hatch of the car will net you 40 miles of range based on sunpower alone.While that might not buy you the time to take a leisurely drive, resident car and EV expert Antuan Goodwin says most Americans commute fewer than 40 miles to work each day, meaning the Aptera EV won't need to be charged for day-to-day travel.The lightweight, three-wheel design and small battery help bring the estimated cost down to $40,000, which might make the Aptera EV an attractive alternative for prospective EV buyers when it comes out later this year. 
CES 2025: 19 Must-See Products That Own Our Eyeballs See all photos The 2026 Afeela 1 EV Sony Honda MobilityA joint venture from Sony and Honda, the Afeela 1 EV combines electric car and console, with a focus on luxury and entertainment.Plenty of antics have drummed up hype for this vehicle in the past. On top of a 91-kilowatt-hour battery with a 300-mile range, the Afeela 1 has been piloted on-stage with a PlayStation 5 Dual Sense controller and summoned by voice command with the vehicle's AI Afeela Personality Assistant.Now we have confirmation on the pricing for the Afeela 1 EV when it launches in 2026: the Signature spec will be available for $102,900 with a full suite of technology and customization. The cheaper Origin model will launch in 2027 for the low, low price of $89,900, coming with a mandatory black paint job, shrunk wheels and the tragic loss of the rear seat screens.
  • WWW.CNET.COM
    Solar-Powered EV, Robot Vacuum With Legs and More Bananas New Stuff From CES 2025
    CES 2025 is in full steam, and we've already seen a flurry of innovative, weird and wonderful new products. We'll keep updating this curated list of the coolest new stuff that delights, inspires and may soon solve very real problems from our homes to the world beyond. Our CNET experts are on the show floor looking at the coolest tech, and this list is for the top products that we spiritedly talk about over our laptops. We'll have more wild stuff as CES continues through Friday.Also, while some of the most eye-popping finds of the show are concepts, there are alsonew products you can buy now(or soon) and have a chuckle reliving the bizarro things we've seen at CES in the past. Antuan Goodwin/CNET Top of mind for every potential EV buyer is how inconvenient charging is -- but the Aptera Solar EV is wrapped in solar panels to recharge while you drive. Forget the cockroach-looking solar-powered cars of yesteryear, as this EV is a svelte three-wheeler with a swooped design that looks like it's about to take off into the sky (that achieves 70% less drag than EV's on the road today). Aptera expects to start producing the $40,000 vehicle later this year, so start planning if a constantly-recharging two-seater EV would fit your lifestyle. I Took a Ride in an EV That Doesn't Need to Plug In. See at Aptera James Martin/CNET Dreame X50 Ultra A robot vacuum with tiny legs to get up ledges or cross door gaps. Roombas and other robot vacuums have been a big hit, but their little wheels can be defeated by the tiniest ledge or threshold between rooms. Enter Dreame's X50 Ultra, which has two short wheeled legs it can deploy to surmount very modest obstacles. No, it won't climb stairs, but we saw it conquer small ledges a couple inches high. This advancement comes at the steep price of $1,699 when it starts shipping in mid-February (preorder it for $390 off). Dreame's Robot Vacuum Won't Be Climbing Stairs, but We Saw It Summit a Small Ledge at CES 2025. Josh Goldman/CNET Lenovo Legion Go S New with added Steam! In addition to a prototype version of the update to its current Legion Go, the company's additions to its Go line of handheld gaming consoles include a couple of brand-new Go S models -- one of which is the first to run SteamOS natively, in addition to the Windows version. Yes, that's right: A Steam Deck alternative! Both models have identical hardware, and the Go S has a more traditional design compared with its somewhat overcomplicated sibling. It's pretty cool, but makes us wonder: where's our Xbox handheld, Microsoft? Lenovo Legion Go S Offers a Welcome, Less Complicated Design Than the Original. Lenovo Lenovo ThinkBook Plus Gen 6 Rollable A clever take on dual-screen laptops Still only a concept, but Lenovo's new laptop extends the screen upward rather than folding it (or folding two screens together) like almost every dual-screen laptop we've seen. We've got no pricing or available for it yet -- it's a real product this, not just a concept or prototype -- but being able to turn a laptop screen from 14 to 16.7 inches in a press of a button sounds like something I want. Wild Displays: Lenovo Shows Off Dual-Screen Yoga Book and Rollable ThinkBook. Watch this: Displace TV's 55-Inch Television Hangs From a Wall Using Suction Cups 03:15 James Martin/CNET Housing renters who want to mount their TV but are wary of drilling into their walls, your ship is about to come in. 
The Displace TV uses suction cups to stick to the wall and runs off batteries, meaning you can stick it pretty much anywhere in your home or office. It comes in varying sizes, starting with a $1,499 27-inch model and going up to a $4,999 55-inch TV, which will ship in spring 2025. I Suction-Cupped Displace TV's Wireless OLED to a Wall. I'll Never Be the Same. See at Displace Matt Elliott/CNET Nvidia GeForce RTX 50 series graphics cards Bigger on the inside? The new Nvidia cards just jumped more than a generation's worth in their power to render games and perform complex AI image generation, among a lot of other things. And they still fit into the box on your desk and cost about the same as before. The Wait Is Over: Nvidia's Next-Gen RTX 50-Series GPUs Are Here. Watch this: Everything Announced at Nvidia's CES Event in 12 Minutes 11:47 Celso Bulgatti/CNET Samsung stretchable screen concept Horror movies just gained a dimension You know that horror trope where something scary stretches the screen towards you and something awful enters the world? Samsung's turned the stretching screen of our nightmares into reality -- though it could be flowers as much as the undead pushing through. The screen bulges in the middle to produce a 3D effect; it's a little hard to see, according to editor Lisa Eadicicco, but it's there. Samsung's Wild Stretchable Display Concept Turns 2D Into 3D. Lisa Eadicicco/CNET Swippitt A fast way to fill up your phone's charge. And empty your wallet Swippitt's added a twist to the phone battery case: a box that swaps external batteries when you stick your phone in the slot. But it's not for everyone: At $450 for the hub and $120 for the Link case, the Sippitt is more expensive than aPlayStation 5and almost as pricey as an iPhone 16. I Watched a Printer-Size Gadget Add More Battery Life to a Phone in Seconds. Lymow Lymow One Robo-mows your lawn and spits back mulch I'm all for anything that can remove the tedium from everyday (or every-week) tasks, and this one is the first to do away with one of the most tedious homeowner tasks. It not only mows your lawn, it gnaws most of the detritus (including leaves and branches) into lawn food. A New Robot Mower at CES 2025 Can Do Something No Rival Can. Tara Brown/CNET Roborock Saros Z70 A robot vacuum with an arm You may not want, or even care about robot vacuums. If you're looking for the "see" in "must-see," this armed and dangerous (to dirt) robot vacuum has proven mesmerizing to watch. What's the arm for, you ask? It's not just vacuums;it can pick up after you. We Spent Hours Watching a Robot Vacuum Pick Up Socks. It's a Dream Come True. HMD Imagine you're cut off from mobile cell service in the outdoors or when networks are down, but it doesn't matter: HMD's new $200 OffGrid device lets you link your Android or iOS phone to networks of satellites to send texts, check-in messages to loved ones and even send emergency pings. From the ashes of last year's Motorola Defy Satellite Link comes a product with even more features, though you'll need to pay a monthly subscription to use it. Give Any Phone a Texting Hotspot Connection Using a Satellite. Just Like iPhone 16. See at Hmd Watch this: These New Smart Glasses Want to Be Your Next AI Companion 02:31 Cool things we also liked James Martin/CNET LG Signature Smart Instaview A concept appliance putting cameras inside the microwave for all your TikTok and Instagram posts. We've seen smart kitchen appliances, but none that cater to the...influencer crowd? 
The LG Signature Smart Instaview has cameras inside the microwave to record video of you making your favorite dishes -- and don't worry, there are plenty of sensors that check on how the food is cooking to make sure you don't end up with a smoldering mess. There's also a 27-inch HD display and speakers so you can watch TV while you cook. While only a concept device for now, the Instaview is an intriguing look at how kitchenfluencers are nudging tech forward, too. Home Kitchen & Household LG Built the Perfect Fancy Microwave for Social Media. Circular Circular Ring Gen 2 A smart ring that detects irregular heartbeats to warn ahead of strokes or heart attacks. For years, premium smartwatches have been able to detect atrial fibrillations -- irregular heartbeats that could preclude strokes and cardiac events -- but not everyone wants a smartwatch. Enter the Circular Ring Gen 2, a $380 smart ring that watches out for these AFib events and tracks other health data, will be available to buy in the next couple months. Circular's New Smart Ring Can Detect AFib From Your Finger. James Martin/CNET Samsung's micro LED smartwatch concept A Micro LED display that's so bright you can see it in daylight. For all the smartwatch lovers who can't see their screens in broad daylight, Samsung debuted a concept device showing a next-gen micro LED display that's brighter than any watch you can buy. While it could be years before this reaches a consumer device, it's promising -- just promise to never take it out in a dark theater. Samsung's Micro LED Smartwatch Concept Is the Brightest Screen I've Seen on a Watch. Jon Reed/CNET Roam SodaTop Add fizz on the fly The SodaStream, which lets you create carbonated drinks at home, was a great idea when it launched. But now everyone's in motion and equipped with water bottles, so why should you be able to get your fizz on in only one location? The SodaTop is a cap for compatible water bottles that carbonates water in compatible containers. This Revolutionary Bottle Cap Lets You Make Sparkling Water Anywhere. CES 2025: 19 Must-See Products That Own Our Eyeballs See all photos LeafyPod LeafyPod Feed me, Seymour Don't wait until the soil's cracked or the leaves fall off. LeafyPod is a smart planter that learns the appropriate regimen for the plant you've potted in it, as well as determines if any environmental facts are suboptimal. Then it lets the plant voice its needs via a phone app. It can't do everything -- you've still got to do as you're told. New Smart Planter at CES 2025 Lets Plants Shout When They Need More Water or Light. JSAUX JSAux FlipGo Horizon Every year, multiscreen add-ons The concept of being able to use multiple portable screens with a laptop is one of those elusive dreams that leave us searching for a product we can live with. They're also a CES staple, and at least on Day 1, the FlipGo has caught the attention of the dreamers. Who knows? Maybe this one is it. Dominate Your Coffee Shop With FlipGo Horizon Snap-On Laptop Displays.
  • WWW.CNET.COM
    Meet the Six-Wheel Hybrid With an eVTOL in the Trunk (video)
    Xpeng Aeroht brought its modular flying car to CES. This six-wheel 'Land Aircraft Carrier' can travel off-road carrying an eVTOL and charge it up for flight at the same time.
  • THEHARDTIMES.NET
    Mark Zuckerberg, Recipient of World's First Rat Penis Transplant, Announces Meta Will Stop Fact Checking
    By Matt Husser | January 7, 2025

    MENLO PARK, Calif. Meta CEO Mark Zuckerberg, medical pioneer who received the world's first experimental rat penis transplant, announced today that the social media juggernaut would stop fact checking, sources claimed.

    "It's our duty to maintain the unfiltered free speech that sustains our democracy, and that's why Meta will no longer fact check on any of our social media platforms," said Zuckerberg, concealing his grotesque rat penis transplant scars and a row of engorged pig nipples underneath his trademark t-shirt and jeans. "It's simply not our place to moderate important discussions happening on our platform, like this trending Facebook topic about how raw Sasquatch milk is the miracle cure for the Chinese ocular diarrhea outbreak being blown through the US by illegal immigrant wind farms."

    Facebook user Dr. Johann Sebastian Jovanović (pioneer in the field of extraterrestrial psychobiology, first man to climb Mt. Everest on the Astral Plane, and Zuckerberg's personal physician) reinforced the importance of not suppressing the truth by fact checking.

    "If our country is to survive, platforms like Facebook and Instagram must remain an unfiltered marketplace for ideas, as well as black market animal parts, like the menagerie of exotic animal penises I have personally transplanted onto Mr. Zuckerberg," said Dr. Jovanović, posting in the Medical Freedom Militia Facebook group. "Unfortunately the deep state is working hard to stop the truth from spreading by freezing my crypto wallet. If any of you patriots could help with just $100 in TruthCoin, I could unlock my wallet and continue my vital work to find out what Dr. Fauci's hiding in his underwater bioweapon lab."

    Former Meta fact checker Anthony Gutierrez was saddened to lose his job, but expressed quiet relief that he no longer had to verify the many strange but true claims about the Facebook founder across the social media platforms.

    "For ten years I worked tirelessly to moderate content, but now it's simply not my responsibility to verify if Mark Zuckerberg is sexually intimate with a haunted porcelain doll that bears a striking resemblance to himself," said Gutierrez. "And so what if he regurgitates Soylent meal replacement shakes into piles of loose hay to craft a nest in the rafters of Meta headquarters for his nightly slumber? And frankly, what he does in his private Metaverse server Zucky's World with all those Teletubby avatars is his business."

    At press time, Zuckerberg had reportedly died after a longtime battle with werewolf gonorrhea.
  • WWW.NINTENDOLIFE.COM
    'Switch 2' Ergonomic Grip Case Surfaces On Amazon
    Ok... this is now getting really ridiculous; we have never seen anything like this before in console gaming history.
    I'll stop commenting now. I'll just wait and see what happens. I still find this unbelievable, since they're actively breaking NDAs.
    The "Switch 2" is potentially a $10b industry, and literally every accessory maker is now leaking it!? Are they really risking their existence? Nintendo could easily end their business forever and leave those people debt-ridden for the rest of their lives; is it really worth it?
    I also find it very hard to stomach how incredibly similar the "Switch 2" (if it's real) and the Switch look. It doesn't matter if it's called "Switch 2", it will still confuse many people because of the similarity.
    None of the PSX, PS2, PS3, PS4 or PS5 look anything alike.
    Well, I'll wait and see whether I was right all along or dead wrong.
    Seriously, I can't wait until we see it for real. This is easily my most anticipated gaming console since I began playing video games in 1987, when the NES came to my country.
    One last thing... why is it only accessory makers who are "leaking" it so far, and not a single game developer? This is really interesting!
  • TECHCRUNCH.COM
    Indian government websites are still redirecting users to scam sites
    Some Indian government websites continue to allow the planting of scammy links on their official domains, months after TechCrunch reported the issue last year.

    TechCrunch found that more than 90 gov.in website links associated with Indian government departments, including the Indian Council of Agricultural Research and India Post, as well as the state governments and councils of Haryana, Maharashtra, and others, were redirecting to sites linked to online betting and investment scams. Search engines like Google have indexed the scam links hosted on government sites, increasing the risk that regular internet users will find them.

    Image: several search results showing compromised Indian government websites hosting scam sites.

    In May, TechCrunch reported that around four dozen Indian government website links were redirecting to online betting platforms. India's cyber agency, the Computer Emergency Response Team, known as CERT-In, escalated the matter at the time. However, it remained unclear whether the government had fixed the underlying flaw that the scammers were exploiting to plant their links.

    Deedy Das of Menlo Ventures, among others, posted on the social media platform X this week about the issue resurfacing, indicating that the hacked pages are widespread.

    Security researcher Bob Diachenko told TechCrunch that the issue may have resurfaced due to a compromise in the websites' content management system (CMS) or server configurations. "If only the symptoms (e.g., malicious content) are removed without addressing the root cause (e.g., vulnerability or backdoor), attackers can reintroduce the issue," Diachenko said, adding, "It is not a very challenging exercise but requires some downtime and efforts."

    Earlier this week, TechCrunch contacted CERT-In with a few affected links. The agency did not respond to the email, though the links started showing a "page not found" error at around the time of publication.