• WWW.MARKTECHPOST.COM
    Step by Step Coding Guide to Build a Neural Collaborative Filtering (NCF) Recommendation System with PyTorch
This tutorial walks you through implementing a Neural Collaborative Filtering (NCF) recommendation system in PyTorch. NCF extends traditional matrix factorization by using neural networks to model complex, non-linear user-item interactions.

Introduction

Neural Collaborative Filtering (NCF) is a widely used deep learning approach for building recommendation systems. Unlike traditional collaborative filtering methods that rely on linear models, NCF captures non-linear relationships between users and items. In this tutorial, we'll:

- Prepare and explore the MovieLens dataset
- Implement the NCF model architecture
- Train the model
- Evaluate its performance
- Generate recommendations for users

Setup and Environment

First, let's install the necessary libraries and import them:

```python
!pip install torch numpy pandas matplotlib seaborn scikit-learn tqdm

import os
import random

import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from tqdm import tqdm

# Seed everything for reproducibility
torch.manual_seed(42)
np.random.seed(42)
random.seed(42)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")
```

Data Loading and Preparation

We'll use the MovieLens 100K dataset, which contains 100,000 movie ratings from users:

```python
!wget -nc https://files.grouplens.org/datasets/movielens/ml-100k.zip
!unzip -q -n ml-100k.zip

# u.data is tab-separated: user_id, item_id, rating, timestamp
ratings_df = pd.read_csv('ml-100k/u.data', sep='\t',
                         names=['user_id', 'item_id', 'rating', 'timestamp'])

# u.item is pipe-separated and latin-1 encoded
movies_df = pd.read_csv('ml-100k/u.item', sep='|', encoding='latin-1',
                        names=['item_id', 'title', 'release_date', 'video_release_date',
                               'IMDb_URL', 'unknown', 'Action', 'Adventure', 'Animation',
                               'Children', 'Comedy', 'Crime', 'Documentary', 'Drama',
                               'Fantasy', 'Film-Noir', 'Horror', 'Musical', 'Mystery',
                               'Romance', 'Sci-Fi', 'Thriller', 'War', 'Western'])

print("Ratings data:")
print(ratings_df.head())
print("\nMovies data:")
print(movies_df[['item_id', 'title']].head())

print(f"\nTotal number of ratings: {len(ratings_df)}")
print(f"Number of unique users: {ratings_df['user_id'].nunique()}")
print(f"Number of unique movies: {ratings_df['item_id'].nunique()}")
print(f"Rating range: {ratings_df['rating'].min()} to {ratings_df['rating'].max()}")
print(f"Average rating: {ratings_df['rating'].mean():.2f}")

plt.figure(figsize=(10, 6))
sns.countplot(x='rating', data=ratings_df)
plt.title('Distribution of Ratings')
plt.xlabel('Rating')
plt.ylabel('Count')
plt.show()

# Binarize ratings: 4-5 stars count as a positive interaction
ratings_df['label'] = (ratings_df['rating'] >= 4).astype(np.float32)
```

Data Preparation for NCF

Now, let's prepare the data for our NCF model:

```python
train_df, test_df = train_test_split(ratings_df, test_size=0.2, random_state=42)
print(f"Training set size: {len(train_df)}")
print(f"Test set size: {len(test_df)}")

# MovieLens 100K IDs are contiguous and 1-indexed, so the max ID
# doubles as the vocabulary size for the embedding tables
num_users = ratings_df['user_id'].max()
num_items = ratings_df['item_id'].max()
print(f"Number of users: {num_users}")
print(f"Number of items: {num_items}")

class NCFDataset(Dataset):
    def __init__(self, df):
        self.user_ids = torch.tensor(df['user_id'].values, dtype=torch.long)
        self.item_ids = torch.tensor(df['item_id'].values, dtype=torch.long)
        self.labels = torch.tensor(df['label'].values, dtype=torch.float)

    def __len__(self):
        return len(self.user_ids)

    def __getitem__(self, idx):
        return {
            'user_id': self.user_ids[idx],
            'item_id': self.item_ids[idx],
            'label': self.labels[idx]
        }
```
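A note on ID handling: the code above sizes the embedding tables with the maximum raw ID, which works for MovieLens 100K because its IDs are contiguous and 1-indexed. For datasets with sparse or non-numeric IDs, a sketch like the following compacts IDs to contiguous indices first. It uses the LabelEncoder already imported above; the encoder variables and `user_idx`/`item_idx` columns are our own additions, not part of the tutorial:

```python
# Hypothetical preprocessing step (not in the original tutorial): remap raw
# user/item IDs to contiguous 0-based indices so embedding tables waste no rows.
user_encoder = LabelEncoder()
item_encoder = LabelEncoder()
ratings_df['user_idx'] = user_encoder.fit_transform(ratings_df['user_id'])
ratings_df['item_idx'] = item_encoder.fit_transform(ratings_df['item_id'])

num_users = ratings_df['user_idx'].nunique()  # exact number of embedding rows needed
num_items = ratings_df['item_idx'].nunique()

# At inference time, raw IDs round-trip through the encoders:
# raw_id = item_encoder.inverse_transform([compact_idx])
```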
With the Dataset class defined, we can wrap the splits in DataLoaders for batching:

```python
train_dataset = NCFDataset(train_df)
test_dataset = NCFDataset(test_df)

batch_size = 256
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False)
```

Model Architecture

Now we'll implement the Neural Collaborative Filtering (NCF) model, which combines Generalized Matrix Factorization (GMF) and Multi-Layer Perceptron (MLP) components:

```python
class NCF(nn.Module):
    def __init__(self, num_users, num_items, embedding_dim=32, mlp_layers=[64, 32, 16]):
        super(NCF, self).__init__()

        # Separate embedding tables for the GMF and MLP branches
        self.user_embedding_gmf = nn.Embedding(num_users + 1, embedding_dim)
        self.item_embedding_gmf = nn.Embedding(num_items + 1, embedding_dim)
        self.user_embedding_mlp = nn.Embedding(num_users + 1, embedding_dim)
        self.item_embedding_mlp = nn.Embedding(num_items + 1, embedding_dim)

        mlp_input_dim = 2 * embedding_dim
        self.mlp_layers = nn.ModuleList()
        for idx, layer_size in enumerate(mlp_layers):
            if idx == 0:
                self.mlp_layers.append(nn.Linear(mlp_input_dim, layer_size))
            else:
                self.mlp_layers.append(nn.Linear(mlp_layers[idx - 1], layer_size))
            self.mlp_layers.append(nn.ReLU())

        # Final layer fuses the GMF vector with the MLP output
        self.output_layer = nn.Linear(embedding_dim + mlp_layers[-1], 1)
        self.sigmoid = nn.Sigmoid()

        self._init_weights()

    def _init_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Embedding):
                nn.init.normal_(m.weight, mean=0.0, std=0.01)
            elif isinstance(m, nn.Linear):
                nn.init.kaiming_uniform_(m.weight)
                if m.bias is not None:
                    nn.init.zeros_(m.bias)

    def forward(self, user_ids, item_ids):
        # GMF branch: element-wise product of user and item embeddings
        user_embedding_gmf = self.user_embedding_gmf(user_ids)
        item_embedding_gmf = self.item_embedding_gmf(item_ids)
        gmf_vector = user_embedding_gmf * item_embedding_gmf

        # MLP branch: concatenate embeddings and pass through the tower
        user_embedding_mlp = self.user_embedding_mlp(user_ids)
        item_embedding_mlp = self.item_embedding_mlp(item_ids)
        mlp_vector = torch.cat([user_embedding_mlp, item_embedding_mlp], dim=-1)
        for layer in self.mlp_layers:
            mlp_vector = layer(mlp_vector)

        concat_vector = torch.cat([gmf_vector, mlp_vector], dim=-1)
        prediction = self.sigmoid(self.output_layer(concat_vector)).squeeze()
        return prediction

embedding_dim = 32
mlp_layers = [64, 32, 16]
model = NCF(num_users, num_items, embedding_dim, mlp_layers).to(device)
print(model)
```

Training the Model

Let's train our NCF model:

```python
from sklearn.metrics import roc_auc_score, average_precision_score

criterion = nn.BCELoss()
optimizer = optim.Adam(model.parameters(), lr=0.001, weight_decay=1e-5)

def train_epoch(model, data_loader, criterion, optimizer, device):
    model.train()
    total_loss = 0
    for batch in tqdm(data_loader, desc="Training"):
        user_ids = batch['user_id'].to(device)
        item_ids = batch['item_id'].to(device)
        labels = batch['label'].to(device)

        optimizer.zero_grad()
        outputs = model(user_ids, item_ids)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    return total_loss / len(data_loader)

def evaluate(model, data_loader, criterion, device):
    model.eval()
    total_loss = 0
    predictions = []
    true_labels = []
    with torch.no_grad():
        for batch in tqdm(data_loader, desc="Evaluating"):
            user_ids = batch['user_id'].to(device)
            item_ids = batch['item_id'].to(device)
            labels = batch['label'].to(device)

            outputs = model(user_ids, item_ids)
            loss = criterion(outputs, labels)
            total_loss += loss.item()
            predictions.extend(outputs.cpu().numpy())
            true_labels.extend(labels.cpu().numpy())

    auc = roc_auc_score(true_labels, predictions)
    ap = average_precision_score(true_labels, predictions)
    return {
        'loss': total_loss / len(data_loader),
        'auc': auc,
        'ap': ap
    }
```
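A design note, and an aside not in the original tutorial: applying a Sigmoid inside the model and pairing it with nn.BCELoss works, but PyTorch's nn.BCEWithLogitsLoss fuses the sigmoid into the loss for better numerical stability. A minimal sketch of that variant, assuming you remove the final self.sigmoid call from forward():

```python
# Sketch (our assumption, not the tutorial's code): have forward() return raw
# logits instead of sigmoid probabilities, then let the loss apply the sigmoid.
criterion = nn.BCEWithLogitsLoss()

# Inside the training loop, nothing else changes:
#   logits = model(user_ids, item_ids)   # forward() without the sigmoid
#   loss = criterion(logits, labels)
# At inference time, recover probabilities with torch.sigmoid(logits).
```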
With these helpers in place, we can run the training loop:

```python
num_epochs = 10
history = {'train_loss': [], 'val_loss': [], 'val_auc': [], 'val_ap': []}

for epoch in range(num_epochs):
    train_loss = train_epoch(model, train_loader, criterion, optimizer, device)
    eval_metrics = evaluate(model, test_loader, criterion, device)

    history['train_loss'].append(train_loss)
    history['val_loss'].append(eval_metrics['loss'])
    history['val_auc'].append(eval_metrics['auc'])
    history['val_ap'].append(eval_metrics['ap'])

    print(f"Epoch {epoch+1}/{num_epochs} - "
          f"Train Loss: {train_loss:.4f}, "
          f"Val Loss: {eval_metrics['loss']:.4f}, "
          f"AUC: {eval_metrics['auc']:.4f}, "
          f"AP: {eval_metrics['ap']:.4f}")

plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(history['train_loss'], label='Train Loss')
plt.plot(history['val_loss'], label='Validation Loss')
plt.title('Loss During Training')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(history['val_auc'], label='AUC')
plt.plot(history['val_ap'], label='Average Precision')
plt.title('Evaluation Metrics')
plt.xlabel('Epoch')
plt.ylabel('Score')
plt.legend()

plt.tight_layout()
plt.show()

torch.save(model.state_dict(), 'ncf_model.pth')
print("Model saved successfully!")
```

Generating Recommendations

Now let's create a function to generate recommendations for users:

```python
def generate_recommendations(model, user_id, n=10):
    model.eval()
    # Score every item in the catalog for this user
    user_ids = torch.tensor([user_id] * num_items, dtype=torch.long).to(device)
    item_ids = torch.tensor(range(1, num_items + 1), dtype=torch.long).to(device)

    with torch.no_grad():
        predictions = model(user_ids, item_ids).cpu().numpy()

    items_df = pd.DataFrame({
        'item_id': range(1, num_items + 1),
        'score': predictions
    })

    # Exclude items the user has already rated
    user_rated_items = set(ratings_df[ratings_df['user_id'] == user_id]['item_id'].values)
    items_df = items_df[~items_df['item_id'].isin(user_rated_items)]

    top_n_items = items_df.sort_values('score', ascending=False).head(n)
    recommendations = pd.merge(top_n_items, movies_df[['item_id', 'title']], on='item_id')
    return recommendations[['item_id', 'title', 'score']]

test_users = [1, 42, 100]
for user_id in test_users:
    print(f"\nTop 10 recommendations for user {user_id}:")
    recommendations = generate_recommendations(model, user_id, n=10)
    print(recommendations)

    print(f"\nMovies that user {user_id} has rated highly (4-5 stars):")
    user_liked = ratings_df[(ratings_df['user_id'] == user_id) & (ratings_df['rating'] >= 4)]
    user_liked = pd.merge(user_liked, movies_df[['item_id', 'title']], on='item_id')
    print(user_liked[['item_id', 'title', 'rating']])
```
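Since the tutorial saves the trained weights to ncf_model.pth above, here is a short sketch (our addition, not part of the original article) of how you would reload them later for inference, for example in a separate serving script. It assumes the same hyperparameters and ID ranges as at training time:

```python
# Sketch: restore the saved weights into a freshly constructed model.
# Assumes num_users, num_items, embedding_dim, and mlp_layers match training.
model = NCF(num_users, num_items, embedding_dim, mlp_layers).to(device)
model.load_state_dict(torch.load('ncf_model.pth', map_location=device))
model.eval()  # switch to inference mode

# The recommendation helper works unchanged on the restored model:
print(generate_recommendations(model, user_id=1, n=5))
```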
Evaluating the Model Further

Let's evaluate our model further by computing some additional metrics:

```python
from sklearn.metrics import (roc_auc_score, average_precision_score,
                             precision_recall_curve, accuracy_score)

def evaluate_model_with_metrics(model, test_loader, device):
    model.eval()
    predictions = []
    true_labels = []
    with torch.no_grad():
        for batch in tqdm(test_loader, desc="Evaluating"):
            user_ids = batch['user_id'].to(device)
            item_ids = batch['item_id'].to(device)
            labels = batch['label'].to(device)

            outputs = model(user_ids, item_ids)
            predictions.extend(outputs.cpu().numpy())
            true_labels.extend(labels.cpu().numpy())

    binary_preds = [1 if p >= 0.5 else 0 for p in predictions]
    auc = roc_auc_score(true_labels, predictions)
    ap = average_precision_score(true_labels, predictions)
    accuracy = accuracy_score(true_labels, binary_preds)

    precision, recall, thresholds = precision_recall_curve(true_labels, predictions)
    plt.figure(figsize=(10, 6))
    plt.plot(recall, precision, label=f'AP={ap:.3f}')
    plt.xlabel('Recall')
    plt.ylabel('Precision')
    plt.title('Precision-Recall Curve')
    plt.legend()
    plt.grid(True)
    plt.show()

    return {
        'auc': auc,
        'ap': ap,
        'accuracy': accuracy
    }

metrics = evaluate_model_with_metrics(model, test_loader, device)
print(f"AUC: {metrics['auc']:.4f}")
print(f"Average Precision: {metrics['ap']:.4f}")
print(f"Accuracy: {metrics['accuracy']:.4f}")
```

Cold Start Analysis

Let's analyze how our model performs for users with few ratings (the cold start problem):

```python
user_rating_counts = ratings_df.groupby('user_id').size().reset_index(name='count')
user_rating_counts['group'] = pd.cut(user_rating_counts['count'],
                                     bins=[0, 10, 50, 100, float('inf')],
                                     labels=['1-10', '11-50', '51-100', '100+'])
print("Number of users in each rating frequency group:")
print(user_rating_counts['group'].value_counts())

def evaluate_by_user_group(model, ratings_df, user_groups, device):
    results = {}
    for group_name, group_user_ids in user_groups.items():
        group_ratings = ratings_df[ratings_df['user_id'].isin(group_user_ids)]
        group_dataset = NCFDataset(group_ratings)
        group_loader = DataLoader(group_dataset, batch_size=256, shuffle=False)
        if len(group_loader) == 0:
            continue

        model.eval()
        predictions = []
        true_labels = []
        with torch.no_grad():
            for batch in group_loader:
                user_ids = batch['user_id'].to(device)
                item_ids = batch['item_id'].to(device)
                labels = batch['label'].to(device)

                outputs = model(user_ids, item_ids)
                predictions.extend(outputs.cpu().numpy())
                true_labels.extend(labels.cpu().numpy())

        try:
            results[group_name] = roc_auc_score(true_labels, predictions)
        except ValueError:
            # A group may contain only one class, making AUC undefined
            results[group_name] = None
    return results

user_groups = {}
for group in user_rating_counts['group'].unique():
    users_in_group = user_rating_counts[user_rating_counts['group'] == group]['user_id'].values
    user_groups[group] = users_in_group

group_performance = evaluate_by_user_group(model, test_df, user_groups, device)

plt.figure(figsize=(10, 6))
groups = []
aucs = []
for group, auc in group_performance.items():
    if auc is not None:
        groups.append(group)
        aucs.append(auc)
plt.bar(groups, aucs)
plt.xlabel('Number of Ratings per User')
plt.ylabel('AUC Score')
plt.title('Model Performance by User Rating Frequency (Cold Start Analysis)')
plt.ylim(0.5, 1.0)
plt.grid(axis='y', linestyle='--', alpha=0.7)
plt.show()

print("AUC scores by user rating frequency:")
for group, auc in group_performance.items():
    if auc is not None:
        print(f"{group}: {auc:.4f}")
```
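Beyond AUC, ranking metrics such as Hit Rate@K are common for evaluating recommenders. The tutorial doesn't include one, but here is a rough sketch of how you might add it, under a simplifying assumption: for each test user, we check whether any of their held-out liked items lands in the model's top-K scored unseen items. This differs from the leave-one-out protocol in the original NCF paper; treat it as an illustrative approximation only.

```python
# Rough Hit Rate@K sketch (our addition, not from the tutorial).
# Simplification: scores the full catalog per user and masks only training items.
def hit_rate_at_k(model, train_df, test_df, k=10, max_users=100):
    model.eval()
    users = test_df['user_id'].unique()[:max_users]  # subsample for speed
    hits, evaluated = 0, 0
    all_items = torch.arange(1, num_items + 1, dtype=torch.long).to(device)
    for user_id in users:
        liked = set(test_df[(test_df['user_id'] == user_id) &
                            (test_df['label'] == 1)]['item_id'])
        if not liked:
            continue
        user_tensor = torch.full((num_items,), int(user_id), dtype=torch.long).to(device)
        with torch.no_grad():
            scores = model(user_tensor, all_items).cpu().numpy()
        # Rank items best-first, skipping anything seen during training
        seen = set(train_df[train_df['user_id'] == user_id]['item_id'])
        ranked = [i + 1 for i in np.argsort(-scores) if (i + 1) not in seen]
        hits += bool(liked & set(ranked[:k]))
        evaluated += 1
    return hits / max(evaluated, 1)

print(f"Approximate Hit Rate@10: {hit_rate_at_k(model, train_df, test_df):.3f}")
```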
{avg_scores[1]:.4f}") analyze_predictions(model, test_loader, device) This tutorial demonstrates implementing Neural Collaborative Filtering, a deep learning recommendation system combining matrix factorization with neural networks. Using the MovieLens dataset and PyTorch, we built a model that generates personalized content recommendations. The implementation addresses key challenges, including the cold start problem and provides performance metrics like AUC and precision-recall curves. This foundation can be extended with hybrid approaches, attention mechanisms, or deployable web applications for various business recommendation scenarios. Here is the Colab Notebook. Also, don’t forget to follow us on Twitter and join our Telegram Channel and LinkedIn Group. Don’t Forget to join our 85k+ ML SubReddit. Mohammad AsjadAsjad is an intern consultant at Marktechpost. He is persuing B.Tech in mechanical engineering at the Indian Institute of Technology, Kharagpur. Asjad is a Machine learning and deep learning enthusiast who is always researching the applications of machine learning in healthcare.Mohammad Asjadhttps://www.marktechpost.com/author/mohammad_asjad/This AI Paper Introduces a Machine Learning Framework to Estimate the Inference Budget for Self-Consistency and GenRMs (Generative Reward Models)Mohammad Asjadhttps://www.marktechpost.com/author/mohammad_asjad/MMSearch-R1: End-to-End Reinforcement Learning for Active Image Search in LMMsMohammad Asjadhttps://www.marktechpost.com/author/mohammad_asjad/Anthropic’s Evaluation of Chain-of-Thought Faithfulness: Investigating Hidden Reasoning, Reward Hacks, and the Limitations of Verbal AI Transparency in Reasoning ModelsMohammad Asjadhttps://www.marktechpost.com/author/mohammad_asjad/Building Your AI Q&A Bot for Webpages Using Open Source AI Models
  • WWW.IGN.COM
    SpongeBob Tower Defense Codes (April 2025)
Last updated on April 10, 2025 - Added new SpongeBob Tower Defense codes!

Looking for SpongeBob Tower Defense codes? This is where we'll serve them up! You won't find any Krabby Patties here, but you will find working codes you can redeem for rewards. These include chests, Magic Conches for summoning exclusive units, gems, and more.

Working SpongeBob Tower Defense Codes (April 2025)

Below are all the active codes we've found for SpongeBob TD as of this month:

- AChallenge2Open - 2x Challenge Crate (NEW)
- FUNonaTuesday - 10x Rare+ Chest, 1x Challenge Crate, 2x Double Faction Points (1 Hour), 2x Double Challenge Tokens (1 Hour) (NEW)
- 500KFeelingOkay - 2,000 Gems, 10x Rare+ Chests, 1x Magic Conch

Expired SpongeBob Tower Defense Codes

Unfortunately, you've missed the chance to use the codes below, as they've now expired: OnlytheBest, UnitRefinement, FactionGrind4Real, CrateOfPossibilities, StreamLoyalist, DelayisOkay, FoolMeTwice, OPCode4Real, RealOPCode, VeryOP, 200MILLIPLAYSWOAH, AcceptingTheChallenge, WStreamChat, XMarksTheSpot, SummonMeASecretPLS, EDUCATEDCHALLENGER, BugCrushers, THANKSFORSUPPORTING, DoubleIt, TheHuntBegins, 1MillionLikes, OPLetsRide, PiratesLife4Me, StacksonStacks, WPatchChat

How to Redeem SpongeBob TD Codes

1. Log in to the SpongeBob Tower Defense experience on Roblox
2. Play the game until you reach Level 10 to unlock the codes feature
3. On the left side of the screen, you'll see several colorful boxes
4. The purple box with the clam icon in the bottom left corner is for codes
5. Click Codes, then copy and paste your code into the box
6. Redeem it and enjoy the rewards!

Why Isn't My SpongeBob TD Code Working?

There are two main reasons why your code might not be working when you submit it. Some Roblox codes are case-sensitive, so you'll want to make sure you're copying them directly from this article and pasting them in. We test each and every one of the codes before adding them here to make sure they're working. Just be sure that when you copy them from here, you're not accidentally including any extra spaces. If one has snuck in, remove it and try the code again.

If you're taking them directly from the article and they're still not working, the other possibility is that they're expired. When a code is expired, it will say so as soon as you hit redeem. But if the code has been entered incorrectly, it will say "invalid" instead.

How to Get More SpongeBob Tower Defense Codes

We check daily for new Roblox codes, so be sure to come back here regularly to see if new SpongeBob TD codes have dropped. Alternatively, you can drop by the Krabby Krew Discord server and scan for codes yourself.

What is SpongeBob Tower Defense in Roblox?

Roblox has its fair share of tower defense games, and even SpongeBob is in on the action. Whether you choose to initially recruit SpongeBob, Patrick, or Squidward, you'll need to protect Bikini Bottom at all costs. Just like other TD games, the aim is to position units carefully along a set route, where they'll automatically attack and wipe out a constant stream of enemies. With careful strategy, you'll complete a series of waves for each location and prevent enemies from ever reaching your base. Unlock new units, pick your favorites to take into battle, and unleash them when the time is right.

Lauren Harper is an Associate Guides Editor. She loves a variety of games but is especially fond of puzzles, horrors, and point-and-click adventures.
  • 9TO5MAC.COM
    Security Bite: Down the rabbit hole of neat, lesser-known Terminal commands (Pt. 1)
9to5Mac Security Bite is exclusively brought to you by Mosyle, the only Apple Unified Platform. Making Apple devices work-ready and enterprise-safe is all we do. Our unique integrated approach to management and security combines state-of-the-art Apple-specific security solutions for fully automated Hardening & Compliance, Next Generation EDR, AI-powered Zero Trust, and exclusive Privilege Management with the most powerful and modern Apple MDM on the market. The result is a totally automated Apple Unified Platform currently trusted by over 45,000 organizations to make millions of Apple devices work-ready with no effort and at an affordable cost. Request your EXTENDED TRIAL today and understand why Mosyle is everything you need to work with Apple.

I've recently found myself down the rabbit hole of lesser-known Terminal features. These past months, I covered everything from enabling Touch ID for sudo authentication to cleaning up public Wi-Fi connections stored on your Mac. But this past week, I journeyed deeper and found even more neat features you probably didn't know Terminal could do, and I'm not talking about the ping command here. In this edition of Security Bite, allow me to elevate your command line prowess further.

You might be wondering, "What does this have to do with security?" Fair question—this is a 9to5Mac Apple security column, after all. While not all of the commands below are explicitly security-focused, they could help you work smarter, increase your efficiency, and let you show off some genuinely useful tricks. Being proficient in Terminal helps you think like the system.

Caffeinate your Mac

Even your Mac needs coffee. Say you're downloading or processing a large file and need your computer to stay awake while you step away. Run caffeinate in Terminal to do this quickly:

caffeinate

Now, your Mac will stay awake indefinitely, allowing you to step away without worrying that the process could be interrupted. When you come back, press Ctrl+C to exit and return to the normal state.

Additionally, you can caffeinate your Mac for a specified amount of time by using caffeinate -t <time>, where <time> denotes seconds. For example, if you wanted your Mac to stay awake for 1 hour, replace <time> with 3600:

caffeinate -t 3600

Change default screenshot file name

If you're like me, Screenshot is one of the most frequently used utilities on your Mac. But by default, each screenshot begins with "Screenshot" in the file name, which can get a little confusing when you're working between applications and taking captures of each. Instead of having dozens of "Screenshot" file names on your desktop, you can use this command to name them based on the task you're working on:

defaults write com.apple.screencapture name <Name>

Replace <Name> with whatever will help you identify the screenshot files more easily. For example, I used "Security Bite" below. Now, all screenshots I take will start with this until I change it back to the default or something else entirely.

In addition, we can also tweak the file type. Screenshots default to PNG, which is great for image quality but can take up more space than one would like. Supported formats include PNG, JPG, PDF, GIF, and TIFF. For example, if you'd like screenshots saved as PDFs, do this:

defaults write com.apple.screencapture type pdf

Clear DNS cache

When you load a website, macOS stores its IP address in a local DNS cache.
This behind-the-scenes database allows Safari and other browsers to resolve domain names faster, skipping the need for a full DNS lookup on every visit. However, the cache can become outdated or bloated over time, occasionally leading to issues like slow page loads or errors such as "DNS Server Not Responding." While macOS does flush the DNS cache automatically from time to time, you can manually force a refresh when troubleshooting connectivity problems or after changing DNS settings.

To manually flush the DNS cache, enter the following commands in order:

sudo killall -HUP mDNSResponder (password required)
sudo killall mDNSResponderHelper
sudo dscacheutil -flushcache

Text to voice from the command line

This one is plain fun. In Terminal, type say "hello world" and press return. Your Mac will read it back to you. If you don't like the default voice, you can change it by adding -v followed by the name of the voice you'd like. To get a full list of all the available voices, type:

say -v "?"

For example, say you like Tina's voice. Now type:

say "Type anything you like here" -v Tina

Additionally, you can save the speech to a file by using:

say "Type anything you like here" -v Tina -o <Filename>.<Extension>

Supported file types include aiff, caff, m4a, and wave. The file will save to Macintosh HD > Users > <your name>.

I'm labeling this column as "Part 1" because I'm confident I'll have more to share in the coming weeks. Comment below if you found any of these useful. Are there any I should add?
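One postscript in the same vein: if you try the screenshot tweaks above and want to undo them, here is a quick sketch (our addition, not from the original column) that removes the custom keys and restores the macOS defaults:

# Sketch: revert the screenshot name/type customizations set earlier.
# Assumes you wrote the 'name' and 'type' keys as shown above.
defaults delete com.apple.screencapture name
defaults delete com.apple.screencapture type
# Restart the UI server so the change takes effect immediately
killall SystemUIServer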
  • FUTURISM.COM
    ICE Deletes Post About Stopping the Flow of Illegal "Ideas"
Immigration and Customs Enforcement (ICE) has been forced to delete a graphic claiming that its job is to stop illegal "ideas" from entering the United States.

"If it crosses the border illegally, it's our job to STOP IT," the graphic on ICE's since-deleted X post reads. Among the things that could "illegally" enter American soil, the agency listed "people, money, products, ideas."

In a statement to Futurism and other outlets, ICE media lead Mike Alvarez insisted that the graphic was posted in error.

"That post was sent without proper approval and should not have been shared," Alvarez told us. "'Ideas' should have said 'intellectual property.'"

Ahead of that statement, however, ICE received ample backlash from those perturbed by the concept of ideas being considered illegal in a country founded on principles of free speech and religious assembly.

"The idea that ideas can be illegal," columnist David Rothkopf posted on Bluesky, "is actually the only idea that's illegal in this country."

Chillingly, ICE's deleted thought police post was made around the same time that the Associated Press reported that Homeland Security, the agency's parent organization, raised the specter of antisemitism when pressed for evidence on a contentious student deportation.

In a memo leaked to the AP, Homeland Security lawyers claimed that Columbia student and organizer Mahmoud Khalil — a Palestinian-Syrian green card holder who was forcibly taken into ICE custody in March on State Department orders that his visa be revoked — was guilty of no actual crime, save for his beliefs regarding Israel and Palestine.

Considered one of the leaders of Columbia's pro-Palestine encampment movement last year, Khalil was, as State Department secretary Marco Rubio put it in a document cited by the memo, participating in "otherwise lawful" activities amid his student organizing. Because he's not a full citizen, however, his activities fall under the purview of the State Department and HomeSec — and, they claim, "severely undermine" the American objective of combating "anti-Semitism around the world and in the United States."

As the AP notes, the 30-year-old student activist maintained his innocence in a letter sent from the ICE facility where he's being held in Louisiana.

"The Trump administration is targeting me as part of a broad strategy to suppress dissent," Khalil wrote in that March 18 missive. "Visa-holders, green-card carriers, and citizens alike will all be targeted for their political beliefs."

It seems, unfortunately, that he was right — especially because several other visa and green card-holding students have been similarly detained and deported since Khalil was arrested in the middle of the night on March 8, and hundreds more have had their student visas revoked.

In other words, ICE's graphic was unintentionally too honest: the thought police are real — and proud of it.

More on thought crimes: FBI Launches Task Force to Protect Tesla
  • WWW.CNET.COM
    Today's NYT Mini Crossword Answers for Saturday, April 12
    Here are the answers for The New York Times Mini Crossword for April 12.
  • WWW.NINTENDOLIFE.COM
    Mario Kart World Is Reportedly The Smoothest Entry Yet
Just imagine racing at 120fps.

We're slowly finding out more about how certain games will perform on the Switch 2, and now there's a new piece of information doing the rounds about the frame rate in Mario Kart World. Read the full article on nintendolife.com
  • TECHCRUNCH.COM
    Meta adds Stripe CEO Patrick Collison and Dina Powell McCormick to its board
In Brief · April 11, 2025

Banking executive and former presidential advisor Dina Powell McCormick and Stripe co-founder and CEO Patrick Collison are joining the board of Meta, according to a report from Axios.

Powell McCormick spent 16 years in several leadership roles at Goldman Sachs. She also served as Deputy National Security Advisor to President Donald J. Trump during his first administration. Collison started digital payments giant Stripe, which was valued at $91.5 billion earlier this year, with his brother John in 2010.

The appointments are seen as part of a broader move by Meta to "expand its board to include more global business experts" at a time "the company looks to curry favor with the Trump administration" ahead of an antitrust trial that begins Monday, reported Axios.

Collison previously served on Meta's advisory group. Powell McCormick is new to Meta. Their appointments will be effective April 15.
  • WWW.FORBES.COM
    ‘A Minecraft Movie’ On Pace For $80 Million Box Office In Weekend 2
    Jack Black and Jason Momoa are looking to mine another $80 million for "A Minecraft Movie" in its second weekend at the domestic box office.
  • WWW.DIGITALTRENDS.COM
    Top electronics to buy (and skip) at Costco and Sam’s Club
If you're following our ongoing saga dedicated to investigating Costco and Sam's Club for electronics, you may think that we're under the impression that all electronics are best purchased at these warehouse stores. That's not the case. There are some categories of products that tend to be great buys at these stores and others that you should avoid. Here, we're aiming to provide a comprehensive warehouse club electronics buying guide, so you can stick to the good stuff and avoid the duds. We're hitting all the major product categories and things you'll see, with some helpful tips and notes along the way.

Best buys categories

So, what are the best electronics to buy at Costco? What about the best electronics to buy at Sam's Club? While you can find good deals on electronics throughout these and other warehouse stores, some categories are just better than others, plain and simple. Here are the electronics categories that we think you should spend extra time on when shopping at Costco and Sam's Club.

TVs (large screen)

TVs, and large-screen TVs in particular, are some of the smartest electronics buys at Costco and Sam's Club. Bulk purchasing power on these (often) mass-produced goods results in competitive pricing. Plus, a larger TV is often right at the price point where you start to need whatever discounts you can get. But it's trickier than that and worth a deeper look.

Most makes of TVs have several models, with different sizes and (sometimes) subtly different features. Once the base technology, UI, and software/hardware combo is just right, making TVs in different sizes is apparently not nearly as giant a leap as you might expect. Unfortunately, each model also comes with a sort of minimum acceptable price — a price that retailers can't go below. However, we've found that warehouse retailers can get special savings on unique warehouse models made just for them in exchange for buying mass quantities of said models. Then, you can get some of those savings passed along to you. When it works right, it is a win-win-win scenario. And while, obviously, this kind of deal can work with any product, it is especially noticeable for TVs due to the (already) large number of models that each line uses.

Smart home devices (bundles and kits)

Smart home devices that include cameras, Wi-Fi extenders, and home security tend to come in bundles. Those bundles can give you bigger savings. For example, as we browse, we currently see some quality Reolink bundles. Checking Amazon, we see that similar bundles with roughly equivalent stats are more expensive. That being said, don't immediately slam the brakes on good old Amazon — we're largely finding different bundles; they just have similar stats and camera counts and come from the same brand. But when you compare two sets of eight security cameras from the same brand, both with 12MP cameras and 4TB of storage, a $450 price differential is a big deal. The bottom line here is that there is the potential to pay significantly less at a warehouse store. But it isn't quite the same as TVs. If you seek true clarity from a dollar-per-value-gained perspective, you'll have to do deeper research.

Accessories (cables, batteries, etc.)

Costco and Sam's Club may buy things in corporate-level bulk, but you can buy from them in family-level bulk and still get great savings.
This can be especially convenient for things like batteries, wires, and cables that you buy a lot of at one time and that are relatively tiny (i.e., you won't need to take up garage space storing them all). As an example, we're seeing that a 40-pack of Duracell AA Power Boost batteries is $12 cheaper at Costco than at the competition. If you have a Costco or Sam's Club membership and need some small consumable like this, it is always worth giving the site a quick search to see what is available.

Categories to be cautious of

Even if you agree with us that Costco and Sam's Club are excellent for electronics, it doesn't mean that you should go blindly forth and buy everything that you see. As a result, this half of our warehouse retailer electronics guide is dedicated to the electronics to avoid at Costco and the electronics to skip at Sam's Club.

High-end, specialty electronics

One of the big advantages of warehouse stores is their potential to distribute savings to you that they get from gigantic bulk buys. High-end and specialty items don't have that bulk, so they probably won't get that advantage. Be cautious here.

Popular consumer tech (headphones, laptops, etc.)

You might've come into this just assuming that you'd be able to find better deals on an Asus Vivobook or a pair of Bose headphones with no problem at Costco or Sam's Club. This was the base assumption we started with, too. However, as the search went on, matching products by brand and by processor, RAM, and storage (product names alone don't always tell the full story, due to the way products are labelled for Amazon success, for instance), we found that finding the best deal on a warehouse site was not easy. There were many times in the search that I'd get excited for a moment — "Oh, the Vivobook 15 here is even cheaper than the one on the Asus website!" — only to keep looking and comparing and find some critical stat was way different. There always seemed to be a better or equivalent price elsewhere, and the number of times that something looked like it had a great price at a warehouse store but was on sale at a place like Best Buy for even less was way too high. Even marked-down items often struggled to make a case for themselves as a clearly superior buy at the warehouse store.

This all being said, we don't think you should skip Costco or Sam's Club altogether for these items. If the price is equivalent, that can actually bend in your favor if you're getting 2% back from your membership card, for example.

Conclusion

At the end of the day, for many product categories your Costco, Sam's Club, or other warehouse retailer is just that, another retailer — not a magical dispensary of cheaper goods. Be sure to always check the price differences between Costco, Sam's Club, and other stores. Just because you're a member, it doesn't mean you should ignore Amazon, Best Buy, Walmart, and brand sites altogether. Keep your due diligence high. As we begin to introduce Costco and Sam's Club deals into our regular coverage, we'll do the same.
  • ARSTECHNICA.COM
    Powerful programming: BBC-controlled electric meters are coming to an end
Imagine the conspiracy theories in the US

Customers are being pushed to smart meters that have their own signal problems.

Kevin Purdy – Apr 11, 2025

An Economy 7 meter and accompanying Radio Teleswitch. Credit: Richard Harvey (Public domain)

Radio signal broadcasts have their usefulness, but they eventually end (except, perhaps, for SETI). Every so often, we mark the public end of a once-essential wavelength, such as 3G cellular, analog television, or the Canadian time check. One of the most weirdly useful signals will soon end in the United Kingdom, with notable consequences if its transition is not properly handled.

Beginning in the early 1980s, UK homes could have electrical meters installed with a radio teleswitch attached. These switches listened for a 198 kHz signal from the BBC's Radio 4 Long Wave service, primarily broadcast from the powerful Droitwich Transmitting Station. The switches monitored 30 messages per minute, waiting for a certain 50-bit data packet signaling that electricity had switched to cheaper, off-peak rates ("tariffs" in the UK). With this over-the-air notice, homes that bought into Economy 7 or Economy 10 (7 or 10 hours of reduced-price power) could make use of ceramic-stuffed storage heaters that stayed warm into the day, prepare hot water heaters, and otherwise make use of off-peak power. How the electrical companies, the BBC, and the meters worked together is fascinating in its own right and documented in a recent video by Ringway Manchester (which we first saw at Hackaday): a useful history of, and explainer for, the Radio Teleswitch Signal carried for decades by BBC Radio 4.

Very fragile tungsten linchpins

But BBC Radio 4's Long Wave transmissions are coming to an end, due to both modern realities and obscure glass valves. Two rare tungsten-centered, hand-crafted cooled anode modulators (CAMs) are needed to keep the signal going, and while the BBC bought up the global supply of them, they are running out. The service is seemingly on its last two valves and has been telling the public about Long Wave radio's end for nearly 15 years. Trying to remanufacture the valves is hazardous, as any flaws could cause a catastrophic failure in the transmitters.

BBC Radio 4's 198 kHz transmitting towers at Droitwich. Credit: Bob Nienhuis (Public domain)

Rebuilding the transmitter, or moving to different, higher frequencies, is not feasible for the very few homes that cannot get other kinds of lower-power radio or internet versions, the BBC told The Guardian in 2011. What's more, keeping Droitwich powered such that it can reach the whole of the UK, including Wales and lower Scotland, requires some 500 kilowatts of power, more than most other BBC transmission types.

As of January 2025, roughly 600,000 UK customers still use RTS meters to manage their power switching, after 300,000 were switched away in 2024. Utilities and the BBC have agreed that the service will stop working on June 30, 2025, and have pushed to upgrade RTS customers to smart meters. In a combination of sad reality and rich irony, more than 4 million smart meters in the UK are not working properly.
Some have delivered eye-popping charges to their customers, based on estimated bills instead of real readings, like Sir Grayson Perry's 39,000 pounds due on 15 simultaneous bills. But many have failed because the UK, like other countries, phased out the 2G and 3G networks older meters relied upon without coordinated transition efforts.

Kevin Purdy is a senior technology reporter at Ars Technica, covering open-source software, PC gaming, home automation, repairability, e-bikes, and tech history. He has previously worked at Lifehacker, Wirecutter, iFixit, and Carbon Switch.