• Quordle hints and answers for Sunday, February 9 (game #1112)
    www.techradar.com
    Looking for Quordle clues? We can help. Plus get the answers to Quordle today and past solutions.
  • PSN is down: gamers left without their online fix as the PlayStation Network outage is still ongoing after 12 hours
    www.techradar.com
    We're seeing red lights across Sony's service status dashboard, and it's not looking too good for weekend gaming prospects right now.
  • Spigen Accidentally Reveals iPhone SE 4 Design: Bigger Screen, Action Button, and More
    www.yankodesign.com
    I was under the assumption that the mute switch would still find its place on the iPhone SE 4, given that the device also retains the notch display from older times. However, Spigen's accidental leak may have confirmed the opposite: the iPhone SE 4 will come with an Action Button, which is pretty interesting for a budget phone.

    Spigen just did Apple dirty, whether they meant to or not. The case manufacturer accidentally uploaded images of the iPhone SE 4 wrapped in one of its protective cases, giving us what might be the clearest look yet at Apple's next budget-friendly phone. And despite Spigen labeling it as an iPhone SE (3rd Gen), the design tells a completely different story. As much as Spigen tried to backtrack its oopsie, the SE 4 is clearly ditching the iPhone 8-era look and jumping straight to an iPhone 14-inspired design.

    Designer: Spigen

    The leaks align with earlier rumors, showing a 6.1-inch display with a notch up top, a single camera on the back, and what appears to be an Action Button replacing the classic Alert Slider. If that last detail holds up, it means Apple is bringing one of the iPhone 15 Pro's standout features to its most affordable model. The Action Button, which lets you assign shortcuts like launching the camera or toggling Focus modes, could be a game-changer for the SE lineup, making it feel a little less like an entry-level device and more like a proper modern iPhone.

    If history is anything to go by, case makers like Spigen tend to have insider access to device dimensions ahead of launch. Their business depends on getting the fit exactly right, so it's safe to assume this design is pretty close to what Apple will unveil. The biggest takeaway? The Home button is officially a relic. With this shift, the SE lineup moves fully into the Face ID era, leaving Touch ID behind.

    Under the hood, leaks suggest the iPhone SE 4 could pack a beefier processor, potentially the A16 Bionic, making it one of the most powerful phones in its price range. The jump in display size and processing power means Apple is repositioning the SE as more than just a nostalgia trip for classic iPhone fans. It's shaping up to be a no-frills iPhone for people who don't need all the bells and whistles of a Pro model but still want a device that will last for years.

    One speculative detail: these renders don't show the USB-C port, but it's safe to assume Apple will outfit the iPhone SE 4 with USB-C instead of the Lightning connector from past models. Not only does that align with Apple's complete switch to USB-C, it also lines up with the EU's requirement to ditch proprietary connectors and cables. Although Apple doesn't expect the SE 4 to be a breakthrough device in the EU, I'm sure it just wants to comply rather than choosing a hill to die on.

    As for the launch, Apple isn't holding a flashy event, but trusted insiders suggest a press release and a YouTube video announcement could drop as soon as February 12 or 13, with February 18 as a possible fallback date. Pricing remains a mystery, though it's expected to land closer to $499, a slight bump from the current SE's $429 starting price. Given the hardware upgrades, that price increase wouldn't be surprising.
  • Sprout-like scanner and projector concept helps kids reconnect with nature anywhere
    www.yankodesign.com
    Our world today is filled with so many devices and gadgets that even kids start using them at a young age. While it does help them become more connected to a vast trove of knowledge, they also become more accustomed to technology to the detriment of other parts of the world around them. Nature can sometimes be alien to them and sometimes even boring, especially when they can't take those objects and experiences with them like they can with smartphones and tablets.

    It doesn't have to be an either-or situation where the embrace of technology is mutually exclusive with enjoying the richness of nature. All it takes is some creative thinking to blend these two worlds together in harmony. This concept for a kid-oriented device, for example, uses the very same technologies that enchant and entertain children to spark their curiosity about nature and continue their learning journey at home.

    Designers: Hayeon Cho, Jihye Choi, Park Minji, Jiyoung Yun

    Nolly is admittedly a difficult design to describe based on appearances alone. It is part projector, part scanner, and part interactive screen, all designed in smooth organic shapes and a pastel green color scheme that makes the parts look like pieces of a plant. The wall-mounted projector, in particular, looks like a growing sprout, especially with its cable looking like a stem attached to a base that acts as its roots.

    The bell-shaped scanner is called a stamp, though what it really does is collect images of plants, flowers, and small animals that kids see in the wild during their time in the outside world. Coming back home, they place the stamp down in the middle of a disc palette that reads the information from the stamp and displays pictures of the foliage the child has captured throughout the day. This gives them the opportunity to revisit those discoveries and maybe even learn more about them.

    The projector can also be used to display a selected plant on the floor or the wall across it, creating an interactive canvas for the child to enjoy. Using image inpainting technologies, it turns static 2D pictures of plants and flowers into moving graphics that kids can try to touch or step on, causing petals and leaves to burst and scatter around. Although real-world plants don't behave this way, this virtual experience imprints on the child's mind the idea of nature being alive in more ways than one.
  • Temu and Shein Raised Prices, Removed Products as Trump's China Tariffs Went Into Effect
    www.wired.com
    Sellers and shoppers on the two sites say they saw items disappear and prices go up after President Donald Trump implemented tariffs on Chinese imports.
  • Apple execs spotted in New Orleans ahead of Super Bowl
    appleinsider.com
    If you urgently need to get ahold of Apple CEO Tim Cook or one of the other senior Apple executives this weekend, you'll need to head to New Orleans and the Caesars Superdome. Super Bowl LIX will be played in New Orleans on Sunday, February 9, and several of Apple's top brass have been spotted in the city ahead of the big event. CEO Tim Cook, retail chief Deirdre O'Brien, and Apple Services head Eddy Cue are among the execs seen so far. Cook and O'Brien also dropped by the Apple Lakeside Shopping Center store in Metairie, surprising employees there. Cue, Cook, and O'Brien were sighted at Domilise's Po-Boys & Bar on Annunciation Street in the downtown area.
  • Begin with problems, sandbox, identify trustworthy vendors: a quick guide to getting started with AI
    venturebeat.com
    Focus first on problem-solving, followed closely by testing and pilot programs, data security, and identifying the tangible value of AI.
  • Fine-Tuning of Llama-2 7B Chat for Python Code Generation: Using QLoRA, SFTTrainer, and Gradient Checkpointing on the Alpaca-14k Dataset
    www.marktechpost.com
    In this tutorial, we demonstrate how to efficiently fine-tune the Llama-2 7B Chat model for Python code generation using advanced techniques such as QLoRA, gradient checkpointing, and supervised fine-tuning with the SFTTrainer. Leveraging the Alpaca-14k dataset, we walk through setting up the environment, configuring LoRA parameters, and applying memory optimization strategies to train a model that excels at generating high-quality Python code. This step-by-step guide is designed for practitioners seeking to harness the power of LLMs with minimal computational overhead.

    ```
    !pip install -q accelerate
    !pip install -q peft
    !pip install -q transformers
    !pip install -q trl
    ```

    First, install the required libraries for our project. They include accelerate, peft, transformers, and trl from the Python Package Index. The -q flag (quiet mode) keeps the output minimal.

    ```python
    import os

    from datasets import load_dataset
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        HfArgumentParser,
        TrainingArguments,
        pipeline,
        logging,
    )
    from peft import LoraConfig, PeftModel
    from trl import SFTTrainer
    ```

    Import the essential modules for our training setup. They include utilities for dataset loading, the model and tokenizer, training arguments, logging, LoRA configuration, and the SFTTrainer.

    ```python
    # The model to train, from the Hugging Face hub
    model_name = "NousResearch/llama-2-7b-chat-hf"

    # The instruction dataset to use
    dataset_name = "user/minipython-Alpaca-14k"

    # Fine-tuned model name
    new_model = "/kaggle/working/llama-2-7b-codeAlpaca"
    ```

    We specify the base model from the Hugging Face hub, the instruction dataset, and the new model's name.

    ```python
    # QLoRA parameters

    # LoRA attention dimension
    lora_r = 64

    # Alpha parameter for LoRA scaling
    lora_alpha = 16

    # Dropout probability for LoRA layers
    lora_dropout = 0.1
    ```

    Define the LoRA parameters for our model: `lora_r` sets the LoRA attention dimension, `lora_alpha` scales the LoRA updates, and `lora_dropout` controls the dropout probability.

    ```python
    # TrainingArguments parameters

    # Output directory where the model predictions and checkpoints will be stored
    output_dir = "/kaggle/working/llama-2-7b-codeAlpaca"

    # Number of training epochs
    num_train_epochs = 1

    # Enable fp16 training (set to True for mixed precision training)
    fp16 = True

    # Batch size per GPU for training
    per_device_train_batch_size = 8

    # Batch size per GPU for evaluation
    per_device_eval_batch_size = 8

    # Number of update steps to accumulate the gradients for
    gradient_accumulation_steps = 2

    # Enable gradient checkpointing
    gradient_checkpointing = True

    # Maximum gradient norm (gradient clipping)
    max_grad_norm = 0.3

    # Initial learning rate (AdamW optimizer)
    learning_rate = 2e-4

    # Weight decay to apply to all layers except bias/LayerNorm weights
    weight_decay = 0.001

    # Optimizer to use
    optim = "adamw_torch"

    # Learning rate schedule
    lr_scheduler_type = "constant"

    # Group sequences into batches with the same length
    # (saves memory and speeds up training considerably)
    group_by_length = True

    # Ratio of steps for a linear warmup
    warmup_ratio = 0.03

    # Save a checkpoint every X update steps
    save_steps = 100

    # Log every X update steps
    logging_steps = 10
    ```

    These parameters configure the training process. They include output paths, the number of epochs, precision (fp16), batch sizes, gradient accumulation, and checkpointing. Additional settings like the learning rate, optimizer, and schedule help fine-tune training behavior. Warmup and logging settings control how the model starts training and how we monitor progress.
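    One worked detail: gradient accumulation multiplies the effective batch size. With `per_device_train_batch_size = 8` and `gradient_accumulation_steps = 2`, each optimizer step averages gradients over 8 × 2 = 16 sequences per GPU, while only a batch of 8 resides in memory at a time.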
    ```python
    import torch

    print("PyTorch Version:", torch.__version__)
    print("CUDA Version:", torch.version.cuda)
    ```

    Import PyTorch and print both the installed PyTorch version and the corresponding CUDA version.

    ```
    !nvidia-smi
    ```

    This command shows the GPU information, including the driver version, CUDA version, and current GPU usage.

    ```python
    # SFT parameters

    # Maximum sequence length to use
    max_seq_length = None

    # Pack multiple short examples into the same input sequence to increase efficiency
    packing = False

    # Load the entire model on GPU 0
    device_map = {"": 0}
    ```

    Define the SFT parameters, such as the maximum sequence length, whether to pack multiple examples, and mapping the entire model to GPU 0.

    ```python
    # Load dataset
    dataset = load_dataset(dataset_name, split="train")

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
    tokenizer.pad_token = tokenizer.eos_token
    tokenizer.padding_side = "right"

    # Load base model in half precision (fp16)
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.float16,
        device_map="auto",
    )

    # Prepare model for training
    model.gradient_checkpointing_enable()
    model.enable_input_require_grads()
    ```

    Load the dataset and tokenizer, configuring the padding token, and load the base model in half precision. Finally, we enable gradient checkpointing and ensure the model requires input gradients for training.
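    Since the tutorial centers on QLoRA, it is worth noting that the load above keeps the base weights in fp16 without quantization. A typical QLoRA setup instead quantizes the frozen base weights to 4-bit via bitsandbytes. Here is a minimal, illustrative sketch of such a load (an assumption on our part, not the configuration used in the code above; it requires the bitsandbytes package):

    ```python
    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig

    # Quantize the frozen base weights to 4-bit NF4 while computing in fp16;
    # this is the memory-saving idea behind QLoRA (illustrative sketch only)
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
        bnb_4bit_use_double_quant=True,
    )

    model = AutoModelForCausalLM.from_pretrained(
        model_name,  # same base model variable defined earlier
        quantization_config=bnb_config,
        device_map="auto",
    )
    ```

    In a full QLoRA run, you would typically also call `peft.prepare_model_for_kbit_training(model)` before applying LoRA, so the adapters train in higher precision on top of the quantized, frozen base.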
    ```python
    from peft import get_peft_model
    ```

    Import the `get_peft_model` function, which applies parameter-efficient fine-tuning (PEFT) to our base model.

    ```python
    # Load LoRA configuration
    peft_config = LoraConfig(
        lora_alpha=lora_alpha,
        lora_dropout=lora_dropout,
        r=lora_r,
        bias="none",
        task_type="CAUSAL_LM",
    )

    # Apply LoRA to the model
    model = get_peft_model(model, peft_config)

    # Set training parameters
    training_arguments = TrainingArguments(
        output_dir=output_dir,
        num_train_epochs=num_train_epochs,
        per_device_train_batch_size=per_device_train_batch_size,
        gradient_accumulation_steps=gradient_accumulation_steps,
        optim=optim,
        save_steps=save_steps,
        logging_steps=logging_steps,
        learning_rate=learning_rate,
        weight_decay=weight_decay,
        fp16=fp16,
        max_grad_norm=max_grad_norm,
        warmup_ratio=warmup_ratio,
        group_by_length=True,
        lr_scheduler_type=lr_scheduler_type,
    )

    # Set supervised fine-tuning parameters
    trainer = SFTTrainer(
        model=model,
        train_dataset=dataset,
        dataset_text_field="text",
        max_seq_length=max_seq_length,
        tokenizer=tokenizer,
        args=training_arguments,
        packing=packing,
    )
    ```

    Configure and apply LoRA to our model using `LoraConfig` and `get_peft_model`. We then create `TrainingArguments` for model training, specifying epoch counts, batch sizes, and optimization settings. Lastly, we set up the `SFTTrainer`, passing it the model, dataset, tokenizer, and training arguments.

    ```python
    # Train model
    trainer.train()

    # Save trained model
    trainer.model.save_pretrained(new_model)
    ```

    Initiate the supervised fine-tuning process with `trainer.train()`, then save the trained LoRA adapter to the specified directory.

    ```python
    # Run a text generation pipeline with the fine-tuned model
    prompt = "How can I write a Python program that calculates the mean, standard deviation, and coefficient of variation of a dataset from a CSV file?"
    pipe = pipeline(task="text-generation", model=trainer.model, tokenizer=tokenizer, max_length=400)
    result = pipe(f"<s>[INST] {prompt} [/INST]")
    print(result[0]['generated_text'])
    ```

    Create a text generation pipeline using our fine-tuned model and tokenizer. Then, we provide a prompt, generate text using the pipeline, and print the output.

    ```python
    from kaggle_secrets import UserSecretsClient

    user_secrets = UserSecretsClient()
    secret_value_0 = user_secrets.get_secret("HF_TOKEN")
    ```

    Access Kaggle Secrets to retrieve a stored Hugging Face token (`HF_TOKEN`). This token is used for authentication with the Hugging Face Hub.

    ```python
    # Empty VRAM
    # del model
    # del pipe
    # del trainer
    # del dataset
    del tokenizer
    import gc
    gc.collect()
    gc.collect()
    torch.cuda.empty_cache()
    ```

    The above snippet shows how to free up GPU memory by deleting references and clearing caches. We delete the tokenizer, run garbage collection, and empty the CUDA cache to reduce VRAM usage.

    ```python
    import torch

    # Check the number of GPUs available
    num_gpus = torch.cuda.device_count()
    print(f"Number of GPUs available: {num_gpus}")

    # Check if CUDA device 1 is available
    if num_gpus > 1:
        print("cuda:1 is available.")
    else:
        print("cuda:1 is not available.")
    ```

    We import PyTorch and check the number of GPUs detected. Then, we print the count and conditionally report whether the GPU with ID 1 is available.

    ```python
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    # Specify the device ID for your desired GPU (e.g., 0 for the first GPU, 1 for the second)
    device_id = 1  # Change this based on your available GPUs
    device = f"cuda:{device_id}"

    # Load the base model
    base_model = AutoModelForCausalLM.from_pretrained(
        model_name,
        low_cpu_mem_usage=True,
        return_dict=True,
        torch_dtype=torch.float16,
        device_map="auto",  # Use auto to load on the available device
    )

    # Load the LoRA weights
    lora_model = PeftModel.from_pretrained(base_model, new_model)

    # Move the LoRA model to the specified GPU
    lora_model.to(device)

    # Merge the LoRA weights with the base model weights
    model = lora_model.merge_and_unload()

    # Ensure the merged model is on the correct device
    model.to(device)

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
    tokenizer.pad_token = tokenizer.eos_token
    tokenizer.padding_side = "right"
    ```

    Select a GPU device (device_id 1) and load the base model with the specified precision and memory optimizations. Then, load and merge the LoRA weights into the base model, ensuring the merged model is moved to the designated GPU. Finally, load the tokenizer and configure it with appropriate padding settings.
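    The merge produces a standalone fp16 model, which you may want to write out so it can be reloaded later without PEFT. A minimal sketch; the output path is hypothetical, not from the original notebook:

    ```python
    # Save the merged model and tokenizer as a regular checkpoint
    # (the directory name below is hypothetical)
    merged_dir = "/kaggle/working/llama-2-7b-codeAlpaca-merged"
    model.save_pretrained(merged_dir)
    tokenizer.save_pretrained(merged_dir)
    ```

    From that directory, `AutoModelForCausalLM.from_pretrained(merged_dir)` reloads the fine-tuned model directly.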
    In conclusion, following this tutorial, you have successfully fine-tuned the Llama-2 7B Chat model to specialize in Python code generation. Integrating QLoRA, gradient checkpointing, and the SFTTrainer demonstrates a practical approach to managing resource constraints while achieving high performance. Download the Colab Notebook here.
  • Daily Deals: Super Mario Party Jamboree, Nvidia Shield, Apple AirPods 4, and More
    www.ign.com
    The weekend is officially here, and we've rounded up the best deals you can find! Discover the best deals for Saturday, February 8, below:

    Donkey Kong Country Returns HD for $44.99
    Donkey Kong Country Returns HD is the latest first-party game for Nintendo Switch, and you can already save $15 at Woot. If you enjoyed Donkey Kong Country: Tropical Freeze, Returns HD is another excellent entry you are sure to have fun with. There are many different worlds to traverse, with a variety of collectibles and items to discover.

    Apple AirPods 4 for $99.99
    Amazon has the Apple AirPods 4 on sale for $99.99 today. These earbuds feature Spatial Audio, up to five hours of listening time per charge, and so much more. Apple AirPods 5 likely won't be out for a good bit, so now is the perfect time to pick up a pair of new AirPods if your old ones are giving out.

    Nvidia Shield 4K Android TV for $179.99
    Amazon has the Nvidia Shield 4K Android TV on sale for $179.99 today. This is still one of the best media streamers out there, especially with all its features. In our 8/10 review, we wrote, "For its speedy performance, AI upscaling, well-designed remote, and lower price, the Nvidia Shield TV is one of the best media streamers you can buy. Its ability to stream games makes it even more attractive for gamers with GeForce-based rigs."

    Final Fantasy Pixel Remaster Collection for $39.99
    The Final Fantasy Pixel Remaster Collection has hit a new all-time low at Woot, priced at just $39.99. The first six Final Fantasy titles paved the way for the series as we see it today. Many fans still regard both Final Fantasy IV and Final Fantasy VI as some of the best that Final Fantasy has to offer, with gripping narratives and engaging gameplay. This package includes all six Final Fantasy Pixel Remasters, which feature updated graphics, soundtracks, fonts, and more.

    Super Monkey Ball Banana Rumble for $19.99
    Super Monkey Ball Banana Rumble is the return to form many Monkey Ball fans have waited years for. You've got over 200 courses, tons of guest characters, and all sorts of modes; what's not to love? In our 8/10 review, we wrote, "Super Monkey Ball Banana Rumble is a brilliant return to form. Monkey Ball has finally found its way home again with a set of 200 fantastic courses that range from delightfully charming to devilishly challenging, backed up by tight mechanics and predictable physics that put me in total control of my monkey's fate."

    Super Mario Party Jamboree for $44.99
    Mario Party is the quintessential party game for players of all ages. Whenever you've got a gathering of people, there's never a question of whether Mario Party is going to be a fun activity. Super Mario Party Jamboree, the latest entry in the series, is the best the series has been on Nintendo Switch, with seven different boards and over 100 minigames to explore. At $44.99, this is a great deal on one of the best games for Nintendo Switch.

    Sonic X Shadow Generations for $29.99
    Sonic X Shadow Generations for PS5 is $20 off at Woot right now. This package includes a remastered version of Sonic Generations and a brand-new campaign focused on Shadow. Both 2D and 3D levels are included, making for the ultimate package for any Sonic fan.
  • Indie App Spotlight: Bento|Craft is a nifty design tool for easily making bento graphics
    9to5mac.com
    Welcome to Indie App Spotlight. This is a weekly 9to5Mac series where we showcase the latest apps in the indie app world. If you're a developer and would like your app featured, get in contact.

    Bento|Craft is a great tool for easily making Apple-style bento box graphics in a matter of seconds. It provides dozens of templates and mockups, lets you customize layouts, and exports very quickly. It's a high-quality, simple-to-use design tool.

    Highlights

    As mentioned earlier, Bento|Craft comes with a number of templates and mockups, making it easier than ever to get started. There are six templates to choose from, giving you a number of design choices. You can also easily change your device mockups as you see fit.

    For the most part, it's a drag-and-drop tool. You can easily pull images in, customize text, and add other symbols and icons. Exports are also pretty quick, allowing you to promptly share your designs to social media.

    It also offers a native visionOS app with a great native interface. It's an awesome tool for making quick graphics for your app (or any other project) that can be shared in a blink.

    Bento|Craft is available for free on the App Store for iPads running iPadOS 17 and later. It's also available (as an iPad app) on Macs running macOS Sonoma or later, and it's natively available on Apple Vision Pro.

    For the full experience with unlimited exports, you'll need to pay $1.99/week, $4.99/month, or $59.99 for the lifetime pass. The app is pretty capable for free, though.