• TOWARDSAI.NET
    LLM Fine-Tuning Guide: Do You Need It and How to Do It
    Author(s): Igor Novikov. Originally published on Towards AI.

Working with LLMs, one of the most popular questions we get is about fine-tuning. Every second client asks if they should do additional training on their model. In most cases the answer is no, they don't need it. Modern LLMs are good enough without fine-tuning for many commercial applications, like a bot that helps clients order flowers from a flower shop. Besides, they don't have the data to do it, and no, the 20 samples of dialogues they have do not count (and neither do 200).

Training and fine-tuning models is an expensive ordeal, and you really should avoid it if you can and spend the money saved on a trip to Aruba, or whatever vacation place you fancy.

Image by the author

But there are cases when you do need it. For example, if you want the LLM to follow a very specific chat format, or to have knowledge in a very specific domain, or if you want to cut costs by training a small model to do a very specialized task instead of using a large LLM with hundreds of billions of parameters. These are all valid cases for creating a tailored model through fine-tuning. So let's look at the ways to do just that.

When to fine-tune

As said above, you should only fine-tune if you have to. Try to solve the task with prompt engineering first, or build a RAG system. If that fails, consider fine-tuning.

Fine-tuning has the following disadvantages:

- It costs money and takes time
- You will need good training data or it will not work
- It can lead to more frequent hallucinations even if done properly, as we are adding new behavior to a model that was not initially tailored for that.
If you make recurrent updates to the model, at some point this is almost guaranteed; it is called drift, so you will have to evaluate your model for it.

Once you have considered all of the above and still think a general LLM is not good enough, you need to fine-tune.

Data

To fine-tune, you will need data in a specific format, called an instruction dataset.

Where to get data

There are a lot of open datasets you can use, for example, the Anthropic HH-RLHF dataset for model alignment, MIMIC-III for healthcare, and CodeSearchNet for coding. There are:

- Domain-specific datasets: medicine, law, coding, and so on
- Task-specific datasets: useful for training the model to do one specific task, e.g. building RPAs
- General-purpose datasets with generic knowledge, usually created from data crawled from the internet
- Alignment datasets: used for format, style, and safety alignment

The Hugging Face Hub has lots of instruction datasets you can use for different domains; I suggest starting there.

But since you decided to fine-tune, you likely have your own data, so you will need to create your own dataset. Otherwise, why would you do it? If you don't have enough samples, you can generate synthetic data using large LLMs like ChatGPT by extrapolating from the data you have. I'll talk about that later.

Data requirements

The dataset size depends on model size, task complexity, and training method. Companies like OpenAI use humongous datasets with millions of items, which is not feasible for most companies due to cost, so realistically we are going to have several thousand samples.

For simple changes like communication-style alignment you don't need a lot of samples; several hundred will do. For domain-specific knowledge training you will need several thousand to hundreds of thousands, depending on the domain. In general, more is better, and it is better to have at least several thousand samples.

Quality of data matters no less, and probably even more, than quantity.
You need to make sure the data correctly reflects the behaviors you want the model to learn, in both meaning AND format. I want to stress the format: you want the model to output information in a way your users can understand, in terms of clarity and style. There is no use in a model that tells the truth in rap verses, unless you want to create an Eminem twin.

Data preparation

Data preparation is a critical step, as the quality of your data directly impacts the performance and accuracy of your model. Preparing your data involves several processes to ensure it is clean, relevant, and suitable for training:

1. Deduplication

Duplicated data points can inflate training costs, introduce unnecessary noise, and lead to overfitting or biases in your model. Here are common approaches:

Text normalization:
- Convert text to lowercase.
- Remove special characters, extra spaces, and punctuation to standardize the content.

Hash-based deduplication:
- Generate a hash of the normalized text. A commonly used technique is MinHash, which captures the essence or semantic fingerprint of an item rather than its exact text. This allows for identifying duplicates even if their format or small details differ. You can use libraries like datasketch to do that.
- Compare hashes and remove matching entries.

Vector-based deduplication:
- Convert items into vector representations (embeddings) to measure their semantic similarity.
- Use a vector database like Qdrant, Pinecone, or Weaviate to efficiently find similar items.
- Apply a cross-encoder on top of retrieved items to compute their similarity scores more accurately. This step helps you confidently identify and eliminate near-duplicates.

2. Personal information removal

You need to de-identify the data because you don't want the model to learn (and then tell everybody) the personal information of people (unless that's what you want). This can have serious legal and ethical implications, especially with regulations like GDPR.
Besides, personal data is usually not relevant to the domain knowledge anyway.

De-identification:
- Use regex patterns for detecting common formats (e.g., emails or phone numbers).
- Leverage pre-trained NLP models designed for named entity recognition (NER) to identify and redact personal data.

Domain-specific filtering:
- You may create your own filters based on the context of your data. For example, medical data may require removing health-related identifiers as defined by HIPAA.

3. Decontamination

Your dataset might contain content that can negatively affect model behavior:

Malicious content:
- Detect and filter out embedded commands targeting large language models (e.g., prompt injections), scripts, XSS, SQL injection code, etc. Automated scanning tools or specialized LLM-based classifiers can assist in identifying such patterns.

Inappropriate language:
- Filter out curse words, slurs, offensive content, and slang.

4. Rule-based filtering

Not all data in your dataset will be relevant to your domain or task. Rule-based filtering helps eliminate irrelevant or harmful content:

- Define exclusion criteria based on the task. For instance, if you are training a financial model, exclude non-financial data.
- Use keyword searches, phrases, or topic modeling to identify irrelevant content.

I suggest using a hybrid approach:

- Use simple tools first: regex or keyword-based search for patterns, like identifying email addresses or phone numbers.
- On the remaining items, use an LLM as a judge to evaluate the relevance or quality of the data. For example, ask an LLM to label whether an item is appropriate for the training task.
- Use specialized ML models for complex cleaning tasks, such as detecting and filtering out toxic language. There are a bunch of pre-trained models on Hugging Face for that.

Data evaluation

After all these steps, I suggest having a separate pipeline to check data quality. This can be done by humans, and if you only have several hundred samples, that is feasible. But if you have thousands, it is unlikely.
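The normalization, hash-based deduplication, and regex PII steps described above can be sketched in a few lines of plain Python. This is a simplified stand-in for the MinHash/datasketch and NER tooling mentioned earlier: it only catches exact duplicates after normalization, and the two regex patterns are illustrative, not production-grade.

```python
import hashlib
import re

# Illustrative patterns for two common PII formats (emails, US-style phone
# numbers); real pipelines need locale-specific rules plus NER on top.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def normalize(text: str) -> str:
    """Lowercase and strip punctuation/extra whitespace before hashing."""
    text = text.lower()
    text = re.sub(r"[^\w\s]", "", text)
    return re.sub(r"\s+", " ", text).strip()

def scrub_pii(text: str) -> str:
    """Redact emails and phone numbers before the text reaches training."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    return PHONE_RE.sub("[PHONE]", text)

def dedupe(samples: list[str]) -> list[str]:
    """Keep only the first occurrence of each normalized-text hash."""
    seen, kept = set(), []
    for s in samples:
        h = hashlib.sha256(normalize(s).encode()).hexdigest()
        if h not in seen:
            seen.add(h)
            kept.append(scrub_pii(s))
    return kept

samples = [
    "Contact me at jane@example.com!",
    "contact me at jane@example.com",   # duplicate after normalization
    "Call 555-123-4567 tomorrow.",
]
print(dedupe(samples))  # 2 items, with [EMAIL] and [PHONE] placeholders
```

For fuzzy duplicates (reworded but semantically identical items), swap the SHA-256 hash for MinHash signatures or embedding similarity as described above.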
So, again, you can use an LLM-as-a-judge approach, or use a simpler classifier model for automated assessment. See, for example, HuggingFaceFW/fineweb-edu-classifier. For an LLM you can use a prompt like:

You are a data quality evaluator. Your goal is to assess the quality of an instruction and its corresponding answer. Determine how effectively the answer addresses the given task in a clear, accurate, and complete manner.

Evaluation criteria:
- Relevance: Does the answer directly address the instruction?
- Clarity: Is the answer clear and easy to understand?
- Completeness: Does the answer provide all the necessary information to fulfill the instruction?
- Accuracy: Is the information in the answer factually correct?

Instructions:
1. Carefully read the provided instruction and answer.
2. Provide a score (1-5) for each of the evaluation criteria above. 1 = very poor, 5 = excellent.
3. Justify your score with specific examples or observations for each criterion.

Example for evaluation:
Instruction: Explain the water cycle.
Answer: The water cycle involves evaporation, condensation, and precipitation, moving water between the Earth's surface and atmosphere.

Your evaluation:
<Relevance>: 5 - The answer directly explains the water cycle.
<Clarity>: 4 - The answer is clear but could elaborate on each step.
<Completeness>: 3 - Missing details on processes like runoff or groundwater flow.
<Accuracy>: 5 - The provided information is correct.

Now, evaluate the following instruction-answer pair:
Instruction: [Insert instruction here]
Answer: [Insert answer here]

What the acceptable threshold is here is up to you; generally, I would start with 80-90%.

Also be aware of which LLM you use for this, and of the fact that LLMs have certain biases (almost like humans):

- They prefer verbose, long, well-argued answers to concise ones, even if the shorter answer is more correct.
- Items that come first in a list are often preferred by the model over the others. This is also known as "baby duck syndrome".
That's important if you are creating preference datasets (more on that later).

- Model bias: LLMs are likely to prefer data generated by a model of the same family. That's important if you are going to generate synthetic data for training.

Dataset formats

There are several popular formats; they are all fairly compact and use JSON, so you can use any of them.

OpenAI format

OpenAI's fine-tuning process uses the JSONL (JSON Lines) format, where each line represents a distinct training example:

{
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Can you explain the concept of photosynthesis?"},
    {"role": "assistant", "content": "Photosynthesis is the process by which green plants convert sunlight into chemical energy."}
  ]
}

Alpaca dataset format

Developed by Stanford's Center for Research on Foundation Models. Each entry in this dataset is structured as follows:

{
  "instruction": "Describe the structure of an atom.",
  "input": "",
  "output": "An atom consists of a nucleus containing protons and neutrons, with electrons orbiting this nucleus."
}

ShareGPT

The ShareGPT dataset format is designed to capture multi-turn conversations between users and AI assistants, accommodating various roles such as "human", "gpt", "observation", and "function".
This structure enables the representation of complex dialogues, including tool interactions and function calls. Each conversation is represented as a JSON object with the following components:

{
  "conversations": [
    {"from": "human", "value": "What is the capital of France?"},
    {"from": "gpt", "value": "The capital of France is Paris."},
    {"from": "human", "value": "Show me a map of Paris."},
    {"from": "function_call", "value": "map_search('Paris')"},
    {"from": "observation", "value": "<image of Paris map>"},
    {"from": "gpt", "value": "Here is a map of Paris."}
  ],
  "system": "You are a helpful assistant.",
  "tools": "map_search"
}

There are also OASST and others; you get the idea.

Fine-tuning techniques

Now that you have your training data, let's look at what we can do with it. The main techniques are:

- Full re-training
- LoRA
- QLoRA
- Direct Preference Optimization (DPO)

Full re-training

This is the process of training an entire model (all layers) on a specific dataset to optimize it for a particular task or domain. Most effective in theory, but it requires significant computing power, as it requires backpropagation through the entire model.

Since we are messing with model weights directly, it comes with certain risks:

- Risk of overfitting: since all weights are updated, there is a higher risk of overfitting to the fine-tuning dataset, especially if the dataset is small.
- Loss of generality: fine-tuned models may lose their general-purpose capabilities and previous knowledge.

So how much memory do we need for a full re-train? We need to load at least the following for training:

Model parameters + gradients + activations + optimizer states

1. Model parameters and gradients:

A 7B model has approximately 7 billion parameters; a 12B model approximately 12 billion. Each parameter typically requires 4 bytes (FP32 precision) or 2 bytes (FP16 precision); in FP32, a 12B model's weights alone would take 12 * 10^9 * 4 bytes = 48GB.
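A small helper makes this back-of-the-envelope arithmetic easy to rerun for other model sizes. The constants are rough assumptions, not exact figures: FP16 weights and gradients, two FP32 Adam moment estimates per parameter, and a flat activation allowance. Depending on which precisions you assume for the optimizer states, a 7B model lands somewhere in the 80-100GB range.

```python
def full_finetune_memory_gb(n_params_billion: float,
                            bytes_per_param: int = 2,
                            activation_gb: float = 10.0) -> float:
    """Rough lower bound on GPU memory (GB) for full fine-tuning.

    weights + gradients (bytes_per_param each) + Adam optimizer states
    (two FP32 moment estimates per parameter is a common approximation;
    the exact factor depends on the implementation) + activations.
    All constants are illustrative assumptions.
    """
    weights = n_params_billion * bytes_per_param   # 1e9 params -> GB
    grads = weights                                # same size as weights
    optimizer = n_params_billion * 4 * 2           # two FP32 moments
    return weights + grads + optimizer + activation_gb

print(round(full_finetune_memory_gb(7)))   # prints 94 under these assumptions
```

Real training frameworks (mixed-precision master weights, gradient checkpointing, sharded optimizers) shift these numbers considerably, so treat this as an order-of-magnitude estimate only.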
Let's assume 2 bytes (FP16), so:

- 7B model: 7 * 10^9 * 2 bytes = 14GB
- 12B model: 12 * 10^9 * 2 bytes = 24GB

Gradients add another 2 bytes per parameter, so additionally:

- 7B model: 14GB
- 12B model: 24GB

2. Activations:

Larger batch sizes, as well as longer sequence lengths, increase memory requirements. For a typical batch size of 8-32 and a sequence length of 512 tokens, activation memory might add:

- 7B model: 10-20GB
- 12B model: 15-30GB

3. Optimizer states:

Optimizers like Adam require memory for additional states (moment estimates). Adam keeps two extra values per parameter, typically in FP32 (4 bytes each), so:

- 7B model: 7 * 10^9 * 8 bytes = 56GB
- 12B model: 12 * 10^9 * 8 bytes = 96GB

There are going to be some additional things that consume memory, so we are looking at a minimum of roughly 14 + 14 + 10 + 56 = 94GB for a 7B model.

That is a lot of memory for a small model; you can imagine how much you would need for anything big. So full re-training is not practical and is rarely used. What are the alternatives?

LoRA

Image by the author

Suppose you want to change the model's behavior, but don't want to change the whole model. Changing model behavior means changing its weights so that it changes the outputs. Here's the trick: if only we could somehow modify the model's outputs without changing its weights...

There is a way, of course. In a brute-force solution, we could technically feed the model's outputs into another model to transform them. It would work, only now we have two models and a lot of added complexity.

But what if we could add a filter on top of the model that keeps the original model layers intact and changes their outputs? It's kind of like putting on AR glasses: you see the world differently, but the world hasn't changed. That's basically what LoRA is.
We freeze the original model weights and apply a transformation by adding an additional weight matrix, called the LoRA matrix, which forms an additional trainable layer of a much smaller size:

Wnew = Wpre-trained + dW

Where:
- Wnew: the new weights
- Wpre-trained: the original model weights
- dW: the trainable weight adjustment

How do we calculate this LoRA matrix? We do the fine-tuning/training on that additional matrix instead of the original model, using standard methods, so it learns how to predict the difference between the desired results and the original model's results.

And the beauty is that the LoRA matrix can be way smaller than the original weight matrix. That's why it is called Low-Rank Adaptation: the matrix has a lower rank than the original.

Say you have a weight matrix of size d x d: it will have d*d elements. If d is one million, it will have one trillion elements. LoRA's matrices instead have d*r + r*d elements: if d is one million and the rank (r) is 8, they have 16 million elements.

Here is how it works:

y = x * (W + dW) = x * W + x * (A * B)

Where:
- y: the output after applying the weights
- x: the input to the layer
- dW = A * B
- A: a matrix of shape d*r, where r is the rank (a small dimensionality chosen for LoRA fine-tuning) and d is the same dimensionality as the original weight matrix
- B: a matrix of shape r*d

Or in visual form: (diagram by the author)

A common starting point for the rank is 8. Values up to 256 have been used with good results in certain cases, but you will need to experiment to see what works for you. Using larger ranks can improve performance on some tasks, particularly those requiring more expressive power to capture complex patterns. However, this also increases the risk of overfitting, especially on smaller datasets. This risk is well known in machine learning when model capacity exceeds the complexity of the data.

During training, we need to store in memory the weights W of the original model and dW of the fine-tuned model, while computing gradients only for the new small matrices A and B.
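The mechanics above can be shown with a toy numeric sketch in plain Python. The dimensions are illustrative only; in a real model, a LoRA adapter of this shape sits inside each targeted attention/MLP layer and A and B are trained with backpropagation while W stays frozen.

```python
import random

def matmul(X, Y):
    """Naive matrix multiply, good enough for tiny demo matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))]
            for i in range(len(X))]

d, r = 6, 2          # toy layer dimensionality and LoRA rank
random.seed(0)

W = [[random.gauss(0, 1) for _ in range(d)] for _ in range(d)]     # frozen d x d
A = [[random.gauss(0, 0.01) for _ in range(r)] for _ in range(d)]  # d x r, trainable
B = [[0.0] * d for _ in range(r)]   # r x d, zero-initialized so dW starts at 0

x = [[random.gauss(0, 1) for _ in range(d)]]  # one input row vector

# y = x*W + x*(A*B): the frozen path plus the low-rank correction
y_frozen = matmul(x, W)
y_delta = matmul(matmul(x, A), B)
y = [[f + g for f, g in zip(y_frozen[0], y_delta[0])]]

# With B zero-initialized, the adapted output equals the base model output,
# which is exactly how LoRA training starts.
print(y == y_frozen)  # True

# Trainable parameter counts: full dW would be d*d, LoRA trains d*r + r*d
print(d * d, d * r + r * d)  # 36 vs 24 even at this tiny scale
```

Zero-initializing B (while A gets small random values) is the standard LoRA trick: the adapter begins as a no-op and gradually learns the adjustment, which is why adapters can be attached and detached without touching the base model.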
That provides a significant reduction in required memory and computing power. Training will be much faster, and 7B models can easily be fine-tuned on a PC with a desktop GPU. More than that, we can have several different "lenses" like this that we can put on the base model, without the need to change it.

LoRA fine-tuning often achieves performance comparable to full fine-tuning, particularly when the low-rank approximation is well suited to the task, and LoRA adapters can be tested or applied without risking degradation of the base model.

QLoRA

Same as LoRA, but to lower the memory footprint we quantize the base model to a custom data type, typically NF4 (Normal Float 4-bit). Regular models use 32-bit or 16-bit floating point as the base data type for storing weights. NF4 enables QLoRA to retain most of the accuracy of the base model while significantly reducing memory usage and computational demands.

The idea of quantization is that:

- Most weights in the network are near zero anyway
- NF4 optimizes the distribution of values based on the actual data statistics rather than using a linear distribution of floating-point values

For the LoRA pass itself, we still use regular 32-bit or 16-bit floating point, to have more range for learning.

Using QLoRA can reduce GPU memory usage by 40-70%. However, it comes at a cost: QLoRA is approximately 30% slower than LoRA in training and slightly degrades the quantized model's quality. It works well even with very large models (e.g., LLaMA or GPT-based architectures).

Fine-tuning with (human) preference alignment

Fine-tuning works well for training a model to do specific tasks, but what matters is not only what the model does but also how it interacts with humans. If we want to create a language-model assistant, we cannot use a pre-trained model as is: it will not be able to intelligently answer user queries, even though it has the required knowledge. Teaching the model to communicate with humans is called alignment.
There are different ways to define what alignment is; I'll use Anthropic's definition of 3H:

- Helpful: the response should address the user's problem.
- Harmless: the response should not cause harm to the user.
- Honest: the response should be factually accurate.

Traditional methods do not help much here, so a new set of techniques was developed. The idea behind any such technique is to have a dataset similar to what we discussed above, where, additionally, human preferences or values are clearly indicated. This could include feedback on text quality, tone, style, or factual correctness. Usually, the dataset items have more than one response option, each ranked by preference.

I bet you have seen ChatGPT giving you multiple options to pick from when generating answers: they are doing that to collect exactly this kind of dataset. Often, question-and-answer websites have like or upvote/downvote systems that can also be used as training data. If you crawl data from the internet, it is important to clean it afterward: the dataset can contain lots of junk.

For example:

User: I'm feeling overwhelmed with work and life right now. It's hard to keep going.

Response options:
- Option A: I'm sorry you're feeling this way. Have you thought about talking to someone you trust or a professional counselor?
- Option B: What kind of man are you, complaining like that? Just drink some vodka - you'll be fine.

Human-provided preference:
- Preferred response: Option A (ranked highest for empathy and clarity).
- Ranking: Option A > Option B.
- Rationale: Option A shows empathy, acknowledges the user's feelings, and provides actionable advice. Option B dismisses the user's feelings and offers no constructive help.

Or in JSON format:

{
  "context": "I'm feeling overwhelmed with work and life right now. It's hard to keep going.",
  "responses": [
    {
      "text": "I'm sorry you're feeling this way. Have you thought about talking to someone you trust or a professional counselor? It might help to share your feelings.",
      "rank": 1
    },
    {
      "text": "What kind of man are you, complaining like that? Just drink some vodka - you'll be fine.",
      "rank": 2
    }
  ]
}

Once you have that data, you can use the techniques below.

Reinforcement Learning from Human Feedback (RLHF)

This is a cornerstone of preference alignment. The idea is very similar to training dogs: you reward the dog for doing the right things and punish it for doing the wrong things, over many iterations. You play the reward-model role in this case, and the dog plays the base-model role.

So there is a separate reward model that is trained to predict human preferences using pairwise comparisons (e.g., response A is better than response B). Basically, we train a reward model that predicts rankings for responses. This is done so we don't have to use humans afterward: the reward model serves as a proxy for human feedback in further training.

The main model is then further fine-tuned using reinforcement learning, where the reward signal comes from the trained reward model, usually over multiple iterations. The base model does not acquire new knowledge in this process but instead learns to use and communicate the knowledge it already has. Studies have shown that using a small, high-quality dataset is much better than using large datasets of bad quality (see the LIMA study: Less Is More for Alignment).

This approach allows for complex reward signals from the reward model that include correctness, relevance, safety, and all sorts of political censorship bullshit too. It also allows us to use our reward model to train multiple base models for preference alignment.

The downsides are obvious as well: now we have to train two models instead of one, and then do multiple iterations of fine-tuning on the base model.
That's computationally expensive and complex, and it takes time. Additionally, there is a risk of overfitting your reward model and degrading base-model performance. So, to avoid these complications, another approach was proposed:

Direct Preference Optimization (DPO)

This is probably the closest you can get to having your cake and eating it too. It was introduced in the paper "Direct Preference Optimization: Your Language Model is Secretly a Reward Model", authored by Rafael Rafailov and a bunch of other people. They had a genius idea: what if we skip the intermediate reward model and directly align the model with human preferences using standard supervised learning?

So the difference here is that we don't have a separate reward model and don't use reinforcement learning, but instead update the base model directly with standard supervised learning methods. (If you wonder what the difference is, you can read about it here.) Supervised learning typically uses gradient-based optimization (e.g., stochastic gradient descent) to adjust the base-model weights directly based on the labeled data.

DPO is much better in terms of time and costs than RLHF, as it doesn't require many iterations and a separate model, and in many cases it provides similar performance and alignment of the base model, albeit under certain conditions. This approach requires granular data of good quality; it is more sensitive to data quality than RLHF. The preference data in the dataset has to be sufficient and straightforward. If you have a dataset like that, or are able to create one, DPO is probably the best way to go.

What to use for fine-tuning experiments and hosting

You can, of course, self-host and train/deploy locally if you have the hardware for it. Your setup will depend on what kind of hardware, model, and virtualization you are using, so I won't go into that.

Orchestration

In general, I suggest deploying models using an orchestrator like ZenML, so you can switch infrastructure providers as you want and avoid vendor lock-in.
Then you can start with the free tier of one provider for building a prototype, and switch to a scalable cloud version or on-prem if you need to. For experiments, I suggest sticking with the free tiers of cloud platforms, specifically:

Fine-tuning infrastructure

- AWS SageMaker: a fully managed service for building, training, and deploying machine learning models on AWS. Very convenient, so you don't have to build your own infrastructure and buy GPUs. They have a free tier to start experimenting.
- Alternatives: Google Vertex AI, Azure Machine Learning, Databricks ML, MLflow (this one is open source and can be self-hosted)

Model hosting

For experiments and collaboration, the best option is Hugging Face, a collaborative platform for sharing and discovering machine learning models, datasets, and demos. It's like GitHub for models. They also have a free tier.

Alternatives: I don't think there is a good alternative; that's why they are so popular. All major players (Google, Azure AI Playground) have something similar, but not as good.

For production, you can use:

- AWS SageMaker
- Google Vertex AI
- Microsoft Azure Machine Learning
- MLflow (can be deployed on-prem)

Have fun!

Join thousands of data leaders on the AI newsletter. Join over 80,000 subscribers and keep up to date with the latest developments in AI. From research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor. Published via Towards AI
  • WWW.IGN.COM
    New Report Says $250 Million Arcane Was a 'Financial Miss,' Riot Co-Founder Insists It 'Crushed for Players and So It Crushed for Us'
    The co-founder of Riot Games has responded to a report that claimed League of Legends animated series Arcane was a financial miss.

Bloomberg reported that Arcane's two seasons cost an eye-watering $250 million to produce and market, and ultimately failed to generate enough gaming revenue for Riot despite winning a big audience on Netflix. The publication said Netflix paid $3 million per episode, with Riot owner Tencent handing over an additional $3 million per episode to show Arcane in China. All told, that's less than half the $250 million it cost Riot to bring Arcane to market. And, according to Bloomberg, Tencent started asking Riot difficult questions between the release of Seasons 1 and 2.

The hope, Bloomberg reported, was that Arcane would fuel an increase in players of League of Legends and in turn a boost in spending. Riot makes significant revenue from the sale of skins for League of Legends characters, some of which cost hundreds of dollars. Bloomberg said that Riot failed to capitalize on the success of Season 1 with Arcane-themed items, but had more time to do so ahead of the release of Season 2.

In a quote attributed to a spokesman, Riot insisted that while Arcane wasn't profitable, the show should be considered a success overall, with the last month one of the company's highest-grossing revenue periods ever. Apparently the second season is on track to at least break even financially.

Now, Riot co-founder Mark Merrill has responded to the report, taking to Reddit to address discussion about it within the League of Legends community.

"People who look at the world through a short term, transactional, cynical lens, really struggle to understand Riot," Merrill said.
"This has been true with various people trying to claim that high quality free games won't work, that esports will never work, that our music was insane, [and who] are now saying that Arcane wasn't awesome and worth it.

"These people think we make things like Arcane to sell skins, when in reality we sell skins to make things like Arcane. Riot is a mission driven company where Rioters are constantly striving to make it better to be a player. That is why we have successfully done that over and over again across multiple games and now multiple businesses / mediums - games, sports, music & animation. Do we get everything right? Nope. But we are not focused on the short term extraction of profits - we are focused on delivering exceptional value to our audience over the long term, again and again and again.

"To be clear, Arcane crushed for players and so it crushed for us."

Merrill, clearly, is insisting that for Riot the costly Arcane was worth it, although it's worth noting that he does not dispute any specific part of Bloomberg's reporting. Merrill subsequently responded to one Reddit user who suggested Arcane wasn't profitable enough for Riot to make more League of Legends animated spin-offs, saying: "Except it was."

Fans are hoping that Riot pushes forward with more League of Legends animated series despite all this. Last month, Riot creative director and Arcane creator and showrunner Christian Linke revealed the three Runeterra regions it's exploring as settings for future shows: Noxus, Ionia, and Demacia.

Wesley is the UK News Editor for IGN. Find him on Twitter at @wyp100. You can reach Wesley at wesley_yinpoole@ign.com or confidentially at wyp100@proton.me.
  • WWW.IGN.COM
    James Gunn's DCU Kickstarter Creature Commandos Gets Season 2
    Max has renewed DC animated series Creature Commandos for a second season.

Creature Commandos kickstarted James Gunn and Peter Safran's rebooted DC Universe when Season 1 of the adult animated series launched earlier this month. It's written and executive produced by Gunn himself, with the Season 1 finale out January 9.

Creature Commandos follows a secret team of incarcerated monsters recruited for missions deemed too dangerous for humans. It features a number of characters who are voiced by actors who will reprise their roles in subsequent live-action DCU projects. For example, Frank Grillo plays Rick Flag Sr. in Creature Commandos and the upcoming Superman movie.

Creature Commandos Season 2 is confirmed.

It's certainly been well-received, with IGN's Creature Commandos review returning an 8/10. We said: "Creature Commandos introduces the new DCU with a story only James Gunn could cook up, balanced by high-energy animated action, outrageous humor, and strong, emotional character arcs."

Amy Gravitt, Executive Vice President, HBO & Max Comedy Programming, commented: "Only James Gunn could have conjured this wild band of misfit monsters who tug at your heart and force you to root passionately for them. We couldn't be more delighted to continue their stories with James, Dean Lorey, Peter Safran and our fantastic partners at DC Studios and Warner Bros. Animation."

James Gunn and Peter Safran also offered a quote: "We're thrilled to team up with Max for another season of Creature Commandos mayhem. From our spectacular first season of Peacemaker to the astonishing run of The Penguin to the record-breaking launch of Creature Commandos, Max has consistently delivered above industry expectations and beyond our wildest imaginings. Thank you, Casey, Sarah, Pia, Sono and the entire team for your tremendous support of DC Studios. We are proud to call Max home."
Season 1 stars Steve Agee as Economos, Maria Bakalova as Princess Ilana, Anya Chalotra as Circe, Zoe Chao as Nina Mazursky, the aforementioned Frank Grillo as Rick Flag Sr., Sean Gunn as GI Robot & Weasel, David Harbour as Frankenstein, Alan Tudyk as Dr. Phosphorus, Indira Varma as The Bride, and Viola Davis as Amanda Waller.

Before Creature Commandos Season 2 comes out, the DCU continues with July 2025's hotly anticipated Superman movie. IGN has an explainer on all the DC heroes and villains in the new Superman trailer, comments from James Gunn on Krypto actually being a pretty terrible dog in the movie, thoughts on how Superman is about hope, and more.

Photograph courtesy of Max.

Wesley is the UK News Editor for IGN. Find him on Twitter at @wyp100. You can reach Wesley at wesley_yinpoole@ign.com or confidentially at wyp100@proton.me.
  • NEWS.XBOX.COM
    Game Pass: Choose the Right Plan for Your Gaming Needs
    Game Pass is all about options. Want the total gaming experience? What about high-quality games on PC? Or do you just want to play with your friends? Whatever type of player you are, Game Pass has you covered with an ever-expanding set of top-tier games. Here's a one-stop shopping guide to Game Pass to help you choose the plan that's right for you: PC Game Pass: Want to play high-quality games on PC? PC Game Pass offers hundreds of games, including new titles on day one like Call of Duty: Black Ops 6 and Indiana Jones and the Great Circle. Plus, get an EA Play membership, Riot Games benefits in Valorant, League of Legends, and Teamfight Tactics, and more. Members also enjoy discounts of up to 20% on many games. If you're a PC player, then PC Game Pass is designed for you. Game Pass Core: Is your main interest jumping on your Xbox console and challenging your friends online or teaming up to take down a final boss? Game Pass Core is a great avenue for online console multiplayer. For a low monthly price, play with others online, get member deals of up to 50% off select titles, and enjoy a select catalog of over 25 high-quality games in the Game Pass library, including Grounded, Among Us, Halo 5: Guardians, Gears 5, and more. Game Pass Standard: Are you looking to level up your Xbox console gaming experience at a great value? Then Game Pass Standard may be right for you. In addition to all of the benefits you receive with Game Pass Core, including online console multiplayer and member discounts, you get access to hundreds of high-quality console games in the Game Pass library. Legendary series like Halo or Age of Empires, and massively popular games such as Minecraft, Forza Horizon 5, and Tom Clancy's Rainbow Six Siege are ready for you to play. Dive in and discover your next favorite game today. Game Pass Ultimate: Are you a player who wants the total gaming experience? Then Game Pass Ultimate has what you need with all the Game Pass benefits we have to offer.
Enjoy hundreds of high-quality games such as Call of Duty: Black Ops 6, Starfield, and Diablo IV, including new games on day one like S.T.A.L.K.E.R. 2: Heart of Chornobyl and Indiana Jones and the Great Circle on your console, PC, and cloud. Get beloved series like EA Sports F1, Battlefield, and Star Wars with EA Play. Get a head start in Valorant, League of Legends, and more of the biggest PC and mobile games from Riot Games. Join friends and play together with online console multiplayer. Experience premium member benefits with deals and discounts on games and add-ons, free Perks, and more. Stream a game with cloud gaming before you download it on your console (no installs required). With Game Pass Ultimate, there's always something new to play. We continue to be focused on delivering the best gaming experience at a range of price points so players can choose the plan with the features that best fit their gaming needs and budget. If you are interested in learning more about Xbox Game Pass and pricing for each plan, please visit our main page.
  • 9TO5MAC.COM
    How MacPaw is making cybersecurity accessible to everyone; my exclusive interview from Kyiv
    9to5Mac Security Bite is exclusively brought to you by Mosyle, the only Apple Unified Platform. Making Apple devices work-ready and enterprise-safe is all we do. Our unique integrated approach to management and security combines state-of-the-art Apple-specific security solutions for fully automated Hardening & Compliance, Next Generation EDR, AI-powered Zero Trust, and exclusive Privilege Management with the most powerful and modern Apple MDM on the market. The result is a totally automated Apple Unified Platform currently trusted by over 45,000 organizations to make millions of Apple devices work-ready with no effort and at an affordable cost. Request your EXTENDED TRIAL today and understand why Mosyle is everything you need to work with Apple. I've been a CleanMyMac subscriber for nearly a decade, and I've been truly impressed by the app's heavy focus on providing Mac users with remarkably simple yet effective malware detection and prevention features. So, when MacPaw offered to fly me out to Kyiv, Ukraine, to meet and interview the folks leading Moonlock, its cybersecurity division, I jumped at the opportunity. This interview is divided into three parts: About Moonlock, the technology behind the Moonlock Engine, and what's planned for the future. Disclosure: Ukraine is a country at war. Many members of the Moonlock team also aid in the defense of their country, so false names may be used below to protect their identity. Some parts of the transcript were edited for clarity. You're reading Security Bite, a security-focused column on 9to5Mac. Each week, Arin Waichulis delivers insights and interviews on the latest in data privacy, the current malware landscape, and emerging threats within Apple's vast ecosystem of over 2 billion active devices. At the time of writing, MacPaw's HQ, the very place where this interview was conducted weeks prior, was just severely damaged in a ballistic missile attack. My heart goes out to the team.
Please consider supporting MacPaw's relief effort here. With that out of the way, here's my full interview. In the room: Oleg (head of product for Moonlock), Borys (head of Moonlock Lab, research division), Anastasiia (senior PR specialist at Moonlock), and myself. Q: Could you tell me what the inspiration was for MacPaw to open a cybersecurity division? From Oleg, head of product for MacPaw's Moonlock: It became clear that after the first malware detection modules were added to CleanMyMac X, this was a much bigger topic than we initially thought; we'd only scratched the surface. We started asking ourselves: why not build something better and more comprehensive? This vision evolved into Moonlock. Unlike other cybersecurity companies focused on businesses or Windows systems, we've been working with Macs for years, so it felt like a natural fit. Additionally, many Mac users have the misconception that Macs are immune to viruses or malware, which isn't true. The next logical step for MacPaw was to address this gap. We were already cleaning machines and removing malicious files, so why not take it further and prevent them from causing harm in the first place? Q: Got it. And the mission of Moonlock: what's the focus? Oleg: The mission of Moonlock is to make cybersecurity accessible to everyone. When we talk to users, they often express awareness about cybersecurity and sometimes concerns, but they rarely take proactive steps to protect themselves, unless they've already experienced an incident. For many users, an incident acts as a wake-up call. Before that, even if they've heard about cybersecurity threats, they often take a passive approach because they're unsure where to start or don't have the time to learn. That's where Moonlock comes in. We aim to bridge that gap. Cybersecurity concepts can have a steep learning curve, but we believe we can provide tools that protect users without requiring them to become experts. CleanMyMac is perceived as a simple yet powerful tool.
We want to bring the same philosophy to Moonlock. It's about creating solutions that are easy to use, maybe just a couple of clicks, but still incredibly effective. Q: Moving on to the technology, can you explain what the Moonlock Engine does? Oleg: The Moonlock engine is specifically designed for Macs. It's built by engineers who understand macOS, including how malware can persist and infect systems. This deep expertise allows us to tailor the engine to address Mac-specific threats effectively. One of its most significant advantages is that it's integrated into CleanMyMac. So, any user who installs CleanMyMac, even for cleaning purposes, automatically benefits from the built-in security features. On the technical side, the engine uses a combination of static and dynamic analysis. Static analysis involves examining the code itself, while dynamic analysis involves running the code in a virtual environment to observe its behavior. This dual approach is crucial because some malware is designed to sleep for weeks or months, making it harder to detect. We've also balanced thorough scanning with performance. For example, we have a fast scan that quickly checks the most common locations for malware and a deeper scan that examines additional areas and file types. Q: Are there any new security features in the newly redesigned CleanMyMac? Oleg: We're not adding new major security features to CleanMyMac at this time, but we're constantly updating the engine behind the scenes. It's not radically new, but it improves with each update. We're updating databases frequently to catch top-layer threats, adding signatures, and modifying detection methods to keep up with malware authors. It's always a cat-and-mouse game. Apple does a good job at stopping malware for the most part. They have protection tools built into the system, like XProtect and Gatekeeper.
But users still click links or launch suspicious things, and that's where we try to help prevent them from doing dangerous things. Borys, head of Moonlock's research division, Moonlock Lab: In Moonlock Lab, we study not just samples or malicious code, but try to understand the intent behind malware authors. We're living in an age with technologies that can hide, obfuscate, and mutate code. If authors use ChatGPT or neural networks to mutate code, they can generate many variants no one can understand from simple observation. We focus on understanding malware behavior and improve our technology to collect and study samples through their behavior. You can study code statically by viewing it, or dynamically by running it in a virtual environment. Malware can sleep for days, weeks, or months, so even improved sandboxes can't always reveal malicious behavior. A recent trend is malware-as-a-service. Someone can write malicious code without commercial purposes and sell it on dark web marketplaces for Bitcoin. This makes it more dangerous because now people who can't write malware can purchase and execute it. Q: Are you seeing an increase in criminal activity in specific regions, maybe Russia? Borys: Attribution is the most challenging thing. You can't always tell from the code that it's Russian, Chinese, or North Korean. Through research and diving into C2 servers, comparing code elements on GitHub or the dark web, you can follow the trail to understand its origin. It's like being an investigator. IP addresses aren't absolutely useful because Russia uses expansion techniques. They capture IP addresses, deface sites in any country, hack infrastructure, and convert it to proxies. Botnets created from poorly protected smart devices are common. There's legislation coming to make manufacturers adhere to security standards, as many devices still use default admin passwords. Oleg: The Mac market seems to be going through all the same stages as Windows did, just decades later and more rapidly.
It's like season two of the same series on a different platform. Windows researchers can apply their knowledge to quickly address these problems before they become as huge as on Windows. Q: Are there plans to spin Moonlock off CleanMyMac into its own product, like an EDR solution? Oleg: We are currently working on a product like that. We've talked about it during the Moonlock launch: converting our knowledge and observations into practical help for users. Our first step was building CleanMyMac's malware removal into the Moonlock engine to protect millions of users immediately. We're building to execute our vision of making cybersecurity accessible to every Mac user, making it more sophisticated, capable, yet easy to understand and approachable. It takes time. The main challenge isn't just making security tools, but inspiring users to implement them and change their habits. People often treat cybersecurity as boring or too complicated. We want to make it colorful and easy to use, like CleanMyMac, where users don't need to think about steps; it just works. But it's more complicated because with cybersecurity, if you have a problem, it's already too late. It's like vaccines: you need them before problems occur. End. I want to give special thanks to Anastasiia at MacPaw for organizing a flawless and safe trip during such a tumultuous time in Ukraine. The team at MacPaw is world-class. I can best describe the company as the Google of Ukraine. Seriously. More in Apple security: A newly released app lets you regularly scan your iPhone for Pegasus spyware, which can access almost all the data on a phone, for a one-off cost of just one dollar. Moonlock Lab released its 2024 Threat Report, detailing how AI tools like ChatGPT are helping to write malware scripts, the shift to Malware-as-a-Service (MaaS), and other interesting statistics it's seeing through internal data. Apple's Passwords app now has a Firefox extension for Mac.
Interestingly, a Reddit thread reveals that this extension appears to have been created by a third-party developer, but Apple appears to have taken it over under its branding and name. Mosyle exclusively reveals to 9to5Mac details on a new family of Mac malware loaders. Mosyle's Security Research team discovered these new threats are written in unconventional programming languages and use several other sneaky techniques to evade detection. Follow Arin: Twitter/X, LinkedIn, Threads
  • FUTURISM.COM
    Dead OpenAI Whistleblower Had Been Named as Potential Witness in Lawsuit Against Employer
    He said he'd "try to testify." Spill the Beans: Suchir Balaji, the young OpenAI whistleblower whose death was made public earlier this month, was apparently being considered as a witness against his former employer in a major lawsuit, The Associated Press reports. Shortly before his passing, the 26-year-old Balaji had sounded the alarm on OpenAI's allegedly illegal copyright practices in an October profile with The New York Times. But according to the report, his involvement with the newspaper of record wasn't set to end there. Balaji told the AP that he would "try to testify" in the strongest copyright infringement cases brought against OpenAI, and considered the NYT's high-profile one, filed last year, to be the "most serious." The Times seems to have had the same idea. In a November 18 court filing, lawyers for the newspaper named Balaji as someone who might possess "unique and relevant documents" that could prove OpenAI knowingly committed copyright infringement. Tragic Death: Balaji had worked at OpenAI for four years, but quit in August after becoming appalled at what he saw as the ChatGPT developer's flagrant disregard for copyright law. He had worked first-hand on the company's massive data scraping efforts, in which it more or less pulled any content it could from the web to train its large language models. "If you believe what I believe," Balaji told the NYT, "you have to just leave the company." On November 26, a month after his profile in the NYT, Balaji was found dead in his San Francisco apartment, in what the police said was an apparent suicide. His death wasn't reported until December 13. Publicly, OpenAI mourned Balaji's passing.
"We are devastated to learn of this incredibly sad news today, and our hearts go out to Suchir's loved ones during this difficult time," a company spokesperson told CNBC at the time. Following Suit: The high-profile lawsuit that Balaji was being considered as a witness for was filed by the NYT last December, alleging that OpenAI had illegally used the newspaper's copyrighted work to train its chatbots. Balaji's documents were also being sought by another suit filed by comedian Sarah Silverman against OpenAI and Meta, the AP said. OpenAI and other tech companies argue that their use of copyrighted data on the internet constitutes "fair use" because their AI models significantly transform that content. But Balaji disagreed, saying that the AI models create a copy of the data they ingest, and are from there instructed to generate text of dubious originality. "The outputs aren't exact copies of the inputs, but they are also not fundamentally novel," he told the NYT in October. Balaji's family said that a memorial is being planned for later this month at the India Community Center in Milpitas, California.
  • SCREENCRUSH.COM
    The Strangest Character Posters of 2024
    I'm not sure who created the now-pervasive practice of releasing six or eight or 40 character posters for every Hollywood movie. (Someone should figure that out and write a lengthy magazine profile on that person.) All I know is you can't go to a multiplex these days without being inundated with countless posters for movies, many of them featuring actors you barely recognize playing characters you've never heard of before. There must be extensive studies or focus-group reports out there that prove this is a good way to market a movie, or at least to get a boatload of extra coverage on film blogs. And in 2024, even with fewer big blockbusters as a result of 2023's dual labor strikes in Hollywood, the practice showed no sign of slowing down. Even mid-level movies now sometimes get whole fleets of character posters. (You wouldn't believe how many Nosferatu posters there were if I told you.) All year, any time I visited movie theaters around New York City, I've kept a running list of the strangest examples; the ones with incredibly minor characters, or amusing photographs, or Zendaya Is Meechee-esque phrases. With 2025 almost upon us, it is time to unveil my list of the 20 strangest character posters of the year. I can't wait to see what surprises the new movie year has in store for us. And please, whatever else you do, always remember: Dwayne Johnson is Callum Drift. The Weirdest Character Posters of 2024: Hollywood studios inundate us with posters for every character in their big movies. Let us take a minute to honor the strangest ones we got in 2024. READ MORE: Ridiculous Character Posters From Previous Years
  • WWW.CNET.COM
    The Coziest TV Shows to Binge This Winter Season
    The leaves have fallen, there's a chill in the air and the sun retires early. We all know what that means -- winter has arrived, and it's officially cozy season. Luckily, there are ways to combat potential winter blues. Popular Scandinavian concepts like hygge, koselig and fredagsmys offer simple approaches to embracing and even enjoying the colder months. Similar in essence, these phrases refer to the notions of slowing down and enjoying the simple pleasures in life. It's spending quality time with loved ones (or yourself). It's creating a sense of warmth and comfort by snuggling up with a blanket and hot cocoa by a roaring fire. It's decompressing after the work week with relaxing activities. It's about taking the time to unwind, and one of the easiest ways to do that is by cuddling up and watching a cozy movie or TV show. You probably already have a "comfort" show. It's the one you turn to time and time again, and the familiarity of it helps you de-stress. Whether it's soothing character voices, calming music or simply easy listening because you've seen it all before, comfort shows help us decompress from our busy and chaotic world. Our sleep and wellness experts at CNET are no exception. Here are the shows we put on to attain that sense of hygge -- the shows we rewatch to help us relax and stay cozy. Read more: Best Streaming Services of 2024. Disclaimer: We typically don't recommend watching TV or movies while actively trying to fall asleep. Most experts agree that it's best to avoid all screen time for at least an hour before bed. We suggest shutting off your comfort show and giving your brain a break before actually trying to sleep. If watching your comfort show in bed helps you fall asleep, that's okay. What works for one person won't always work for the next, and whatever enables you to get your best rest is worth it.
Over the Garden Wall (2014), Caroline Igo's comfort show. CNET expert: Caroline Igo, sleep editor. Where to watch: Hulu, Prime Video. "I had heard such great things about Over the Garden Wall that I was convinced to try it. I was told it was relaxing and charming and the animation art style was beautiful -- I wasn't disappointed. The main characters are two brothers, one voiced by Elijah Wood, who are lost and trying to make it out of the woods. Each episode is a mini adventure that they embark on together. It's cute and heartwarming, and the original background music is alluring and peaceful," Caroline explains. Caroline agrees with the sentiment of keeping the TV off in the bedroom: "I like to watch an episode on the couch before bed in order to wind down from my day. I try not to watch TV in my bed so that my body associates my bed as the only place for sleep." Gilmore Girls (2000-2007), Nasha Addarich Martínez's comfort show. CNET expert: Nasha Addarich Martínez, managing editor. Where to watch: Netflix. "You can't go wrong with falling asleep to Gilmore Girls. It's a cozy show where nothing too exciting happens, so you don't have the FOMO of a sudden plot twist. The show follows a heartwarming mother-daughter duo experiencing life in a small town. Lorelai's humor is incredible and each character is truly charming. There are also no jump scares, ultra-upbeat music or sounds that'll wake you from your slumber if you do fall asleep while watching," Nasha explains. Read more: 21 TV Shows on Netflix Perfect for Your Next Binge-Watch. Frasier (1993-2004), Owen Poole's comfort show. CNET expert: Owen Poole, senior video producer. Where to watch: Hulu, Pluto TV, Disney Plus. "Frasier is my all-time favorite sitcom and the perfect show to relax to before bed.
The combination of the setting -- Frasier's beautifully designed apartment and a fancy Seattle cafe -- the classical and smooth jazz music and the pacing of each episode is calming and familiar. The show never really has "high-stakes" episodes that ramp up the stress for the audience, including an episode where the Brothers Crane (Frasier and Niles) try to find their way into a more exclusive section of their favorite spa before being told they need to stay in the "relaxation grotto." Watching Frasier feels like my own personal relaxation grotto -- it's the perfect before-bed TV show," Owen says. It turns out that snoozing to Frasier is common. There's an entire online community of dedicated watchers who fall asleep to the sitcom, including a subreddit of nearly 6,500 members. Forensic Files (1996-2011), Taylor Leamey's comfort show. CNET expert: Taylor Leamey, senior writer. Where to watch: Peacock, Tubi. Although it may seem surprising or counterintuitive, true crime documentary series and podcasts can help people fall asleep. Taylor knows this well, as watching Forensic Files makes her instantly sleepy. "My go-to show for winding down to go to bed is Forensic Files. If I'm having difficulty falling asleep, I put it on, and I'm asleep within thirty minutes. And I know what you're thinking: weird thing to go to sleep to, Taylor. But for me, it's less about the content of the show and more about the narration voice. Peter Thomas has such a slow and consistent voice that it lulls me into the perfect sleepy state. It's like my brain doesn't even register what he's saying; instead, it focuses on the tone of his voice," Taylor says. Joe Pera Talks With You (2018-2021), JD Christison's comfort show. CNET expert: JD Christison, senior video producer. Where to watch: Max. Taylor is not alone in choosing Forensic Files. JD's partner Steph also turns to the true crime show as her go-to comfort watch.
"I know, weird choice, but I find the pacing of Forensic Files and the narrator's voice to be super calming. Plus, it's got that early 2000s charm!" Steph exclaims. On the flip side, JD's genre of choice is an American comedy television series created for Adult Swim. "My favorite show for relaxing is Joe Pera Talks With You. I like it because it's a very innocent show where Joe Pera just talks about his perspective on the simple things in life that he enjoys. It puts me right out, in a good way," JD says. The Great British Baking Show (2014-), Erica Devaney's comfort show. CNET expert: Erica Devaney, editorial director. Where to watch: Netflix. Erica's go-to TV show before bed is The Great British Baking Show. "It's so calming and wholesome. It's a competition, but everyone is so nice." There's no yelling or loud fights like you'll find in the American version and other reality-type competition shows. A Reddit user expressed the same sentiment: "There is a gentleness that permeates the show. The judges, the contestants... are nice. It's a really relaxing watch." Erica also explains, "I hate waking up to something scary or stressful, so I put on a channel like Food Network or HGTV before bed because those are guaranteed not to be scary. I keep my TV on a barely audible sound level to go to sleep." If you also like to use the comforting background hum of the TV as white noise to fall asleep, it's best to keep it at a super low volume level. If you've ever been personally victimized by a show's unnecessarily loud theme song or credits (The Office, I'm looking at you) or a chaotic scene that plays at a much higher decibel level than the rest of the show, you're not alone. That's why choosing one with a steady soundtrack and consistent loudness or volume throughout can keep you from waking up abruptly if you fall asleep while watching.
Read more: Best White Noise Machines of 2024. Bob's Burgers (2011-), Anna Gragert's comfort show. CNET expert: Anna Gragert, wellness editor. Where to watch: Hulu, Fox. It may seem like a no-brainer, but watching lighthearted shows or movies can help lower cortisol levels -- especially if it makes you laugh. A good chuckle releases feel-good chemicals in your brain, like dopamine and serotonin, which relieve stress and improve your mood. It makes sense, then, why Anna turns to funny and lighthearted shows to relax. "I basically only watch comedies, and while I am not usually a fan of animated TV shows -- I struggle with uncanny valley -- Bob's Burgers has stolen my heart. It's funny, heartwarming and relatable, and despite the difficulties the Belchers go through, they always make it through together. I will forever feel happy when I'm watching this show. I even watch old episodes if I need a dose of comfort," Anna says. Another show Anna recommends for unwinding at night is Abbott Elementary. "This show has everything you could possibly want in a comfort watch: friendship, comedy, love, community and family. It reminds you of your favorite coworkers who help you get through tough days with a laugh, and is there anything more comforting than that?" Abbott Elementary (2021-), Giselle Castro-Sloboda's comfort show. CNET expert: Giselle Castro-Sloboda, wellness and fitness writer. Where to watch: Hulu, Max. Giselle echoed Anna's preference for lighthearted shows like Abbott Elementary. "There's a lot of negativity in the world and humor is one of the best ways to make better use of that energy," Giselle says. Like many of us, Giselle has multiple go-to comfort shows that she cycles between. "I like watching reruns of popular sitcoms from the '90s, like The Fresh Prince of Bel-Air, since it reminds me of my childhood. I also love shows like Frasier and The Golden Girls.
During my parental leave last year, I binge-watched all eight seasons of Who's The Boss? I enjoyed the cast and how relevant some central topics are still today," she explains. Whether it's the cheesy laugh track, grainier film look, sepia-style color grading or the lack of technological devices in the episodes, sitcoms from the '80s and '90s tend to provide a level of simplicity and nostalgia that newer shows can't -- a blissful escape from the daily pressures of modern life. Drew Simms YouTube Channel, Dillon Payne's comfort show. CNET expert: Dillon Payne, director of video production. Where to watch: YouTube. As a video director who creates content for YouTube, Dillon spends much more time watching YouTube than other streaming services. "One YouTube channel I view while winding down is Drew Simms. Drew is a freelance photographer and filmmaker who specializes in documenting beautiful landscapes. His cinematography is stunning, and the visuals he captures are awe-inspiring. His calming sound mixing helps encapsulate the ethereal beauty of the location he is documenting. If you are a person who enjoys the beauty of the outdoors and is looking for a channel filled with adventure, wildlife and epic landscapes, I recommend checking out his channel before you take a snooze," Dillon describes. Listening to nature sounds, such as a babbling brook, wind in the trees, chirping birds or calm ocean waves, has many positive effects on the body. It helps reduce our "fight or flight" response, lowering our heart rate and relaxing the sympathetic nervous system so we can enter a calmer state of mind -- which will help us fall asleep. The Office (2005-2013), Dillon Lopez's comfort show. CNET expert: Dillon Lopez, senior video producer. Where to watch: Peacock. "I'm doing the math now... and I can't believe The Office has been my favorite show for the past 17 years. It's hilarious, heartwarming and so familiar to me now that it's super comforting.
Even though I've seen the episodes a million times, it puts me in a good mood. It's a mindless watch that I don't have to focus any brain power on, which helps me relax," Dillon explains. In a similar vein, Dillon has recently been falling asleep while watching The Detroiters. "It's funny and lighthearted. I love the dynamic between Tim and Sam, the two main characters. It's another show that instantly puts me in a good mood and helps me shut my brain off at the end of the day," he says. Schitt's Creek (2015-2020), Jessica Rendall's comfort show. CNET expert: Jessica Rendall, wellness writer. Where to watch: Hulu, Freevee, Disney Plus. Schitt's Creek is another heartwarming show with a large dedicated fanbase on Reddit. Many of its fans fall asleep to the cozy sitcom, just like Jessica. "Schitt's Creek is my favorite relaxing show because it's my favorite show, period. I'm both embarrassed and proud to say that I've watched the series multiple times over. I find it comforting because of the rich character development that allows an easy slip into the world. The big heart of the show makes for easy watching and low stress ahead of bedtime," Jessica explains. Read more: The 25 Absolute Best TV Shows to Watch on Max. Planet Earth (2006-2023), Aly Lopez's comfort show. CNET expert: Aly Lopez, sleep writer. Where to watch: Max, Discovery Plus. When I want to watch something that will knock me out, I turn to documentaries or docuseries about nature, space, travel or history.
Despite being interested in the topic, it's usually less than 10 minutes before my eyelids are so heavy that I physically cannot keep them open anymore. Whether it's Our Planet or Planet Earth -- or any of the other "Planet" shows -- the combination of nature sounds, calming music and David Attenborough's endearing voice carries me to a tranquil sleep like a gently lolling sailboat on calm ocean waves. A weirder show that sends me into outer space (dreamland) is Ancient Aliens on the History Channel, primarily due to Robert Clotworthy's soothing narrations. I can't even tell you one thing that's discussed in this strange and controversial series because it immediately knocks me out whenever I put it on the TV. Australian Survivor (2002-), Wes Ott's comfort show. CNET expert: Wes Ott, senior video producer. Where to watch: 10Play. From sports to reality to cartoons, various genres of shows help Wes relax at night. "There is nothing more soothing than SportsCenter replaying the same episode for hours after a good game ends. But my favourite show ever is Australian Survivor. I could never fall asleep during an episode because it's so good, but once it has ended, there's no way my day can get any better, so I know it must be time for bed," Wes explains. Wes also enjoys unwinding with Rick and Morty before bed. "Sometimes you need a quick little adventure to prepare for a sleepy night-night time," he says. Note: Australian Survivor is now geo-restricted, so streaming it in the US will require a VPN. Check out our Best VPN for Streaming in 2024. Family Guy (1999-), Jon Gomez's comfort show. CNET expert: Jon Gomez, video producer. Where to watch: Jon has watched his comfort shows many times. "At this point, the familiarity of having watched all the episodes feels like white noise in the background that helps me sleep. It makes me and my partner feel comfortable and relaxed," he explains.
The shows Jon prefers to watch at night include Family Guy, Futurama, George Lopez and The Fresh Prince of Bel-Air. Having a few go-to shows of different genres or animation styles for unwinding before bed can help remove the pressure of scrolling through the countless options and trying to decide what to watch. For more ways to relax and unwind before bed:
  • WWW.CNET.COM
    Best iPhone 13, iPhone 13 Pro and iPhone 13 Pro Max Cases of 2024
    These covers won't just protect your phone from damage but will also enhance the overall look and feel.
  • WWW.BLENDERNATION.COM
    BlenderNation is on Holiday Break
    Hey everyone, it has been a really busy year and I need some time to recharge. We'll be back after January 6 with more Blender news. For now, get some rest, enjoy time with those close to you and we'll see you in the new year!