• WWW.MARKTECHPOST.COM
    Chat with Your Documents Using Retrieval-Augmented Generation (RAG)
Imagine having a personal chatbot that can answer questions directly from your documents, be it PDFs, research papers, or books. With Retrieval-Augmented Generation (RAG), this is not only possible but straightforward to implement. In this tutorial, we'll build a chatbot that interacts with your documents using RAG, with Groq for language model inference, Chroma as the vector store, and Gradio for the user interface. By the end, you'll have a chatbot capable of answering questions directly from your documents, keeping the context of your conversation, and providing concise, accurate answers.

What is Retrieval-Augmented Generation (RAG)?

Retrieval-Augmented Generation (RAG) is an AI architecture that enhances the capabilities of Large Language Models (LLMs) by integrating an information retrieval system. This system fetches relevant data from external sources, providing the LLM with grounded information to generate more accurate and contextually appropriate responses. By combining the generative abilities of LLMs with real-time data retrieval, RAG reduces inaccuracies and ensures up-to-date information in AI-generated content.
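Before diving into the libraries, it helps to see the shape of the RAG loop we are about to build: retrieve the most relevant text for a query, then hand that text to an LLM as grounding context. Here is a minimal, self-contained sketch with toy stand-ins; the word-overlap ranking and generate function are illustrative placeholders, not the tutorial's actual components:

def retrieve(query, documents, k=2):
    # Real systems rank chunks by embedding similarity; naive word
    # overlap is used here only to show the shape of the step.
    def overlap(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(documents, key=overlap, reverse=True)[:k]

def generate(prompt):
    # Stand-in for an LLM call (Groq, in this tutorial).
    return f"[answer grounded in]: {prompt[:60]}..."

docs = [
    "Malaria is transmitted by mosquito bites and causes fever.",
    "Diabetes is managed with diet, exercise, and insulin.",
]
question = "How is malaria transmitted?"
context = "\n".join(retrieve(question, docs))
print(generate(f"Context:\n{context}\n\nQuestion: {question}"))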
Prerequisites

- Python installation: ensure Python 3.9+ is installed on your system.
- Groq API key: sign up for a Groq account and generate an API key. Visit the Groq Console, navigate to API Keys, create a new key, and copy it for use in the project.
- Dependencies: install the required libraries:

pip install langchain langchain-community langchain-groq gradio sentence-transformers PyPDF2 chromadb

These libraries help with language processing, building the user interface, model integration, PDF handling, and vector database management.

Downloading the PDF Resource

For this tutorial, we'll use a publicly available PDF containing information about diseases, their symptoms, and cures. Download the PDF and save it in your project directory (you are free to use any PDF).

Step 1: Extracting Text from the PDF

We'll use PyPDF2 to extract text from the PDF:

from PyPDF2 import PdfReader

def extract_text_from_pdf(pdf_path):
    reader = PdfReader(pdf_path)
    text = ""
    for page in reader.pages:
        text += page.extract_text()
    return text

pdf_path = 'diseases.pdf'  # Replace with your PDF path
pdf_text = extract_text_from_pdf(pdf_path)

Step 2: Split the Text into Chunks

Long documents are divided into smaller, manageable chunks for processing.

from langchain.text_splitter import RecursiveCharacterTextSplitter

def split_text_into_chunks(text, chunk_size=2000, chunk_overlap=200):
    text_splitter = RecursiveCharacterTextSplitter(
        chunk_size=chunk_size,
        chunk_overlap=chunk_overlap
    )
    return text_splitter.split_text(text)

text_chunks = split_text_into_chunks(pdf_text)

Step 3: Create a Vector Store with Chroma

We'll embed the text chunks using a pre-trained model and store them in a Chroma vector database.

from langchain.embeddings import SentenceTransformerEmbeddings
from langchain.vectorstores import Chroma

embedding_model = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
vector_store = Chroma(
    collection_name="disease_info",
    embedding_function=embedding_model,
    persist_directory="./chroma_db"
)
vector_store.add_texts(texts=text_chunks)

Step 4: Initialize the Groq Language Model

To use Groq's language model, set your API key and initialize a ChatGroq instance.

import os
from langchain_groq import ChatGroq

os.environ["GROQ_API_KEY"] = 'your_groq_api_key_here'  # Replace with your API key
llm = ChatGroq(model="mixtral-8x7b-32768", temperature=0.1)

Step 5: Create the Conversational Retrieval Chain

With LangChain's ConversationalRetrievalChain, we can link the language model and the vector database.

from langchain.chains import ConversationalRetrievalChain

retrieval_chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=vector_store.as_retriever(search_kwargs={"k": 3}),  # retrieve the top 3 chunks
    return_source_documents=True
)

Step 6: Implement the Chatbot Logic

We define the logic for maintaining conversation history and generating responses.

conversation_history = []

def get_response(user_query):
    response = retrieval_chain({
        "question": user_query,
        "chat_history": conversation_history
    })
    conversation_history.append((user_query, response['answer']))
    return response['answer']

Step 7: Build the User Interface with Gradio

Finally, create a Gradio interface to interact with the chatbot.

import gradio as gr

def chat_interface(user_input, history):
    response = get_response(user_input)
    history.append((user_input, response))
    return history, history

with gr.Blocks() as demo:
    chatbot = gr.Chatbot()
    state = gr.State([])
    with gr.Row():
        user_input = gr.Textbox(show_label=False, placeholder="Enter your question...")
        submit_btn = gr.Button("Send")
    submit_btn.click(chat_interface, inputs=[user_input, state], outputs=[chatbot, state])

demo.launch()  # serve the web UI

Running the Code

Save the script as app.py and run:

python app.py

Hurray! You are done. The Gradio interface will launch, allowing you to chat with your document.
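You can also sanity-check the chain directly from a Python session before relying on the UI. A quick check, assuming the objects defined above and questions your PDF can answer (the questions below are only examples):

print(get_response("What are the symptoms of malaria?"))
print(get_response("And how is it treated?"))  # the follow-up uses the stored chat history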
But why stop here? You can go further by building any of the following functionality into the chatbot:

- Enhanced vector store: use other vector databases, like Milvus or Pinecone, for scalability.
- Fine-tuned models: experiment with fine-tuned Groq models for domain-specific accuracy.
- Multi-document support: extend the system to handle multiple documents (see the sketch after this list).
- Better context handling: refine the conversational logic to better manage longer chat histories.
- Custom UI: design a more polished user interface with advanced styling and features.
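As a starting point for the multi-document idea above, here is a hedged sketch that reuses the helpers from Steps 1-3. The file names are hypothetical placeholders:

# Index several PDFs into the same Chroma store.
pdf_paths = ["diseases.pdf", "treatments.pdf"]  # hypothetical file names

for path in pdf_paths:
    chunks = split_text_into_chunks(extract_text_from_pdf(path))
    # Tag each chunk with its source so answers can point to the right file.
    vector_store.add_texts(
        texts=chunks,
        metadatas=[{"source": path}] * len(chunks),
    )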
Congratulations! You've successfully built a document-based chatbot using Groq and LangChain. Experiment with improvements and build something amazing!

Resources:
- Disease PDF: https://nios.ac.in/media/documents/SrSec314NewE/Lesson-29.pdf
- LangChain: https://www.langchain.com/
- Groq: https://groq.com/

Vineet Kumar is a consulting intern at MarkTechPost. He is currently pursuing his BS from the Indian Institute of Technology (IIT), Kanpur. He is a machine learning enthusiast, passionate about research and the latest advancements in deep learning, computer vision, and related fields.
  • TOWARDSAI.NET
    Unlocking the Advantages of Semantic Chunking to Supercharge Your RAG Models
Author(s): Aditya Baser. Originally published on Towards AI.

1. Introduction

1.1. What is chunking, and why do we need it?

The intuition behind chunking and how it helps in the retrieval of information

Imagine you are searching for a specific piece of information in a vast library. If the books are arranged haphazardly, some with irrelevant sections bound together and others with critical pages scattered across volumes, you'd spend a frustrating amount of time flipping through unrelated content. Now, consider a library where each book is carefully organized by topic, with coherent sections that neatly encapsulate a single idea or concept. This is the intuition behind chunking in the context of retrieval-augmented generation (RAG): it's about organizing information so it can be easily retrieved and understood.

[RAG workflow diagram; our emphasis is on understanding chunking]

Chunking refers to the process of dividing large bodies of text into smaller, self-contained segments called chunks. Each chunk is designed to encapsulate a coherent unit of information that can be efficiently stored, retrieved, and used for downstream tasks like search, indexing, or contextual input for an LLM.

1.2. What are the different types of chunking methods?

Extending the library analogy, imagine you walk into the library to find information about "The Effects of Climate Change on Marine Life." The way the books are organized will determine how easily you can find the specific information you're looking for.

1.2.1. Fixed-Length Chunking

Every book in the library is arbitrarily divided into fixed-sized sections, say, 100 pages each. No matter what the content is, each section stops at the 100-page mark. As a result, a chapter about coral bleaching might be split across two sections, leaving you scrambling to piece together the full information.

Fixed-length chunking splits text into chunks based on a fixed token, word, or character count. While this method is simple to implement, it often splits relevant information across chunks, or packs a single chunk with information on several topics, making retrieval less accurate.

1.2.2. Recursive Chunking (Hierarchical)

The books are structured into sections, chapters, and paragraphs following their natural hierarchy. For instance, a book on climate change might have sections on global warming, rising sea levels, and marine ecosystems. However, if a section about marine life is too large, it may remain unwieldy and difficult to search through quickly.

Recursive chunking breaks text hierarchically, following natural structures such as chapters, sections, or paragraphs. While it preserves the natural structure of the document, it can produce chunks that are too large when sections are lengthy and poorly organized.

1.2.3. Semantic Chunking

In this case, the books are reorganized based on meaning and topic coherence. Instead of rigidly splitting sections by length or following a strict hierarchy, every section focuses on a specific topic or concept. For example, a section might cover "The Impact of Rising Temperatures on Coral Reefs" in its entirety, regardless of length, ensuring all related content stays together. As a result, you can retrieve exactly what you need without having to sift through unrelated material.

Semantic chunking uses meaning or context to define chunk boundaries, often leveraging embeddings or similarity measures to detect where one topic ends and another begins.
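To make that last idea concrete, here is a minimal sketch of similarity-based boundary detection. The embed function is a hypothetical placeholder for any sentence-embedding model (the article later uses an OpenAI encoder via semantic_router); a new chunk starts wherever the similarity between adjacent sentences drops below a threshold:

import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def split_on_semantic_drop(sentences, embed, threshold=0.5):
    # embed(text) -> 1-D numpy vector; placeholder for a real encoder.
    chunks, current = [], [sentences[0]]
    for prev, cur in zip(sentences, sentences[1:]):
        if cosine(embed(prev), embed(cur)) < threshold:
            chunks.append(" ".join(current))  # similarity dropped: topic change
            current = [cur]
        else:
            current.append(cur)
    chunks.append(" ".join(current))
    return chunks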
2. Semantic Chunking: 101

Semantic chunking involves breaking text into smaller, meaningful units (chunks) that retain context and meaning.

2.1. Why Semantic Chunking is Superior

Semantic chunking stands out among chunking methods because it optimizes the retrieval process for contextual relevance, precision, and user satisfaction. In retrieval-augmented generation (RAG), where the goal is to feed highly relevant and coherent information into a large language model (LLM), semantic chunking eliminates many pitfalls associated with fixed-length and hierarchical approaches. Let's explore the unique advantages of semantic chunking and why it is crucial for building high-performance RAG systems.

2.1.1. Context Preservation

Semantic chunking ensures that each chunk contains complete, self-contained information related to a single topic. This contrasts with fixed-length chunking, where arbitrary boundaries often split context, leading to incomplete or fragmented information retrieval. When feeding an LLM, context completeness is critical: missing context forces the LLM to hallucinate or generate suboptimal answers, while semantic chunking minimizes this risk by delivering coherent inputs.

2.1.2. Improved Retrieval Precision

Semantic chunking generates chunks that are tightly focused on specific topics. This makes it easier for retrieval systems to match queries to the most relevant chunks, improving the precision of retrieval. Precise retrieval reduces the number of irrelevant chunks passed to the LLM. This saves tokens, minimizes noise, and ensures the LLM focuses only on information that directly answers the query.

2.1.3. Minimized Redundancy

Semantic chunking reduces overlap and redundancy across chunks. While some overlap is necessary for preserving context, semantic chunking ensures this is deliberate and optimized, unlike fixed-length chunking, where overlaps are arbitrary and often wasteful. RAG pipelines often must deal with token constraints; redundancy wastes valuable token space, while semantic chunking maximizes the information density of each chunk.

3. Implementing Semantic Chunking

3.1. Loading the dataset and setting up the API key

We will use the dataset jamescalam/ai-arxiv2, which contains research papers on artificial intelligence. These papers are often long and contain distinct sections like abstracts, methodologies, experiments, and conclusions. Chunking this dataset using semantic methods ensures we preserve context within sections and facilitate efficient retrieval for downstream tasks like summarization or question answering.

[Snippet of the dataset jamescalam/ai-arxiv2]

Semantic chunking stands out by splitting text based on meaning and context rather than arbitrary rules, ensuring each chunk is coherent and self-contained. One of the key tools for implementing it is the semantic_router package. Its semantic_router.splitters module is specifically designed for splitting text into meaningful chunks and offers three key chunking methods, consecutive_sim, cumulative_sim, and rolling_window, each catering to different document structures and use cases.
To use OpenAI's tools, you need an API key for authentication, which we securely load from a .env file using the dotenv library. This keeps your key safe and out of your code. The OpenAIEncoder is then initialized to convert text into embeddings, numerical representations of meaning and context. These embeddings are crucial for semantic chunking, enabling us to measure similarity between text segments and create coherent chunks. Make sure your API key is set in the .env file and the encoder is configured with an embedding model (text-embedding-3-small below) for efficient and accurate embedding generation. Below is the code:

from datasets import load_dataset
import os
from dotenv import load_dotenv
import openai
from semantic_router.encoders import OpenAIEncoder

# Import the data
dataset = load_dataset("jamescalam/ai-arxiv2", split="train")

# Securely load the OpenAI API key from a .env file
load_dotenv()
openai.api_key = os.environ["OPENAI_API_KEY"]

# The OpenAIEncoder is initialized for accurate embedding generation
encoder = OpenAIEncoder(name="text-embedding-3-small")

The code below uses the RollingWindowSplitter from the semantic_router package to semantically chunk the dataset. The rolling window technique creates overlapping chunks to maintain context across boundaries, making it particularly effective for NLP tasks like retrieval-augmented generation (RAG). The rolling window splits text into chunks of a specified size (defined by window_size) with overlaps between adjacent chunks. This overlap helps preserve context from one chunk to the next, ensuring downstream models, such as large language models (LLMs), receive coherent input for processing.

3.2. RollingWindowSplitter Parameter Breakdown

- encoder: generates the embeddings that represent the text's semantic meaning; they are used to measure similarity and guide chunking.
- dynamic_threshold = False: disables automatic adjustment of the similarity threshold based on content, so chunks are determined solely by the fixed parameters (window_size, min_split_tokens, etc.). Best practice: use False when you have a clear idea of your thresholds or the dataset is consistent in structure; use True for varied or unstructured datasets.
- min_split_tokens = 100: ensures each chunk contains at least 100 tokens, preventing overly small, uninformative chunks. Best practice: set this based on the minimum amount of information required for your task.
- max_split_tokens = 500: caps each chunk at 500 tokens to fit within token limits for downstream models (e.g., OpenAI models with token constraints). Best practice: match this value to your LLM's token limit, subtracting space for query tokens and prompts.
- window_size = 2: specifies how many segments (e.g., sentences, paragraphs) to include in each comparison window. Smaller windows produce tighter chunks; larger windows preserve more context but may include unrelated content. Best practice: adjust based on the granularity of your text (e.g., 1-2 for short sentences, 3-5 for paragraphs).
- plot_splits: visualizes the chunking process, showing how the text was divided into chunks; helpful for debugging and parameter tuning.
- enable_statistics: outputs statistics about the chunking process, such as the number of chunks and their average size. This helps evaluate how well your chunking configuration performs.
from semantic_router.splitters import RollingWindowSplitter
from semantic_router.utils.logger import logger

logger.setLevel("WARNING")  # reduce logs from the splitter

encoder.score_threshold = config.score_threshold  # assumes a config object holding your settings

# Read the parameter breakdown above for best practices
splitter = RollingWindowSplitter(
    encoder=encoder,
    dynamic_threshold=False,
    min_split_tokens=100,
    max_split_tokens=500,
    window_size=2,
    plot_splits=False,       # set to True to visualize chunking
    enable_statistics=False  # set to True to print chunking stats
)

splits = splitter([dataset["content"][0]])

The build_chunk function combines a title and a content chunk into a formatted string, where the title is prefixed with a # (a markdown heading) followed by the content. This is useful for creating easily readable and structured outputs, particularly when chunking large datasets like research papers. In the example, the title is taken from the first document in the dataset, and the function is applied to the first three chunks from splits. Looping through these chunks prints them as well-organized sections, showing how each chunk relates to the overall document title. This approach ensures clarity and structure, making the output more comprehensible for tasks like summarization or retrieval.

def build_chunk(title: str, content: str):
    return f"# {title}\n{content}"

# We use it like:
title = dataset[0]["title"]
for s in splits[:3]:
    print("---")
    print(build_chunk(title=title, content=s.content))

The build_metadata function creates a structured metadata list for a document and its corresponding chunks. It starts by extracting document-level metadata like the arXiv ID, title, and references, then iterates over the provided chunks (doc_splits) to assign each chunk its own metadata. For each chunk, it records identifiers for the current chunk, the previous chunk (prechunk_id), and the next chunk (postchunk_id) to maintain contextual links without storing the full neighboring chunks, which saves storage in systems like Pinecone. This metadata structure is particularly useful for indexing and retrieval tasks, as it combines chunk-level context with document-wide details for efficient querying and navigation.

from semantic_router.schema import DocumentSplit

def build_metadata(doc: dict, doc_splits: list[DocumentSplit]):
    # Get document-level metadata first
    arxiv_id = doc["id"]
    title = doc["title"]
    refs = list(doc["references"].values())
    # Init split-level metadata list
    metadata = []
    for i, split in enumerate(doc_splits):
        # Get neighboring chunk IDs (empty at the document edges)
        prechunk_id = "" if i == 0 else f"{arxiv_id}#{i-1}"
        postchunk_id = "" if i+1 == len(doc_splits) else f"{arxiv_id}#{i+1}"
        # Create dict and append to metadata list
        metadata.append({
            "id": f"{arxiv_id}#{i}",
            "title": title,
            "content": split.content,
            "prechunk_id": prechunk_id,
            "postchunk_id": postchunk_id,
            "arxiv_id": arxiv_id,
            "references": refs
        })
    return metadata

metadata = build_metadata(doc=dataset[0], doc_splits=splits[:3])

[Metadata structure]
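For orientation, each entry produced by build_metadata looks roughly like this; all values below are illustrative placeholders:

example_entry = {
    "id": "2401.00001#1",            # hypothetical arXiv ID, chunk index 1
    "title": "An Example Paper",
    "content": "...the chunk's text...",
    "prechunk_id": "2401.00001#0",   # "" for the first chunk of a document
    "postchunk_id": "2401.00001#2",  # "" for the last chunk of a document
    "arxiv_id": "2401.00001",
    "references": ["...", "..."],
}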
This code connects to a Pinecone instance, a vector database optimized for storing and retrieving embeddings, using an API key for authentication. It checks whether the specified index (configured via config.chunk_index_name) already exists; if not, it creates a new index with a dimensionality (dims) matching the embeddings generated by the encoder. The index uses the dotproduct similarity metric for vector comparisons, and ServerlessSpec specifies the cloud and region (e.g., us-east-1). The code waits for the index to initialize before connecting and displaying its stats. This setup ensures that your embeddings can be stored, queried, and managed efficiently for downstream tasks like semantic search or retrieval.

To use Pinecone, you first need an API key to authenticate your connection. Head over to Pinecone's website and sign up for a free account. Once logged in, navigate to the API Keys section in the Pinecone dashboard, where you'll find an automatically generated key or the option to create a new one. Copy the key and store it in your environment variables file (e.g., .env) as PINECONE_API_KEY. This keeps your key private and accessible to your code without hardcoding it, enhancing security while enabling seamless integration.

import time
from pinecone import Pinecone, ServerlessSpec

# Initialize connection to Pinecone (get an API key at app.pinecone.io)
api_key = os.environ["PINECONE_API_KEY"]

# Configure the client
pc = Pinecone(api_key=api_key)

spec = ServerlessSpec(
    cloud=config.chunk_cloud,
    region=config.chunk_region  # us-east-1 is the free-tier region
)

# Dimensionality of the encoder's embeddings
dims = len(encoder(["some random text"])[0])

index_name = config.chunk_index_name

# Check if the index already exists (it shouldn't on a first run)
if index_name not in pc.list_indexes().names():
    # If it does not exist, create the index
    pc.create_index(
        index_name,
        dimension=dims,  # dimensionality of the OpenAI embeddings
        metric='dotproduct',
        spec=spec
    )
    # Wait for the index to be initialized
    while not pc.describe_index(index_name).status['ready']:
        time.sleep(1)

# Connect to the index
index = pc.Index(index_name)
time.sleep(1)

# View index stats
index.describe_index_stats()

This code processes the dataset in batches to create semantic chunks, embed them, and store the results in the Pinecone index. It first converts the dataset into a pandas DataFrame (limited to 10,000 documents for efficiency) and prepares a list, full_dataset, to store all chunk metadata. The splitter is configured to suppress statistics and visual output for faster processing. For each document, the splitter generates chunks and build_metadata adds identifiers and metadata. Batches of chunks (batch_size = 128) are then processed: unique IDs are assigned to each chunk, embeddings are generated with the encoder, and everything is uploaded to the Pinecone index using the upsert method.
This approach ensures scalable and efficient processing, embedding, and storage for large datasets in Pinecone, suitable for retrieval-augmented generation and semantic search.

from tqdm.auto import tqdm

# Easier to work with the dataset as a pandas DataFrame
data = dataset.to_pandas().iloc[:10000]  # limit to 10k docs

# Store the dataset *without* embeddings here
full_dataset = []
batch_size = 128

# Adjust the splitter to not display stats and visuals
splitter.enable_statistics = False
splitter.plot_splits = False

for doc in tqdm(data.to_dict(orient="records")):  # iterate the 10k-doc subset
    # Create splits
    splits = splitter([doc["content"]])
    # Create IDs and metadata for all splits in the doc
    metadata = build_metadata(doc=doc, doc_splits=splits)
    for i in range(0, len(splits), batch_size):
        i_end = min(len(splits), i+batch_size)
        # Get batch of data
        metadata_batch = metadata[i:i_end]
        full_dataset.extend(metadata_batch)
        # Unique IDs for each chunk
        ids = [m["id"] for m in metadata_batch]
        # Get text content to embed
        content = [
            build_chunk(title=x["title"], content=x["content"])
            for x in metadata_batch
        ]
        # Embed text
        embeds = encoder(content)
        # Add to Pinecone (metadata must be the matching batch)
        index.upsert(vectors=zip(ids, embeds, metadata_batch))

The query function retrieves relevant chunks from the Pinecone index for a user's input query (text), embedding it with the same encoder used during index creation so the query lands in the same semantic space. The function searches the index for the top k matches, where top_k=5 retrieves the 5 most similar chunks based on the similarity metric (here, dot product, which measures alignment in the embedding space). It includes metadata for each match (include_metadata=True), such as the title, content, and the IDs of the preceding (prechunk_id) and following (postchunk_id) chunks. Neighboring chunks are fetched to provide additional context, appending up to 400 characters from their edges to the current chunk. Each result is then formatted with the document's title as a heading and enriched with that surrounding context, so the query response is accurate, relevant, and easy to understand.

def query(text: str):
    # Use the same encoder as at index time so the query is embedded
    # into the same vector space
    xq = encoder([text])[0]
    matches = index.query(
        vector=xq,
        top_k=5,               # how many chunks to retrieve
        include_metadata=True  # return the stored metadata too
    )
    chunks = []
    for m in matches["matches"]:
        content = m["metadata"]["content"]
        title = m["metadata"]["title"]
        pre = m["metadata"]["prechunk_id"]
        post = m["metadata"]["postchunk_id"]
        # Skip empty neighbor IDs at the document edges
        ids_to_fetch = [i for i in (pre, post) if i]
        other_chunks = index.fetch(ids=ids_to_fetch)["vectors"] if ids_to_fetch else {}
        prechunk = other_chunks[pre]["metadata"]["content"] if pre in other_chunks else ""
        postchunk = other_chunks[post]["metadata"]["content"] if post in other_chunks else ""
        chunk = f"""# {title}

{prechunk[-400:]}
{content}
{postchunk[:400]}"""
        chunks.append(chunk)
    return chunks

query("what are large language models?")

[Retrieving 5 relevant chunks from the database to be fed into the LLM]
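In a complete RAG pipeline, the retrieved chunks would then be packed into an LLM prompt. A hedged sketch of that final step, where the commented-out llm call is a placeholder for whichever chat-completion client you use:

question = "what are large language models?"
context = "\n\n---\n\n".join(query(question))
prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {question}"
)
# answer = llm(prompt)  # placeholder: e.g., an OpenAI or Groq chat completion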
  • WWW.IGN.COM
    Lenovo Kicks Off the New Year With Great Discounts on Legion Gaming PCs and Laptops
Lenovo is kicking off the new year with some great deals on several of its Legion gaming PC and laptop models. Most of these configurations have coupon codes that are automatically applied in your shopping cart, plus you get free shipping. Check out all the deals below.

Lenovo Gaming PCs
- Lenovo Legion Tower 7i Intel Core i9-14900KF RTX 4080 Super Gaming PC
- Lenovo Legion Tower 5 AMD Ryzen 7 7700 RTX 4070 Ti Super Gaming PC
- Lenovo Legion Tower 5 AMD Ryzen 7 7700 RTX 4070 Super Gaming PC
- Lenovo Legion Tower 5 Intel Core i5-14400F RTX 4060 Gaming PC

Lenovo Gaming Laptops
- Lenovo Legion Pro 7 16" Intel Core i9-14900HX RTX 4090 Gaming Laptop
- Lenovo Slim 5 16" AMD Ryzen 7 8845HS RTX 4070 Gaming Laptop
- Lenovo Legion Pro 5 16" Intel Core i7-14700HX RTX 4070 Gaming Laptop
- Lenovo Slim 5 16" AMD Ryzen 7 8845HS RTX 4060 Gaming Laptop
- Lenovo Legion Pro 5 16" AMD Ryzen 7 7745HX RTX 4060 Gaming Laptop
- Lenovo LOQ 15" Intel Core i7-13650HX RTX 4060 Gaming Laptop

You can quickly browse through all of the listed products on sale above.

Lenovo Legion Tower 7i RTX 4080 Super Gaming PC

Lenovo Legion Tower 7i Intel Core i9-14900KF RTX 4080 Super Gaming PC

The Legion Tower 7i RTX 4080 Super gaming PC is currently available for just over $2,200 shipped. The 4080 Super is a second-generation card that supersedes the RTX 4080. It's Nvidia's second most powerful card and a superior GPU to AMD's Radeon RX 7900 XTX if you factor in DLSS and ray tracing performance. In our RTX 4080 Super review, Jacqueline Thomas writes, "If you're in the market for a 4K graphics card around $1,000, it's hard to think of any GPU that's a better purchase right now."

Lenovo Legion Tower 5 RTX 4070 Ti Super Gaming PC

Lenovo Legion Tower 5 AMD Ryzen 7 7700 RTX 4070 Ti Super Gaming PC

This Legion Tower 5 gaming PC is equipped with an RTX 4070 Ti Super GPU for under $1,600. The RTX 4070 Ti Super is a second-gen RTX 40 series GPU; it's about 10% more powerful than the RTX 4070 Ti and 15-25% faster than the RTX 4070 Super. This is a great card for both 1440p and 4K gaming. The VRAM count is upped to 16GB compared to the RTX 4070 Ti's 12GB, which means it has the same amount of VRAM as the RTX 4080 Super. With this level of GPU power, enabling ray tracing is also a very real possibility in many games.

Lenovo Legion Tower 5 RTX 4070 Super Gaming PC

Lenovo Legion Tower 5 AMD Ryzen 7 7700 RTX 4070 Super Gaming PC

This Legion Tower 5 gaming PC features an RTX 4070 Super GPU for $1,425. The RTX 4070 Super is one of the best graphics cards for most people; it's affordably priced and yet powerful enough to run games at up to 4K (although 1440p is its sweet spot). In her RTX 4070 Super review, Jacqueline Thomas wrote that "the Nvidia GeForce RTX 4070 Super is the mid-range card we should have got back when the RTX 4000 generation launched. It's around 16% faster in most games at the same price. If you've been waiting to make your GPU upgrade, now's the time."

Lenovo Legion Pro 7 16" RTX 4090 Gaming Laptop

Lenovo Legion Pro 7 16" Intel Core i9-14900HX RTX 4090 Gaming Laptop

If you're looking for an incredibly powerful gaming desktop replacement, this laptop is a strong contender. The Lenovo Legion Pro 7 gaming laptop features a 16" QHD+ display, an Intel Core i9-14900HX CPU, an RTX 4090 GPU, 32GB of RAM, and a 2TB SSD. The Intel Core i9-14900HX is Intel's most powerful mobile CPU and excels at both gaming and workstation duties. The RTX 4090 is still the most powerful mobile GPU on the market, and by a very substantial margin. It's roughly equivalent to a desktop RTX 3090 GPU.
It has more than enough power to run most games at well over 60fps on the 2560x1600 display. In fact, it could probably hit 240fps (the display has a 240Hz refresh rate).

Lenovo Legion Pro 5 16" RTX 4070 Gaming Laptop

Lenovo Legion Pro 5 16" Intel Core i7-14700HX RTX 4070 Gaming Laptop

This Lenovo Legion Pro 5 gaming laptop offers a lot of upgrades for a very reasonable price. Unlike most laptops at this price point, it's equipped with an Intel Core i7-14700HX, an RTX 4070 GPU with the maximum TGP rating, 16GB of DDR5 RAM, and a 1TB SSD. The RTX 4070 mobile GPU performs on par with the RTX 3080 and should provide ample horsepower to power games on the QHD+ display.

Lenovo Legion Pro 5 RTX 4060 Gaming Laptop

Lenovo Legion Pro 5 16" AMD Ryzen 7 7745HX RTX 4060 Gaming Laptop

This Lenovo Legion Pro 5 gaming laptop features a 16" QHD+ display, a fully-powered AMD Ryzen 7 7745HX CPU, an RTX 4060 GPU, 16GB of RAM, and a 1TB SSD. Performance-wise, the RTX 4060 mobile GPU sits right between the RTX 3070 and RTX 3070 Ti. It should run most games smoothly on the QHD+ display, especially with DLSS enabled.

Lenovo Legion Slim 5 16" Gaming Laptops

Lenovo Slim 5 16" AMD Ryzen 7 8845HS RTX 4070 Gaming Laptop
Lenovo Slim 5 16" AMD Ryzen 7 8845HS RTX 4060 Gaming Laptop

The Lenovo Slim 5 Gen 9 is a current-generation 2024 model. It weighs in at only 4.63 pounds and measures 0.78" at its thinnest point. The Legion Slim model is thinner than your typical gaming laptop, yet is still equipped with a GPU that has the same high-wattage TGP as the full-sized Legion Pro.

Why Choose Lenovo?

Lenovo Legion gaming PCs and laptops generally feature better and more rugged build quality than most other prebuilt PCs. For desktop PCs in particular, people like that Lenovo does not use many proprietary components in its rigs, so the PCs are much easier to upgrade with easily obtainable, off-the-shelf parts. For laptops, Lenovo generally does not throttle the GPU on most of its Legion models, so you should expect maximum performance from a given GPU. Lenovo generally includes a solid one-year warranty with the option to extend.

Why Should You Trust IGN's Deals Team?

IGN's deals team has a combined 30+ years of experience finding the best discounts in gaming, tech, and just about every other category. We don't try to trick our readers into buying things they don't need at prices that aren't worth it. Our ultimate goal is to surface the best possible deals from brands we trust and that our editorial team has personal experience with. You can check out our deals standards here for more information on our process, or keep up with the latest deals we find on IGN's Deals account on Twitter.

Eric Song is the IGN commerce manager in charge of finding the best gaming and tech deals every day. When Eric isn't hunting for deals for other people at work, he's hunting for deals for himself in his free time.
  • WWW.DENOFGEEK.COM
    SNL50 Documentary Reveals The Full Story of a Classic Sketch
Try as it might, science has yet to answer one of humankind's most pressing questions: What is the best Saturday Night Live sketch of all time?

Even the most ardent SNL fans will acknowledge that the long-running TV institution has turned in more dud sketches than classics over the years. That's just to be expected from a show that churns out around 200 sketches a season for 50 seasons. Still, when a good SNL sketch hits, it really hits. And among those good sketches deemed classics, one has to stand out as the best of the best. In the third episode of the new Peacock docuseries SNL50: Beyond Saturday Night, producers decide to hone in on one candidate. Folks, I have a fever. And the only prescription is you watching the "More Cowbell" sketch right now.

First airing on April 8, 2000 as part of season 25 episode 16, "Christopher Walken/Christina Aguilera," the sketch that came to be known as "More Cowbell" depicts a fictional version of the band Blue Öyster Cult as they struggle to incorporate the correct amount of cowbell into their hit song "Don't Fear the Reaper." The sketch not only proved itself to be an immediate hit but also achieved impressive staying power, with fans constantly annoying host Christopher Walken for years afterward by requesting more cowbell.

Featuring interviews with the involved cast members (with one notable Horatio Sanz-sized exception), SNL employees, and even members of Blue Öyster Cult, SNL50: Beyond Saturday Night episode 3, "More Cowbell," presents one of the deepest looks at an SNL sketch ever. In the process we learn quite a bit about what goes into the making of a classic. Here are the most important things to know about "More Cowbell."

"More Cowbell" made it all but impossible to listen to "Don't Fear the Reaper" without hearing the consistent percussion rumbling away in the background. The writer and star of the sketch, however, picked up on it early. In SNL50: Beyond Saturday Night, Will Ferrell describes his memories of listening to the song as a kid on the car radio and being struck by the cowbell. "It's the perfect calibration of loud enough but not too loud. It's really kind of impotent in the background. I had the thought, even as a kid, what is the life of the guy playing the cowbell?"

That memory led him to place "Don't Fear the Reaper" on an idea board in his SNL office and eventually develop the final concept. An early version of the sketch was intended for an episode hosted by Norm Macdonald but was eventually cut. It got a second chance during the April 8, 2000 episode that featured Christopher Walken's fourth time hosting.

The Original Name of the Sketch Was "Recording Session"

Longtime SNL aficionados (and regular listeners of The Lonely Island and Seth Meyers Podcast) know that SNL sketches are often given deliberately boring names. That's because, before the writer of a sketch gets the opportunity to make an audience laugh, they have to make their writing peers laugh at the table read. Spoiling the joke in the title is antithetical to that mission. That's why the sketch we now know as "More Cowbell" is technically officially titled "Recording Session."

The Cowbell Was Almost a Woodblock

In addition to receiving a fake-out name, the early drafts of "More Cowbell" had one major difference: there was no cowbell to be seen or heard! Instead, the instrument played by Will Ferrell's Gene Frenkle was a woodblock, as that's what Ferrell originally interpreted it as. And funnily enough, he may have had it right the first time.
No one involved in the recording of the Blue Öyster Cult song can remember for sure who played the instrument on the track, and their opinions differ on whether it was a cowbell or a woodblock.

Gene Frenkle Isn't Real, But Bruce Dickinson Is

Speaking of Blue Öyster Cult, the band does not have a dedicated cowbell/woodblock player named Gene Frenkle. While that is purely a Will Ferrell invention, his Gene Frenkle does look like BÖC lead vocalist Eric Bloom. In fact, all of the cast resembles actual Blue Öyster Cult members as seen on the cover of a compilation album, just playing the wrong instruments. Chris Kattan is playing lead guitar like Buck Dharma but is dressed like drummer Albert Bouchard. Jimmy Fallon is playing drums like Bouchard but is dressed like guitarist/keyboardist Allen Lanier.

Similarly, Christopher Walken's producer character, Bruce Dickinson, is a real person, but he had nothing to do with the original production of "Don't Fear the Reaper." Instead, he was a manager at Columbia Records whose name was listed as reissue producer on Blue Öyster Cult's greatest hits compilation. "It's kind of a funny-sounding name. That was the extent of my research: the back of a CD cover," Ferrell says in the doc.

The Dress Rehearsal Version Was Flat

Before the final version of the show airs live on Saturday night, SNL does a dress rehearsal of its planned sketches in front of a live test audience. The dress rehearsal for "More Cowbell" didn't indicate in any way that the sketch would go on to become a hit. "I think it was kind of... fine," Ferrell says with a grimace in the doc. Walken opted for a more subdued Bruce Dickinson performance in the dress rehearsal, undoubtedly saving his energy for the aired show. Ferrell was also noticeably less physical. Additionally, the sketch was slotted for Stage 1 in Studio 8H, known among the cast as "Shit-Can Alley," "The Death Corner," and "Coffin Corner" due to its positioning at the far left of the audience's eyeline. All in all, it was looking fairly bleak for "More Cowbell." But the end result was an instant classic, due in no small part to Walken's amped-up performance. "Christopher Walken, for air, upped his game. It was almost like he was doing an impersonation of Christopher Walken," Jimmy Fallon says.

The Best of Will Ferrell DVD Gave Cowbell (and Cowbells) a Second Life

It's hard to even conceive of it now, but back in 2000 Saturday Night Live faced the same predicament that most television did. Outside of the lucky few who taped it live, it was impossible to rewatch an episode on demand. That's why the "More Cowbell" sketch didn't fully take off until the release of the Saturday Night Live: The Best of Will Ferrell DVD in 2002. Indeed, that's where I first watched it.

In the doc, Ferrell and the rest of the cast discuss the explosion in the sketch's popularity following the release of the DVD. The sketch played on the video board before Mississippi State football games, and Ferrell got to play cowbell live with bands like Red Hot Chili Peppers and Queens of the Stone Age. The biggest fans of "More Cowbell," however, may have been cowbells themselves. SNL50: Beyond Saturday Night catches up with Ranco Cowbells owner John Karpi to discuss his pride at the company's product being used as the featured cowbell in the sketch. "To be a manufacturer of cowbells at that time. Wow, we were superstars."

All four episodes of SNL50: Beyond Saturday Night are available to stream on Peacock now.
  • 9TO5MAC.COM
    Rumor Replay: Apple's big 2025 product release plans
This is Rumor Replay, a weekly column at 9to5Mac offering a quick rundown of the most recent Apple product rumors, with analysis and commentary. Today: Apple's big 2025 product release plans, including iPhone 17 Pro camera upgrades, 17 Air thinness, and more. Here are this week's Apple rumors.

iPhone 17 Pro camera trio all goes 48MP

Leaker Digital Chat Station shared this past week about camera changes to expect in the iPhone 17 lineup. They confirmed what we'd previously heard about all iPhone 17 models getting 24MP selfie cameras, an upgrade from 12MP. New, however, was the claim that both the iPhone 17 Pro and Pro Max will have 48MP sensors for all three of their rear cameras. The main camera went 48MP on the iPhone 15 Pro, the Ultra Wide followed suit on the 16 Pro, and now it sounds like the 17 Pro will complete the trio by upgrading the Telephoto lens to 48MP too.

My takeaways

It seems like Apple's plan is to lean hard in two opposite directions with its flagship iPhone 17 models. The iPhone 17 Air will sacrifice the Ultra Wide and Telephoto lenses for the sake of thinness and keeping costs lower. The iPhone 17 Pro and Pro Max, however, will push their camera advantages by boasting Apple's first ever trio of 48MP lenses. I wouldn't be surprised to see more key upgrades to the Pro line's cameras this year to further differentiate them from the Air.

iPhone 17 Air will boast 5.5mm frame

Speaking of the iPhone 17 Air, Ming-Chi Kuo reported this week that the highly anticipated device would have its thinnest part around 5.5mm. This is substantially thinner than the 6.25mm we had previously heard, and brings the Air into M4 iPad Pro territory. The phrasing also calls into question what "thinnest part" really means.

My takeaways

Ever since Kuo's report came to light, I've been thinking about the possibility of a tapered design for the iPhone 17 Air. It would pay homage to the MacBook Air's classic and beloved look. It would also give Apple the opportunity to boast about the iPhone's ultra-thin body while conveniently ignoring the fact that much of the device is a bit thicker. A tapered design could set the iPhone 17 Air apart even more than expected, putting even more pressure on the 17 Pro to win shoppers over.

Apple's product release plans for 2025

Over the weekend, Mark Gurman shared a rundown of Apple's product release plans for the year ahead. Some of the highlights include:

- No Vision Pro 2 in 2025
- Apple's new HomePad arriving slightly later than the original March projection
- Apple Watch SE sporting a new look when it arrives this fall
- iPad 11 likely coming with an A17 Pro chip
- New Apple TV 4K and HomePod mini 2 release timing

My takeaways

Gurman has the best Apple sources, so I don't doubt any of what he shared here. That said, some of his reporting can intentionally be left open for interpretation. For example, it sounds like the HomePad's delay will be extremely minor. He says it depends on new Siri features and app intents, which are coming in iOS 18.4 this April. But lots of second-hand reporting latched on to Gurman's mention of iOS 19 to suggest the device had been delayed significantly. And with Vision Pro 2, the device's absence doesn't sound like a certainty.
Gurman's exact words are as follows: "As of now, I don't believe there will be a new headset from Apple shipping this year, though there theoretically could be an unveiling ahead of a release later."

If Apple ends up shipping a minor Vision Pro update with the M5 chip, as Ming-Chi Kuo previously reported, it wouldn't necessarily require a lot of effort from the company or its designers. Thus, Apple could ship it in 2025 with minimal leak-worthy rumblings inside the company. Reading between the lines, I think Gurman may recognize this too, so he's less definitive on the claim that a new Vision Pro isn't coming. But I'll be interested to see if things change, especially if Apple has a big visionOS 3 story at WWDC.

Which Apple 2025 products are most interesting to you? Let us know in the comments.
  • FUTURISM.COM
    Elon Musk Throws Tantrum After Being Exposed by Gamers, Leaks YouTuber's DMs and Seemingly Takes Away His Checkmark
Professional manchild and Tesla billionaire Elon Musk is deeply unhappy that he's been caught faking clout, and now he's throwing a tantrum, taking away the toys of other tykes who were mean to him. And no, this isn't some far-fetched analogy.

Earlier this month, Musk was compellingly accused of masquerading as a top-level player of the recently released free-to-play action RPG Path of Exile 2 (PoE 2) by paying a skilled gamer to level up a character into the game's top echelons. During a January 7 livestream on X-formerly-Twitter, the platform's owner struggled with the very basics of the game, failing to even reliably click certain UI elements. In other words, there was no chance this was the same gamer who'd been controlling a character that would have required many hundreds of hours of grinding to level up. Then the character mysteriously perma-died, sparking conspiracy theories that Musk had killed it through his own incompetence. Some of the game's most well-known players mocked his ruse extensively on social media, calling him out for a bizarre "ego-trip."

Now, Musk has seemingly had enough of being made fun of. In an apparent attempt to silence his critics, the richest man in the world took aim at American YouTuber Zack Hoyt, a streamer who goes by the handle Asmongold. Earlier this week, Hoyt published a 32-minute video on YouTube discussing how Musk won't be "getting away" with parading as a top-level gamer, one of countless takedowns of the billionaire's childish antics on the video platform.

Days later, Musk took to X-formerly-Twitter, sharing screenshots of DMs between him and the controversial streamer, a privacy-invading practice that, as a Community Note appended to his tantrum points out, is "generally against the platform's Terms of Service (ToS)." Hoyt's blue check mark on the platform, a symbol that used to show someone's identity had been verified, but which Musk degraded to something anyone can get for a monthly fee, with disastrous results, also vanished.

Musk's argument, that Hoyt is a mere pawn who has to "ask his boss for permission before he can do anything," takes some mental gymnastics to understand. "Who are these mysterious editors," Musk messaged Hoyt. Hoyt was happy to oblige, informing Musk of who his two editors were, which is easily obtainable and public information. "Interesting," Musk said, seemingly believing that he had exposed a conspiracy: that a popular YouTuber works with other people to edit and produce his popular videos.

Other X users were puzzled by Musk's lashing out. "So these DMs prove what exactly?" one user replied. "Obviously Asmon hired those editors and gave them agency over his YT because they know what they are doing, and it obviously works given how good his YT is running." "Elon, his editors work for him," another user chimed in. "Not the other way around."

The Reddit community was quick to point out Musk's glaring double standard. "Elon is so stupid," one user wrote. "Does Elon run the whole of Tesla, Space X, X, and everything else on his own?
No." "He thinks they are like newspaper editors that control the content and what he says," a different Redditor posited. "Oh god, it's just stupid enough to be true," another user replied.

Meanwhile, Musk took to X-formerly-Twitter to seemingly try to shut down the rumors. During a Wednesday stream, Musk claimed that the level 97 PoE 2 account was being controlled by a "Chinese driver" whose name is "Yilongma." "I rely on him for everything," he said during the stream. While it seems like Musk was doubling down on the claim that he was behind the top-level character by conjuring a Chinese name that sounds an awful lot like his own, we can't tell if he's joking or not. In other words, it's a classic ruse to wriggle out of controversy by piling on confusion, exactly the kind of thing a child would do post-tantrum.
  • THEHACKERNEWS.COM
    Researchers Find Exploit Allowing NTLMv1 Despite Active Directory Restrictions
Jan 16, 2025 | Ravie Lakshmanan | Active Directory / Vulnerability

Cybersecurity researchers have found that the Microsoft Active Directory Group Policy designed to disable NT LAN Manager (NTLM) v1 can be trivially bypassed by a misconfiguration. "A simple misconfiguration in on-premise applications can override the Group Policy, effectively negating the Group Policy designed to stop NTLMv1 authentications," Silverfort researcher Dor Segal said in a report shared with The Hacker News.

NTLM is a still widely used mechanism, particularly in Windows environments, to authenticate users across a network. The legacy protocol, while not removed due to backward-compatibility requirements, was deprecated as of mid-2024. Late last year, Microsoft officially removed NTLMv1 starting in Windows 11, version 24H2, and Windows Server 2025. While NTLMv2 introduces new mitigations that make relay attacks harder to perform, the technology has been besieged by several security weaknesses that threat actors have actively exploited to access sensitive data. In exploiting these flaws, the idea is to coerce a victim into authenticating to an arbitrary endpoint, or to relay the authentication information against a susceptible target and perform malicious actions on the victim's behalf.

"The Group Policy mechanism is Microsoft's solution to disable NTLMv1 across the network," Segal explained. "The LMCompatibilityLevel registry key prevents the Domain Controllers from evaluating NTLMv1 messages and returns a wrong password error (0xC000006A) when authenticating with NTLMv1." However, Silverfort's investigation found that it's possible to circumvent the Group Policy and still use NTLMv1 authentication by taking advantage of a setting in the Netlogon Remote Protocol (MS-NRPC). Specifically, it leverages a data structure called NETLOGON_LOGON_IDENTITY_INFO, which contains a field named ParameterControl that, in turn, has a configuration to "Allow NTLMv1 authentication (MS-NLMP) when only NTLMv2 (NTLM) is allowed."

"This research shows on-prem applications can be configured to enable NTLMv1, negating the highest level of the Group Policy LAN Manager authentication level set in Active Directory," Segal said. "Meaning, organizations think they are doing the right thing by setting this group policy, but it's still being bypassed by the misconfigured application."

To mitigate the risk posed by NTLMv1, it's essential to enable audit logs for all NTLM authentication in the domain and keep an eye out for vulnerable applications that request clients use NTLMv1 messages. It also goes without saying that organizations should keep their systems up to date.

The disclosure comes as HN Security researcher Alessandro Iandoli detailed how various security features in Windows 11 (prior to version 24H2) could be bypassed to achieve arbitrary code execution at the kernel level.
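As a starting point for the auditing advice above, here is a hedged sketch of hunting for NTLMv1 logons in exported Windows security logs. It assumes logon events (ID 4624) have been exported to a CSV; the column names below are assumptions about your export format, not a fixed Windows schema. On domain controllers with NTLM auditing enabled, NTLMv1 logons are identifiable by the authentication package name "NTLM V1" in those events:

import csv

def find_ntlmv1_logons(csv_path):
    """Return rows for 4624 logon events that negotiated NTLM V1."""
    hits = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Assumed columns: EventID, PackageName, TimeCreated,
            # WorkstationName, TargetUserName (adjust to your export).
            if row.get("EventID") == "4624" and row.get("PackageName", "").strip() == "NTLM V1":
                hits.append(row)
    return hits

for event in find_ntlmv1_logons("security_events.csv"):  # hypothetical export file
    print(event.get("TimeCreated"), event.get("WorkstationName"), event.get("TargetUserName"))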
  • WWW.INFORMATIONWEEK.COM
    What Does Biden's New Executive Order Mean for Cybersecurity?
Carrie Pallardy, Contributing Reporter | January 16, 2025 | 5 Min Read

President Joe Biden meets with White House staff in the Oval Office, 2022, to review remarks he will give at an executive order signing. (Official White House Photo by Adam Schultz; American Photo Archive via Alamy Stock Photo)

On Jan. 16, just days before leaving office, President Biden issued an executive order on improving the nation's cybersecurity. The extensive order comes on the heels of the breaches of the US Treasury and US telecommunications providers perpetrated by China state-sponsored threat actors. "Adversarial countries and criminals continue to conduct cyber campaigns targeting the United States and Americans, with the People's Republic of China presenting the most active and persistent cyber threat to United States Government, private sector, and critical infrastructure networks," the order states.

This new executive order, building on the one Biden issued in 2021, is extensive. It addresses issues ranging from third-party supply chain risks and AI to cybersecurity in space and the risks of quantum computers. Could this executive order shape the federal government's approach to cybersecurity? And how uncertain is its impact under the incoming Trump administration?

The Executive Order

The executive order outlines a broad set of initiatives to address nation-state threats, improve the defense of the nation's digital infrastructure, drive accountability for software and cloud providers, and promote innovation in cybersecurity. Like the 2021 executive order, the newly released order emphasizes the importance of collaboration with the private sector. "Since it's an executive order, it's mainly aimed at the federal government. It doesn't directly regulate the private sector," Jim Dempsey, managing director of the Cybersecurity Law Center at nonprofit International Association of Privacy Professionals (IAPP), tells InformationWeek. "It indirectly aims to impact private sector cybersecurity by using the government's procurement power."

For example, the order directs software vendors working with the federal government to submit machine-readable secure software development attestations through the Cybersecurity and Infrastructure Security Agency (CISA) Repository for Software Attestation and Artifacts (RSAA). If CISA finds that attestations are incomplete or artifacts are insufficient for validating the attestations, "the Director of CISA shall notify the software provider and the contracting agency," according to the order.

The order also calls for the development of guidelines on the secure management of cloud service providers' access tokens and cryptographic keys. In 2023, a China-backed threat actor stole a cryptographic key, which led to the breach of several government agencies' Outlook email systems, Wired reports. A stolen key was also behind the compromise of BeyondTrust that led to the recent US Treasury breach.

AI, unsurprisingly, doesn't go untouched by the order, which delves into establishing a program for leveraging AI models for cyber defense. The Biden administration also uses the executive order to call attention to cybersecurity threats that may loom larger in the future, pointing to the risks posed by quantum computers and space system cybersecurity concerns.

Biden's Cyber Legacy

The Biden administration made cybersecurity a priority.
In addition to the 2021 executive order on cybersecurity, the administration released a National Cybersecurity Strategy and an implementation plan in 2023. It also took sector-specific actions to bolster cybersecurity; for example, Biden issued an executive order focused on maritime cybersecurity.

Kevin Orr, president of RSA Federal at RSA Security, a network security company, saw a positive response to the Biden administration's efforts to improve cybersecurity within the government. "I was surprised at how many agencies have leaned in the last 18 months, especially within the intelligence community, have really adopted basic identity proofing, coming forward with multifactor authentication, and really strengthening their defenses," Orr shares.

While the Biden administration has worked to further cybersecurity, there are questions about the adoption of new policies and best practices. Some stakeholders call for more regulatory enforcement. "Much like any regulation, people are only going to follow it if there's some type of regulatory teeth to it," Joe Nicastro, field CTO at software security firm Legit Security, argues. Others argue that incentives are more likely to drive adoption of cybersecurity measures.

Cybersecurity is an ongoing national security concern, and the Biden administration is soon passing the torch. "I think this administration can leave extremely, extremely proud," says Dempsey. "Certainly, they are handing over the nation's cybersecurity to the incoming Trump administration in far better shape than it was four years ago."

A New Administration

While the order could mean big changes in the federal government's approach to cybersecurity, the timing makes its ultimate impact uncertain. Many of its directives for federal agencies have a long runway, months or years, for compliance. Will the Trump administration enforce the executive order? Cybersecurity has largely been painted as a bipartisan issue, and there has been some continuity between the first Trump administration and the Biden administration when it comes to cyber policy.

For example, the Justice Department recently issued a final rule on Biden's Executive Order 14117, "Preventing Access to Americans' Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern." That order charges the Justice Department with establishing a regulatory program to prevent the sale of Americans' sensitive data to China, Russia, Iran, and other foreign adversaries. That order and the subsequent rule stem from an executive order signed by Trump in 2019.

Biden's 2025 cybersecurity executive order puts a spotlight on cyber threats from China, and President-elect Trump has been vocal about his intention to crack down on those threats. But that does not preclude changes to or dismissal of provisions in Biden's final cybersecurity executive order. "There may be some things that the incoming administration will ignore or deprioritize. I'd be a little surprised if they repealed the order," says Dempsey.

CISA was a major player in the Biden administration's approach to cybersecurity, and it will continue to play a big role if this new executive order rolls out as outlined. But the federal agency has been criticized by several Republican lawmakers; some have called to limit its power or even shut it down, AP News reports. The incoming Trump administration is also expected to take a more hands-off approach to regulation in many areas.
Critical infrastructure is consistently at the heart of national cybersecurity conversations, and the majority of critical infrastructure is owned by the private sector. "In terms of new regulation aimed at the private sector, I think we probably will not see anything out of the Trump administration," Dempsey predicts.

Cybersecurity policy could look different under the Trump administration, but it is likely to remain at the forefront of national security discussions. "I'm hoping that the threat of what China is doing with their cybersecurity programs, and how they're facilitating attacks against BeyondTrust and the US Treasury, et cetera, will help continue the progress that we've made within cybersecurity," says Nicastro.

About the Author: Carrie Pallardy, Contributing Reporter. Carrie Pallardy is a freelance writer and editor living in Chicago. She writes and edits in a variety of industries including cybersecurity, healthcare, and personal finance.
  • SCREENCRUSH.COM
Where to Watch All of David Lynch's Movies Online
When David Lynch died this week at the age of 78, the world lost one of the greatest filmmakers in the history of the medium. But Lynch's filmography is not going anywhere. His ten feature films and one TV series (two TV series? One big multipart movie? Twin Peaks is impossible to define) are widely available at home, both on streaming and on demand. Most are also available in a variety of top-notch physical editions on Blu-ray, DVD, and 4K.

Lynch made movies for the big screen, of course, but his films play surprisingly well at home too. While his grandly bizarre visuals are best appreciated in a theater, his surreal ideas and complex themes always reward repeat viewings. In that regard, David Lynch is one of the great directors of our streaming video era. Mulholland Drive is different every single time you watch it. The same goes for Lost Highway or Eraserhead or even Dune. And nothing is stopping you from watching Twin Peaks on an endless loop (except perhaps a concerned spouse).

If you're looking to do a Lynch deep dive in honor of the late master's passing, here's where you can currently find all of his features, plus Twin Peaks. (Spoiler alert: If that's your intention, you better have either Max or the Criterion Channel, or a plan to sign up for one or both of them. That's where most of these titles live right now.)

Where to Watch David Lynch Movies: The late David Lynch made ten features, plus one of the greatest TV shows in history. Here's where you can find them at home.
  • WEWORKREMOTELY.COM
    HubSpot: Sales Manager - Mid Market (Australia)
What's the role?

As a Sales Manager, you will be responsible for hiring, training, coaching, and leading a team of new and established representatives in a fast-paced and rapidly changing environment. We are looking for someone who is passionate about transforming sales, service, and marketing, and who is a team builder and coach of sellers: someone who motivates not only individuals but a whole team toward a collective vision and fosters a team atmosphere.

What will I do?

- Attract, recruit, and retain top talent
- Support salespeople in all aspects of the sales process and hold team members accountable to the KPIs that drive business growth
- Demonstrate excellent time management and organizational skills
- Bring an analytical mindset and leverage data across all interactions
- Continue to build a track record of coaching reps for success
- Communicate and motivate effectively across multiple mediums, both externally and internally
- Support the business in cross-functional projects to drive organizational advancement

Who are you?

- At least 4-5 years of experience leading and coaching a quota-carrying team in Mid-Market segments or similar
- Experienced in successfully mentoring and/or leading others effectively
- Passionate about transforming sales, service, and marketing, with values that align with HubSpot's culture
- High emotional intelligence: you have genuine empathy for others, and you maximize your impact by understanding the motivations of your team and adapting your communication accordingly
- A positive change agent: you have a track record of leading and empowering groups toward driving improvement while navigating change and simultaneously winning, and you create a culture of transparency and focused improvement while having fun and fostering a strong team environment
- Committed to overachievement: you have a never-quit attitude and get buy-in to overachieve against targets regardless of the adversity being faced
- Data-driven: you leverage data, and communicate using it, to improve the core KPIs that matter to the individuals on our team and to help drive HubSpot's strategic plays
- Experienced managing, or being managed, in a structured sales environment; this could include managing via a sales methodology, a forecast methodology, or structured deal management by sales stage
- A person of good judgment, especially when tasked with difficult decisions
- Accountable: you communicate honestly, transparently, and authentically with your colleagues, regardless of personal ramifications
- A team builder who coaches reps: you motivate not only individuals but a team toward a collective vision and manifest a team atmosphere

What are the benefits?

- Generous remuneration + uncapped commissions + HubSpot RSUs
- Work-from-home options available: you choose Home, Office, or Flex
- World-class new-hire training
- ESPP, so that you can share in HubSpot's future success
- An education allowance of up to USD $5,000 per annum
- Unlimited time off policy
- Private health insurance allowance
- Free books program
- Annual fitness reimbursement
- Five-year sabbatical: a paid 4-week sabbatical
- A clear career advancement path, with potential for global mobility opportunities
- Primary Caregiver Leave of 16 weeks and Secondary Caregiver Leave of 6 weeks