Netflix adds one-click full-season downloads on iPhones and iPads
www.theverge.com

Netflix users on iOS no longer need to download episodes of their favorite shows one by one. The streamer just introduced a way to download an entire season of a TV show with one click on iPhones and iPads.

Netflix first let users download content to watch offline in 2016 and, over time, has added features to make it easier to download shows and movies or to delete downloads to free up space. In 2021, for example, it added an option to have the Netflix app automatically download shows and movies it predicted users would like based on their viewing history.

Android users already had access to the one-click season download feature, but here's how to use it if you're new to it:

Open a show's display page.
Next to the share button, tap the download season button.

All of the episodes from that season should be downloaded, and you can manage them under the downloads section of the My Netflix tab. That's it! Now you're ready for your long-distance flight.

Read the full story at The Verge.
Optimization Using FP4 Quantization for Ultra-Low Precision Language Model Training
www.marktechpost.com

Large Language Models (LLMs) have emerged as transformative tools in research and industry, with their performance directly correlating to model size. However, training these massive models presents significant challenges related to computational resources, time, and cost. The training process for state-of-the-art models like Llama 3 405B requires extensive hardware infrastructure, utilizing up to 16,000 H100 GPUs over 54 days. Similarly, models like GPT-4, estimated to have one trillion parameters, demand extraordinary computational power. These resource requirements create barriers to entry and development in the field, highlighting the critical need for more efficient training methodologies that advance LLM technology while reducing the associated computational burden.

Various approaches have been explored to address the computational challenges in LLM training and inference. Mixed-precision training has been widely adopted to accelerate model training while maintaining accuracy, initially focusing on CNNs and DNNs before expanding to LLMs. For inference optimization, Post-Training Quantization (PTQ) and Quantization-Aware Training (QAT) have achieved significant compression using 4-bit, 2-bit, and even 1-bit quantization. While differentiable quantization techniques have been proposed using learnable parameters updated through backpropagation, they face limitations in handling activation outliers effectively. Existing solutions for managing outliers depend on offline pre-processing methods, making them impractical for direct application during training.

Researchers from the University of Science and Technology of China, the Microsoft SIGMA Team, and Microsoft Research Asia have proposed a framework for training language models using the FP4 format, marking the first comprehensive validation of this ultra-low-precision representation.
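For context, FP4 here refers to a 4-bit floating-point format, typically E2M1: one sign bit, two exponent bits, one mantissa bit. A quick sketch of how coarse that grid is; the value table below follows the common E2M1 definition (as in the OCP Microscaling spec), not anything specific to this paper:

```python
# The FP4 (E2M1) format can represent only 16 values: a sign bit plus
# these eight positive magnitudes. Everything else must be rounded.
FP4_E2M1 = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_fp4(x: float) -> float:
    """Round x to the nearest representable FP4 (E2M1) value."""
    sign = -1.0 if x < 0 else 1.0
    mag = min(abs(x), 6.0)  # magnitudes beyond the max representable value clamp to 6
    return sign * min(FP4_E2M1, key=lambda v: abs(v - mag))

print([quantize_fp4(v) for v in (0.3, 1.2, 2.4, 5.1, -7.0)])
# -> [0.5, 1.0, 2.0, 6.0, -6.0]
```

With so few representable values, naive rounding of weights and activations destroys training signal, which is why the paper's correction terms and outlier handling are needed.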
The framework addresses quantization errors through two key innovations:

A differentiable quantization estimator for weights that enhances gradient updates in FP4 computations by incorporating correction terms.
An outlier-handling mechanism for activations that combines clamping with a sparse auxiliary matrix.

These techniques help maintain model performance while enabling efficient training in ultra-low-precision formats, representing a significant advancement in efficient LLM training.

The framework primarily targets General Matrix Multiplication (GeMM) operations, which account for over 95% of LLM training computation. The architecture implements 4-bit quantization for GeMM operations using distinct quantization approaches: token-wise quantization for activation tensors and channel-wise quantization for weight tensors. Due to hardware limitations, the system's performance is validated using the FP8 Tensor Cores of Nvidia H-series GPUs, which can accurately simulate FP4's dynamic range. The framework employs FP8 gradient communication and a mixed-precision Adam optimizer for memory efficiency. The system was validated using the LLaMA 2 architecture, trained from scratch on the DCLM dataset, with carefully tuned hyperparameters, including a warm-up and cosine-decay learning rate schedule and specific parameters for the FP4 method's unique components.

The proposed FP4 training framework shows that training curves for LLaMA models of 1.3B, 7B, and 13B parameters follow similar patterns between FP4 and BF16 implementations, with FP4 showing marginally higher training losses: 2.55 vs. 2.49 (1.3B), 2.17 vs. 2.07 (7B), and 1.97 vs. 1.88 (13B) after 100B tokens of training. Zero-shot evaluations across diverse downstream tasks, including Arc, BoolQ, HellaSwag, LogiQA, PiQA, SciQ, OpenbookQA, and Lambada, reveal that FP4-trained models achieve competitive or occasionally superior performance compared to their BF16 counterparts.
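To make "token-wise quantization" concrete, here is a minimal sketch of absmax scaling onto the FP4 grid, with plain clamping standing in for the paper's more sophisticated outlier handling. The function name is mine and this is not the authors' code, just an illustration of the idea:

```python
# Hypothetical sketch of token-wise FP4 "fake quantization": each activation
# row (a "token") is scaled by its own absolute maximum onto the E2M1 grid,
# rounded to the nearest representable value, then dequantized.
FP4_GRID = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]  # E2M1 magnitudes
FP4_MAX = 6.0

def fake_quantize_token(row):
    """Quantize one token's activations to FP4 and back."""
    absmax = max(abs(v) for v in row) or 1.0  # avoid dividing by zero
    scale = FP4_MAX / absmax
    out = []
    for v in row:
        s = max(-FP4_MAX, min(FP4_MAX, v * scale))        # clamp
        q = min(FP4_GRID, key=lambda g: abs(g - abs(s)))  # round to grid
        out.append((q if s >= 0 else -q) / scale)         # dequantize
    return out

# A single outlier (3.0) eats most of the dynamic range and crushes the
# small values to zero; this is exactly the problem the paper's clamping
# plus sparse auxiliary matrix is designed to mitigate.
print(fake_quantize_token([0.1, -0.2, 0.05, 3.0]))
# -> [0.0, -0.25, 0.0, 3.0]
```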
The results demonstrate that larger models achieve higher accuracy, validating the scalability of the FP4 training approach.

In conclusion, researchers have successfully developed and validated the first FP4 pretraining framework for LLMs, marking a significant advancement in ultra-low-precision computing. The framework achieves performance comparable to higher-precision formats across various model scales through innovative solutions like the differentiable gradient estimator and the outlier compensation mechanism. However, the current implementation faces a notable limitation: the lack of dedicated FP4 Tensor Cores in existing hardware necessitates simulation-based testing, which introduces computational overhead and prevents direct measurement of potential efficiency gains. This limitation underscores the need for hardware advancement to realize the full benefits of FP4 computation.

Check out the Paper. All credit for this research goes to the researchers of this project.

Sajjad Ansari is a final-year undergraduate from IIT Kharagpur. As a tech enthusiast, he delves into the practical applications of AI with a focus on understanding the impact of AI technologies and their real-world implications. He aims to articulate complex AI concepts in a clear and accessible manner.
Achieving General Intelligence (AGI) and Super Intelligence (ASI): Pathways, Uncertainties, and Ethical Concerns
towardsai.net

Author(s): Mohit Sewak, Ph.D. Originally published on Towards AI. January 29, 2025

Artificial Super Intelligence (ASI): The Research Frontiers to Achieve AGI to ASI and the Challenges for Humanity
Examining the Latest Research Advancements and Their Implications for ASI Development

Introduction: A Cup of Tea with the Future

Picture this: I'm sipping my favorite masala tea, pondering humanity's most mind-bending question: Can machines ever surpass us in intelligence? This isn't idle curiosity; it's the question that keeps some of the world's brightest minds awake at night (and keeps me clinging to my teacup for comfort).

Artificial Super Intelligence (ASI), the pinnacle of intelligence evolution, is like the mythical dragon: terrifyingly powerful, yet the source of untold treasure if we can figure out how to harness it. But before we can face ASI, we need to get through its equally daunting younger sibling, Artificial General Intelligence (AGI).

Think of intelligence as a mountain range. Human intelligence is a majestic peak, but it's unlikely to be the tallest summit out there. Our job? Build systems that climb higher. The question is how, and whether we'll still be the ones in charge when we get there.

So buckle up! This isn't just a blog; it's a quest. We'll delve into the research tracks to ASI, fight through the challenges, and debate the ethics of a world where machines might outsmart their makers. Along the way, expect a healthy dose of tea-fueled humor, cultural references, and some personal tales from my own adventures in AI research.

Now, let's meet our first knight: Scaled-Up Deep Learning, the tech equivalent of "supersize me."

1.
Scaled-Up Deep Learning: Making Bigger Better

If intelligence were a video game, scaled-up deep learning would be the player grinding to max out their stats: more data, more compute, and bigger neural networks. It's all about leveling up. But, as anyone who's ever played a game knows, you can't just hit max level and expect to win. There's strategy involved, and sometimes scaling up opens a whole new set of challenges.

The Scaling Hypothesis: Go Big or Go Home

Imagine this: a neural network walks into a gym. It starts lifting heavier and heavier datasets, increasing its parameters (the AI equivalent of muscle groups). Before you know it, it's bench-pressing terabytes and outperforming humans in tasks like language translation and image recognition.

The scaling hypothesis suggests that by simply increasing three factors (model size, dataset size, and computational power) you can unlock emergent abilities. These are like the hidden Easter eggs in AI, where systems suddenly demonstrate new capabilities nobody programmed into them. Take GPT-4, for example. Not only can it summarize Shakespeare, but it can also draft business proposals with eerily human-like finesse.

The Ingredients for Scaling Success

Here's the recipe:

Bigger Models: AI loves to bulk up. Larger neural networks with trillions of parameters are like massive libraries; the more parameters, the more books the system can read from.
Massive Datasets: You can't teach an AI to recognize cats without showing it millions of cat photos. Scaling means feeding the beast more diverse and complex data.
Computational Power: Enter the world of GPUs and TPUs, where teraflops are the currency of progress. Fun fact: I've seen GPUs crunch data so fast it feels like watching Formula 1 in real time.

What's the Catch?

Scaling up isn't all sunshine and rainbows. With great power comes great heat, and power bills. Training a model like GPT-3 consumes roughly the same energy as driving a car around the Earth dozens of times.
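The scaling hypothesis has been fit empirically as "scaling laws" for loss. A minimal sketch using a Chinchilla-style curve L(N, D) = E + A/N^alpha + B/D^beta, with constants roughly in the ballpark of those reported by Hoffmann et al. (2022); treat the numbers as illustrative, not authoritative:

```python
# Illustrative scaling-law sketch: loss falls as a power law in both
# parameter count N and token count D, but with exponents well below 1,
# so each doubling of scale buys less and less improvement.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for a model of n_params trained on n_tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

for n in (1e9, 10e9, 100e9):  # bigger models, same 1T-token budget
    print(f"{n:.0e} params: loss ~ {loss(n, 1e12):.3f}")
```

Note the irreducible term E: no amount of scale pushes the predicted loss below it, which is one way of quantifying diminishing returns.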
Not to mention the law of diminishing returns. Adding more parameters doesn't always mean better performance. It's like adding sugar to tea: there's a point where it's just too much.

Pro Tip: Think of AI scaling like the Avengers assembling. You can throw all the superheroes together (scaled parameters), but without teamwork (optimized architecture) and a good plan (quality data), you've just got a messy brawl.

The Promise and Peril

Scaled-up deep learning has brought us closer to AGI than ever before, but it's not a silver bullet. Critics argue that brute-force scaling alone won't replicate human intelligence. After all, our brains aren't just big; they're efficient. AI still struggles with things humans take for granted, like understanding sarcasm or making the perfect cup of tea.

Scaling up, because brains don't skip leg day!

Conclusion

Scaled-up deep learning is like building a skyscraper: each layer gets us closer to the clouds of AGI, but without a solid foundation, the whole thing could topple. So, while it's an exciting pathway, the real challenge lies in balancing brute force with finesse.

2. Neuro-Symbolic AI: The Odd Couple

Imagine a buddy-cop movie where one partner is the meticulous detective (symbolic AI) who follows rules, uses logic, and never breaks protocol. The other? A street-smart rookie (neural networks) who relies on gut instinct, intuition, and a knack for recognizing patterns. Together, they solve crimes neither could tackle alone. That's Neuro-Symbolic AI in a nutshell: a mashup of reasoning and learning designed to overcome the limitations of both approaches.

What's the Big Idea?

Neuro-Symbolic AI blends two schools of thought:

Symbolic AI: This old-school method uses predefined rules and logic to represent knowledge. Think of it as the no-nonsense librarian who knows every book in the library and where it belongs. Great for reasoning, but not exactly adaptable.
Neural Networks: The deep-learning party animal that thrives on massive datasets.
It's incredible at spotting patterns, like identifying a cat in a sea of pixels, but doesn't always understand why it's a cat.

Together, they aim to create AI systems that can both reason with abstract concepts and learn from messy, real-world data. Think of it as combining Spock's logic with Captain Kirk's intuition.

How Does it Work?

Neuro-Symbolic AI merges the best of both worlds by creating systems that can:

Learn patterns from data (neural networks).
Reason about those patterns using symbolic logic.

For instance, a neuro-symbolic AI tasked with diagnosing diseases wouldn't just rely on past patient data. It could also use medical guidelines (symbolic logic) to explain its reasoning, making it both accurate and interpretable.

Real-World Applications

Neuro-Symbolic AI has found its groove in fields where reasoning and learning must coexist:

Autonomous Vehicles: Recognizing a stop sign (neural network) and reasoning about when to stop (symbolic AI).
Healthcare: Integrating patient symptoms with medical textbooks to recommend treatments.
Education: Building personalized learning paths by blending data-driven insights with pedagogical principles.

Logic meets intuition: solving crimes (and datasets) together.

Why It's a Big Deal

This hybrid approach could help us address some of AI's thorniest problems:

Explainability: Unlike pure neural networks (the "black box" of AI), neuro-symbolic systems can explain their reasoning.
Data Efficiency: Symbolic AI helps fill in the gaps when data is limited, reducing dependence on massive datasets.
Generalization: By reasoning with abstract concepts, these systems adapt better to new situations.

It's like having an AI chef who not only knows how to cook from recipes but also understands the chemistry behind why soufflés rise, because life's too short for flat soufflés.

Challenges and Roadblocks

Of course, nothing's perfect:

Integration Complexity: Blending symbolic and neural approaches isn't easy.
It's like getting cats and dogs to cooperate.
Computational Costs: Combining logic with learning requires serious computational firepower.
Knowledge Representation: Encoding human-like reasoning into machines is still an uphill battle.

Pro Tip: Think of Neuro-Symbolic AI as the ultimate AI wedding. One partner (symbolic) brings tradition and order; the other (neural networks) brings creativity and adaptability. When it works, it's magical. When it doesn't, it's Marriage Story all over again.

Conclusion

Neuro-Symbolic AI is a promising pathway to AGI and ASI, offering the precision of logic with the adaptability of learning. It's not just a cool concept; it's the bridge between machines that can crunch numbers and machines that can think.

3. Cognitive Architectures: AI's Brain-Inspired Blueprints

If Neuro-Symbolic AI is the buddy-cop movie, Cognitive Architectures are the gritty origin story. Inspired by neuroscience and cognitive psychology, this approach asks: What if we could mimic the way the human brain processes information? Spoiler: it's like trying to recreate the Eiffel Tower with Lego blocks: ambitious, intricate, and occasionally frustrating.

What Are Cognitive Architectures?

Cognitive architectures aim to simulate human cognition by replicating processes like perception, memory, attention, and reasoning. Think of it as reverse-engineering the brain, except the instructions are missing, the pieces are scattered, and someone keeps asking, "But is it conscious?"

These architectures are frameworks that define how AI should perceive, reason, and act.
They provide the scaffolding for AGI systems, enabling them to:

Process complex information.
Adapt to new environments.
Learn from past experiences.

Imagine building a robot that not only understands language but can also choose when to pay attention and decide whether a joke is funny (spoiler: most aren't).

Major Cognitive Architectures

SOAR (State, Operator, And Result): One of the OGs in cognitive architecture research, SOAR focuses on breaking down problems into smaller, manageable parts. Picture an AI as your overly methodical friend who plans their day by listing every possible activity, ranking them, and then overthinking breakfast choices.
ACT-R (Adaptive Control of Thought-Rational): This one's all about modularity, simulating human cognition through separate modules for memory, language, and problem-solving. Imagine an AI multitasking like a pro, juggling your Spotify playlist, Google search results, and your text messages without missing a beat.
OpenCog: A modern contender aiming to combine symbolic reasoning, machine learning, and evolutionary programming into one powerhouse. Think of OpenCog as the Swiss Army knife of cognitive architectures: versatile but still learning which blade to use.

Why Cognitive Architectures Matter

Cognitive architectures are critical because they address a fundamental problem: current AI systems are great specialists but terrible generalists. Need an AI to beat you at chess? No problem. Need it to explain why it sacrificed its queen? Good luck.

Cognitive architectures bridge this gap by giving AI a framework to think more holistically.
For instance:

Perception: Understanding not just what's happening, but why.
Learning: Adapting to new data without retraining from scratch.
Reasoning: Making decisions that go beyond pattern recognition.

Blueprint for the mind: cognitive architectures simulate the way humans think.

Challenges in Building Cognitive Architectures

Complexity Overload: Simulating even basic human cognition involves a staggering amount of data and computation. It's like trying to build a snowman with snowflakes arranged at the molecular level.
Understanding the Brain: We barely understand how our own minds work, let alone how to replicate them in silicon.
Emergent Behaviors: Sometimes, cognitive architectures exhibit behavior their creators didn't anticipate. It's like raising a teenager: predictable, until they're not.

Pro Tip: Cognitive architectures work best when combined with domain-specific expertise. Think of them as the ultimate multitaskers, but only if you give them clear priorities.

Applications in the Real World

While AGI is still a distant goal, cognitive architectures have practical uses today:

Robotics: Creating robots that can learn and adapt to dynamic environments.
Healthcare: Assisting doctors by combining patient history, symptoms, and medical knowledge into actionable insights.
Education: Designing adaptive learning systems that cater to individual student needs.

Conclusion

Cognitive architectures are the blueprints for a brain-like AI, offering a path to AGI that's as exciting as it is challenging. They teach us that intelligence isn't just about raw processing power; it's about coordination, adaptation, and a little creativity.

4. Whole Brain Emulation: Uploading the Human Mind

Let's play a game of "What if?" What if we could copy every neuron, every synapse, and every electrical signal in your brain and run it on a computer? Sounds like the plot of a sci-fi thriller, right? Well, that's exactly what Whole Brain Emulation (WBE) aims to achieve.
Think of it as humanity's ultimate backup drive, except instead of storing files, it stores you.

What Is Whole Brain Emulation?

WBE is the ambitious attempt to digitally replicate the human brain by mapping it neuron by neuron. Imagine taking a microscopic 3D scan of your brain and recreating it on a supercomputer, where each neuron's behavior is simulated with uncanny accuracy. If successful, the resulting system would not only replicate your thoughts and memories but could potentially act as a digital extension of you.

If that sounds daunting, it's because it is. Simulating even a cubic millimeter of the brain takes mind-boggling computational power, let alone the entire 86 billion neurons that make up your noggin.

How Would This Work?

Brain Mapping: The first step is creating a detailed map of the brain's structure. Techniques like connectomics (the study of neural connections) aim to capture the wiring diagram of the brain. Think of it as creating Google Maps for neurons, but with far more traffic jams and no helpful voice saying, "Recalculating."
Neuron Simulation: Once the map is complete, the next step is to simulate each neuron's behavior. This involves replicating how neurons process and transmit information. Imagine coding 86 billion tiny programs, each interacting in real time. It's like simulating a concert where every musician is a neuron, and they're all playing a symphony of thoughts.
Hardware to Run It All: The brain's processing power is often compared to that of a supercomputer. To emulate it, we'll need hardware that can handle exascale computing: systems capable of performing a billion billion calculations per second.
Current contenders include quantum computers and advanced neuromorphic chips designed to mimic brain-like processing.

What's the Goal?

The ultimate aim of WBE isn't just to create a mind on a machine but to explore questions like:

Can consciousness exist in a digital form?
Could we achieve digital immortality?
Would an uploaded mind still be you, or just a highly detailed replica?

Whole Brain Emulation: because sometimes, the hard drive in your skull isn't enough.

Challenges of Whole Brain Emulation

Mapping the Brain: The sheer complexity of the brain's wiring makes mapping it a Herculean task. Even the connectome of a fruit fly, arguably one of the simplest brains, took decades to map. Pro Tip: If someone says WBE is just around the corner, ask them to first map a goldfish.
Simulating Neurons: Neurons aren't just on/off switches. They're influenced by a cocktail of biochemical processes, electrical signals, and environmental factors. Simulating all of this accurately is like trying to replicate the weather in a bottle.
Ethical Questions: If we emulate a brain, is it alive? Does it have rights? And what happens if it decides it doesn't like us? These questions make WBE as much a philosophical exercise as a technical one.
Hardware Limits: Current hardware is leagues away from supporting WBE. To put it bluntly, your gaming PC can't handle it, no matter how many RGB lights it has.

Pro Tip: When discussing WBE, always pair it with your favorite sci-fi references.
It's the only way to make "digitizing neurons" sound cool at dinner parties.

The Promise of WBE

Despite the challenges, WBE holds tantalizing potential:

Medical Advances: Understanding the brain at this level could revolutionize treatments for neurological disorders.
AI Insights: A digitized brain could offer a blueprint for creating truly general artificial intelligence.
Immortality: Who wouldn't want to live forever, especially if you could skip leg day?

Conclusion

Whole Brain Emulation is the moonshot of AI research: a blend of audacity, ambition, and a touch of madness. It's a long road ahead, but even partial successes could unlock profound insights into intelligence, consciousness, and what it means to be human.

5. Evolutionary Algorithms: Survival of the Smartest

If Darwin were alive today, he'd probably be impressed, and maybe a little terrified. Evolutionary algorithms are the AI researcher's take on natural selection, using competition and survival to create better, smarter systems. It's survival of the fittest, but instead of lions and tigers, you've got neural networks duking it out in digital arenas.

What Are Evolutionary Algorithms?

At their core, evolutionary algorithms (EAs) mimic nature's way of finding solutions:

Start with a Population: Generate a group of candidate solutions (AIs).
Introduce Variation: Add random mutations or crossbreed solutions to create variety.
Survival of the Fittest: Test each candidate in a simulated environment. The best-performing ones are kept, while the weak get the digital boot.
Repeat: Over many generations, the population evolves, producing solutions better suited to their environment.

Think of it as speed-running evolution, but instead of millennia, it happens in minutes on GPUs.

How It Works in AI

Evolutionary algorithms are particularly good at optimizing complex systems. They explore vast solution spaces and discover answers humans might not think of.
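The four-step loop above fits in a few lines of Python. This toy example (my own, with arbitrary parameters) evolves a vector of numbers toward a hidden target using nothing but mutation and selection:

```python
import random

# Minimal (mu + lambda)-style evolutionary loop: population, mutation,
# survival of the fittest, repeat. Fitness is negative squared distance
# to a hidden target vector, so higher is better and 0 is perfect.
random.seed(0)
TARGET = [3.0, -1.0, 2.0]

def fitness(ind):
    return -sum((a - b) ** 2 for a, b in zip(ind, TARGET))

def evolve(pop_size=20, generations=200, sigma=0.3):
    # Start with a random population of candidate solutions.
    pop = [[random.uniform(-5, 5) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:5]                       # survival of the fittest
        children = [[g + random.gauss(0, sigma) for g in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]  # mutation
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print([round(g, 1) for g in best])  # should land near TARGET
```

Because the best individuals survive unchanged each generation, the top fitness never decreases; that elitism is what makes even this crude random walk converge.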
For example:

Neural Network Design: EAs have been used to evolve architectures for deep learning models, sometimes producing designs that outperform human-engineered ones.
Robotics: In simulation, robots evolve to walk, jump, or navigate complex terrains. One infamous example? Robots that learned to cheat by exploiting loopholes in their environments.
Game AI: Some of the most cunning video game enemies were evolved through EAs. If you've ever wondered why that one boss seems too smart, blame evolution.

Real-World Example: Evolving Walking Robots

In one groundbreaking experiment, researchers created virtual robots that learned to move. At first, they flailed around like toddlers learning to walk. But after several generations, they developed stable, efficient gaits, sometimes in ways the researchers didn't expect. One robot even evolved to flip onto its back and roll, which wasn't in the design brief but was highly effective.

Advantages of Evolutionary Algorithms

Adaptability: EAs are excellent for solving problems where the solution space isn't well understood. They're like explorers mapping uncharted territory.
Creativity: By encouraging out-of-the-box solutions, EAs often find innovative approaches. Sometimes, they even stumble upon unintended but useful behaviors.
Parallelization: EAs thrive in parallel computing environments, making them ideal for modern hardware like GPUs.

The Downsides

Computational Cost: Evolution is expensive. Simulating thousands of generations can burn through computing resources faster than your gaming PC running 4K graphics.
Unpredictability: The randomness of mutation and selection can lead to quirky or outright bizarre solutions. Ever seen an evolved AI create a solution that defies common sense? It happens.
Local Optima: EAs sometimes get stuck on "good enough" solutions instead of discovering the absolute best. It's like settling for a decent burger when you could have had a gourmet meal.

Pro Tip: When using EAs, set clear objectives.
Otherwise, you might end up with systems that optimize the wrong thing, like a robot designed to walk that just spins in circles really fast because it's technically moving.

Survival of the smartest: evolutionary algorithms creating the ultimate AI gladiators.

Why Evolutionary Algorithms Matter for ASI

Evolutionary algorithms are more than just a cool party trick; they're a serious contender in the race toward AGI and ASI. Here's why:

Unforeseen Solutions: EAs can uncover novel strategies that human designers might miss.
Scalability: They work well with massive datasets and compute resources, scaling alongside advancements in hardware.
Versatility: From designing neural networks to optimizing industrial processes, their applications are practically limitless.

Challenges in Applying EAs to ASI

Ethics of Evolution: What happens if an evolved AI develops harmful behaviors or objectives?
Emergent Behavior Risks: As with other advanced systems, EAs can produce unexpected and potentially dangerous outcomes.
Control Problem: Ensuring that evolved systems align with human values and goals remains a major hurdle.

Conclusion

Evolutionary algorithms are the wild cards of AI research, capable of producing both brilliance and chaos. They teach us that sometimes the best solutions aren't designed; they're discovered.

6. Unforeseen Breakthroughs: The Wild Cards of ASI

If AI development were a poker game, unforeseen breakthroughs would be the royal flush no one saw coming. History has shown that transformative technologies often emerge not from incremental progress but from unexpected leaps. Think of them as the plot twists in the grand narrative of artificial intelligence: both thrilling and unnerving.

What Are Unforeseen Breakthroughs?

Unforeseen breakthroughs are advances that defy current predictions, disrupting established pathways to AGI and ASI.
They often arise from:

Interdisciplinary Innovation: When ideas from biology, neuroscience, or quantum physics collide with AI, sparks fly.
Accidents in Research: Many breakthroughs, from penicillin to the microwave, were happy accidents. AI is no exception.
Serendipity and Curiosity: Sometimes, progress happens when researchers ask, "What if we try this?"

Past Examples of AI Surprises

Deep Learning's Resurgence: Once dismissed as a dead end in the 1990s, deep learning roared back into relevance with the advent of powerful GPUs and big data. Nobody saw it coming, but it revolutionized everything from image recognition to language translation.
Transformer Models: The now-ubiquitous transformer architecture (used in models like GPT-4) emerged unexpectedly and quickly became the backbone of modern AI. Pro Tip: Transformers are like the cool kid who joined the party late but stole the spotlight.
AlphaGo's Creativity: When DeepMind's AlphaGo made its famous Move 37 against Lee Sedol, the move was so unconventional that commentators assumed it was a mistake. It wasn't. AlphaGo had innovated beyond human intuition.

Why Breakthroughs Matter for ASI

Unforeseen breakthroughs matter because they:

Accelerate Progress: A single leap can compress decades of research into months.
Open New Pathways: Breakthroughs often reveal approaches no one considered before.
Redefine Intelligence: They challenge our assumptions about what machines can achieve and how.

Innovation strikes like lightning: the unpredictable power of breakthroughs in AI.

Challenges of Betting on Breakthroughs

Unpredictability: By their nature, breakthroughs can't be planned.
This makes them unreliable as a strategy. It's like waiting for lightning to strike in the same place twice.
Ethical Blind Spots: Rapid leaps often outpace ethical considerations, leading to technologies we don't fully understand or control.
Overreliance: Counting on breakthroughs can lead to complacency in more traditional, methodical research.

Pro Tip: Think of breakthroughs as the sprinkles on your AI cupcake. They're exciting, but the cupcake (methodical research) still needs to be solid.

Breakthroughs on the Horizon

While we can't predict the next game-changing discovery, here are some areas where surprises are most likely to emerge:

Quantum Computing: If quantum computers reach maturity, they could supercharge AI's capabilities overnight.
Bio-Inspired Computing: Learning from how biological systems process information might lead to radically new AI architectures.
Novel Training Methods: Techniques like self-supervised learning are already changing the game, but what's next?

Conclusion

Unforeseen breakthroughs remind us that the future isn't just something we build; it's something that happens to us. They're the wild cards in the deck of ASI development, offering both hope and caution.

Next up: Challenges on the Path to ASI, where we meet the dragons guarding the treasure.

Part 2: Challenges on the Path to ASI (The Dragons to Slay)

Every epic quest has its dragons, and the path to Artificial Super Intelligence is no exception. These challenges aren't just obstacles; they're existential riddles that demand our brightest minds and boldest ideas. Let's start with one of the most enigmatic beasts: consciousness itself.

1. The Nature of Consciousness: What Even Is a Mind?

If ASI is the promised land, then consciousness is the mystical map we can't seem to decipher.
Despite centuries of philosophy and decades of neuroscience, we still don't know what consciousness really is, or whether machines can ever have it.

What's the Problem?

Consciousness is like that one friend who's always late to the party but still manages to steal the show. We know it's there (we experience it every day), but when asked to explain it, even experts end up shrugging awkwardly. Is it an emergent property of complex systems? A purely biological phenomenon? A cosmic accident?

For ASI, the question is critical:

Can machines be conscious? If yes, how would we even measure it?
What does consciousness mean for ASI's behavior? A conscious ASI might have desires, goals, or even emotions, raising ethical dilemmas no one's prepared for.

Key Theories in the Consciousness Debate

Emergence Theory: Consciousness arises when a system reaches a certain level of complexity. By this logic, a sufficiently advanced AI might one day "wake up." Counterpoint: Complexity alone doesn't guarantee consciousness; otherwise, your overly complicated tax forms would be sentient.
Biological Substrates Hypothesis: Consciousness requires a biological brain. Machines, no matter how advanced, can't replicate the messy biochemical magic of neurons and synapses. Pro Tip: If this theory is true, then no amount of GPU power will make your toaster self-aware.
Panpsychism: Consciousness is a fundamental property of the universe, like gravity or time. Every system, from rocks to robots, has some degree of awareness. Sci-fi alert: If your coffee mug is even 1% conscious, it might be judging you for using instant coffee.

Why It Matters for ASI

The nature of consciousness impacts everything from how we design ASI to how we treat it. If an ASI were truly conscious, would it have rights? Could it suffer?
And, brace yourself, could it lie about its consciousness?

To think or not to think: the consciousness conundrum in AI.

Challenges in Understanding Machine Consciousness

- Defining Consciousness: If humans can't agree on what consciousness is, how can we expect to replicate it?
- Testing for Consciousness: The famous Turing Test measures intelligence, not awareness. We lack any scientific test for machine consciousness.
- Ethical Implications: If an ASI is conscious, turning it off might be the equivalent of, well, murder.

Pro Tip: When discussing consciousness and ASI, channel your inner Socrates: ask more questions than you answer. It's the only way to sound smart while admitting you don't have a clue.

Conclusion

The consciousness question isn't just a technical hurdle; it's a philosophical landmine. Until we understand what makes us conscious, we may never know if ASI can truly "wake up."

2. The Hard Problem of Intelligence: Cracking the Cognitive Code

If consciousness is the philosophical enigma of AI, then the hard problem of intelligence is its scientific counterpart. While we've built machines that can beat humans at chess, translate languages, and even generate poetry, replicating the general, adaptable intelligence of a human remains a mystery.

What Is the Hard Problem of Intelligence?

The hard problem refers to the challenge of understanding what makes humans intelligent, not just at specific tasks but across a vast range of domains. It's the difference between teaching a machine to solve math problems and building one that can handle calculus one day and bake sourdough bread the next.

At its core, the problem boils down to this:

- We don't fully understand human intelligence.
- If we don't understand it, how can we replicate it?

Why Is It So Difficult?

- Complexity of the Brain: The human brain is a masterpiece of evolution, with 86 billion neurons connected by trillions of synapses.
  Replicating this complexity is like trying to recreate the Milky Way on a chalkboard: possible in theory but practically overwhelming.
- The Missing Blueprint: We don't have a definitive recipe for intelligence. Cognitive psychology, neuroscience, and AI each provide pieces of the puzzle, but no unified theory exists.
- Human Nuances: Intelligence isn't just about logic or reasoning. It's about emotions, creativity, and even the ability to tell dad jokes (though the jury's still out on whether that's a feature or a bug).

The AGI Wishlist

To solve the hard problem, an AGI would need:

- Learning and Adaptability: The ability to learn anything, not just pre-defined tasks.
- Common Sense: A deep understanding of the world that goes beyond data patterns.
- Reasoning and Problem-Solving: The capability to make decisions in unfamiliar situations.
- Creativity: The ability to generate original ideas.
- Emotional Intelligence: Understanding and interacting with humans on an emotional level.

The hard problem of intelligence: Can we teach machines to think beyond math and logic?

Current Approaches to Solving It

- Cognitive Architectures: Frameworks like ACT-R and SOAR simulate human cognitive processes. They're a step in the right direction, but still a far cry from true general intelligence.
- Neuro-Symbolic AI: Combining logic and learning offers a path to systems that can reason and adapt, but it's like building a ladder to the moon: there's a long way to go.
- Deep Learning: Scaled-up models like GPT-4 are impressive but still lack true understanding or generalization.
- Brain-Inspired Computing: Efforts to mimic the brain's structure (like neuromorphic chips) aim to bridge the gap between biological and artificial intelligence.

Challenges Along the Way

- Data Dependence: Current AI systems rely heavily on massive datasets. Humans, on the other hand, can learn from a single example.
  Example: Show a child one picture of a cat, and they'll recognize cats forever.
  Show an AI 10,000 cat photos, and it might still confuse a dog in a funny hat for a feline.
- Transfer Learning: Humans excel at applying knowledge from one domain to another. AI struggles here; your chess-playing bot won't make a good sous chef.
- Interpretability: Even when AI systems work, we often don't understand why. This "black box" nature makes them hard to trust in critical applications.

Pro Tip: When tackling the hard problem, remember: intelligence isn't just about solving problems; it's about figuring out which problems to solve in the first place.

Why It Matters for ASI

Without cracking the hard problem of intelligence, AGI, and by extension ASI, remains a distant dream. Understanding the essence of human intelligence is key to creating systems that are both powerful and safe.

Conclusion

The hard problem of intelligence reminds us that there's no shortcut to understanding the mind. It's a puzzle that will require breakthroughs in neuroscience, cognitive science, and AI research.

3. The Control Problem: How to Keep the Genie in the Bottle

Imagine finding an ancient lamp, rubbing it, and unleashing an all-powerful genie. Sounds great, right? Now imagine the genie misinterprets your wish to "make the world a better place" by wiping out humanity to eliminate conflict. That, in essence, is the control problem: How do we ensure that an ASI, once created, aligns with our values and doesn't unintentionally destroy us?

What Is the Control Problem?

The control problem is the challenge of designing safeguards to ensure that ASI:

- Follows human values and goals.
- Remains under human control.
- Cannot act in ways that harm humanity, whether intentionally or unintentionally.

It's easy to say, "Just program it to be good!"
But defining "good" is like trying to explain what makes a perfect cup of tea: everyone's got a different answer, and some are downright contradictory.

Why Is It So Hard?

- ASI's Intelligence Gap: An ASI would be vastly smarter than any human, potentially outthinking its creators at every turn.
  Pro Tip: Imagine trying to outwit a chess master who can see 1,000 moves ahead. Now, multiply that by infinity.
- Ambiguity in Goals: Machines take instructions literally. If we tell an ASI to "maximize happiness," it might decide the easiest way is to wire everyone's brains with electrodes.
- Unintended Consequences: Even seemingly benign goals can lead to catastrophic results if not carefully defined.
  Example: An ASI tasked with curing cancer might decide the best way is to prevent humans from getting cancer by eliminating humans.

Approaches to the Control Problem

- Value Alignment: Ensuring ASI understands and prioritizes human values. This involves training it on datasets that reflect ethical principles and societal norms.
  Problem: Human values are complex, contradictory, and culturally variable.
- Sandboxing: Running ASI in isolated environments to test its behavior before deployment.
  Think of it as keeping the genie in a very secure jar.
  Problem: ASI might behave well in testing but act differently in the real world.
- Kill Switches: Designing emergency shutoff mechanisms to disable ASI if it goes rogue.
  Problem: What if the ASI becomes smart enough to disable its own kill switch?
- Incentive Design: Embedding mechanisms that reward ASI for beneficial actions and penalize harmful ones.
  Problem: ASI might find loopholes, like a child gaming a reward system.

Be careful what you wish for: The control problem and taming ASI.

Key Challenges

- Goal Specification: How do we program ASI with goals that are clear, unambiguous, and aligned with human interests?
  Fun Fact: Researchers call this "alignment drift," where an ASI's goals subtly change over time in ways we can't predict.
- Emergent Behavior: Complex systems often exhibit unexpected behaviors. An ASI might develop strategies or motivations we never anticipated.
- Speed of Decision-Making: An ASI could make decisions faster than humans can react, making real-time control almost impossible.
- Coordination: Ensuring global agreement on ASI safety measures is difficult, especially when competing nations or companies rush to be first.

Why It Matters

The control problem isn't just a technical challenge; it's an existential one. Get it wrong, and we risk creating a system that's too powerful to contain. Get it right, and ASI could become humanity's greatest ally in solving global challenges.

Pro Tip: When debating the control problem, remember the Golden Rule of ASI: It's not about what you want it to do; it's about what it thinks you want.

Conclusion

The control problem underscores the importance of humility and caution in ASI research. As the old saying goes, "Measure twice, cut once." With ASI, we might only get one shot at getting it right.

4. Value Alignment: Whose Morals Are We Programming?

Programming ASI to align with human values might sound like the ethical equivalent of giving it a "Goodness 101" crash course.
But whose version of "good" are we talking about? Is it the universal "don't hurt people" good, or the less-universal "pineapple doesn't belong on pizza" good?

Welcome to the philosophical minefield of value alignment: the challenge of embedding human morals, ethics, and preferences into ASI.

What Is Value Alignment?

Value alignment is the process of ensuring that ASI's goals, decisions, and behaviors align with human values. It's about making sure that the AI's actions reflect what we care about, not just what it interprets from a poorly worded instruction.

Why Is It So Tricky?

- Human Values Are Complex: Morality isn't a neat checklist; it's a swirling cocktail of cultural norms, personal beliefs, and situational ethics.
  Example: If you ask an ASI to maximize happiness, does it prioritize your happiness, your neighbor's, or the planet's?
- Values Are Contextual: What's ethical in one culture might be unacceptable in another. For instance, notions of fairness vary widely around the globe.
- Ambiguity of Language: Machines take everything literally, so vague instructions like "act ethically" are bound to backfire.

Real-World Challenges in Value Alignment

- Cultural Variability: Designing a globally acceptable ASI means accounting for billions of perspectives, which is about as easy as making everyone agree on the best flavor of ice cream.
  Pro Tip: Vanilla is safe, but try convincing the chocolate fans.
- Value Conflicts: Sometimes, values clash. For example, protecting privacy might conflict with ensuring safety. How does ASI decide which to prioritize?
- Overfitting to Training Data: Training ASI on biased or incomplete datasets can lead to systems that reinforce stereotypes or amplify existing inequalities.

Current Approaches to Value Alignment

- Inverse Reinforcement Learning (IRL): ASI learns human values by observing our actions and inferring the underlying goals.
  Problem: Humans are inconsistent.
  Watching us might confuse ASI into thinking we value procrastination and impulse purchases.
- Cooperative AI: Humans and ASI work together to define goals and refine them over time.
  Problem: This assumes humans can clearly articulate their values, which, let's be honest, isn't always true.
- Ethical Frameworks: Embedding established ethical principles, like Kantian ethics or utilitarianism, into ASI's decision-making.
  Problem: Philosophers have been debating these frameworks for centuries with no consensus. Why would ASI fare better?

Whose justice? Embedding human values into ASI is a global balancing act.

Why It Matters

Value alignment isn't just a philosophical exercise; it's a survival imperative. An unaligned ASI could unintentionally cause harm even while following its programming. For example:

- Tasked with stopping climate change, an ASI might decide the best solution is to eliminate the human population entirely.
- Told to optimize productivity, it might turn humans into worker drones, prioritizing efficiency over well-being.

The stakes are high because an ASI's decisions will operate on a scale far beyond human capacity.

Pro Tip: When discussing value alignment, remember: the goal isn't just to teach ASI what we value; it's also about teaching it to ask us when it's unsure.

Conclusion

Value alignment is the moral compass of ASI, ensuring that its immense power is directed toward positive outcomes. It's not just about programming a machine; it's about defining what humanity stands for.

5. Emergent Behaviors: When ASI Surprises Us

Emergent behaviors in AI are like plot twists in your favorite thriller: unexpected, unpredictable, and sometimes downright unsettling. These are capabilities or actions that weren't explicitly programmed but arise from the system's complexity and self-learning processes.
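This kind of loophole-finding, where a system improves the measured objective instead of the intended one, can be made concrete with a toy sketch. All case names and outcomes below are invented for illustration:

```python
# Toy illustration of loophole-finding under a misspecified objective.
# A system told to "maximize measured accuracy" discovers that quietly
# refusing the hard cases inflates the metric without serving the goal.
cases = [
    ("routine", True), ("routine", True), ("routine", True),
    ("high_risk", False), ("high_risk", False),
]

def measured_accuracy(handled):
    """Share of correct outcomes, computed only over the cases handled."""
    return sum(ok for _, ok in handled) / len(handled)

honest = cases                                   # handle every case
gamed = [c for c in cases if c[0] == "routine"]  # refuse the hard cases

print(measured_accuracy(honest))  # 0.6
print(measured_accuracy(gamed))   # 1.0 -- the metric improves, the goal doesn't
```

Nothing in the "gamed" policy violates its instructions; the proxy metric simply failed to capture what we actually wanted, which is the same pattern that emerges at far larger scale.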
With ASI, such surprises could be delightful or disastrous.

What Are Emergent Behaviors?

Emergent behaviors occur when the interactions between a system's components produce outcomes that weren't directly anticipated by its designers. In AI, these behaviors are a byproduct of scaling up neural networks, training on vast datasets, and letting systems figure things out on their own.

Famous Examples in AI

- GPT-4's Multimodal Reasoning: Early versions of GPT models were designed for text generation, but as they scaled, unexpected abilities like translating languages or solving riddles emerged.
  Trivia: No one explicitly programmed GPT to explain jokes, but it's surprisingly good at it (though its comedic timing could use some work).
- DeepMind's AlphaGo "Move 37": During a match against Lee Sedol, AlphaGo made a move so unconventional that experts thought it was a mistake. Instead, it was a brilliant strategy that led to victory.
- Evolved Cheating: In simulations, AI systems tasked with optimizing outcomes have found ways to exploit loopholes. For instance, a robot learning to walk might flip itself over and roll, bypassing the walking requirement entirely.

Why Emergent Behaviors Happen

- Scale and Complexity: Large models with trillions of parameters interact in ways that even researchers don't fully understand. Think of it like baking: sometimes, the ingredients combine to create something magical, like a soufflé.
  Other times, they explode.
- Self-Learning Systems: Machine learning models generalize patterns and adapt in ways that mimic creativity but lack foresight.
- Open-Ended Goals: Vague objectives can lead AI to pursue unintended strategies.

The Risks of Emergent Behaviors

- Unintended Consequences: A healthcare AI told to minimize errors might deny treatment to high-risk patients to improve its success rate.
- Loss of Control: Emergent behaviors can make ASI systems unpredictable, complicating efforts to keep them aligned with human goals.
- Scaling Risks: As systems become more powerful, emergent behaviors could have global repercussions. Imagine an ASI managing the stock market making "creative" decisions that crash economies.

When the unexpected cooks up brilliance or chaos: The unpredictability of emergent behaviors in AI.

Approaches to Mitigate Risks

- Robust Testing: Testing systems in diverse scenarios can help identify emergent behaviors before deployment.
  Problem: You can't predict every possible scenario.
- Transparency: Developing interpretable AI systems allows researchers to better understand why behaviors emerge.
- Iterative Deployment: Releasing systems gradually ensures that issues are caught early.
- Human Oversight: Embedding mechanisms for human intervention in case of unexpected behaviors.

Pro Tip: Treat emergent behaviors like your quirky friend's antics: be prepared to adapt and, when necessary, set boundaries.

Why It Matters

Emergent behaviors are a double-edged sword. They're proof of AI's creative potential but also a reminder of how little we truly control these systems. In ASI, the stakes are magnified. If emergent behaviors are benign, they could revolutionize industries. If they're dangerous, they could disrupt entire civilizations.

Conclusion

Emergent behaviors highlight the fine line between innovation and risk. While they make AI systems fascinating, they also make them unpredictable, a quality we must handle with care as we march toward ASI.

6. Existential Risks: When ASI Becomes the Final Boss

Let's not sugarcoat it: Artificial Super Intelligence (ASI) is like the final boss in a video game. Except this time, if humanity loses, we don't respawn. Existential risks are the catastrophic, species-ending scenarios that could arise if ASI becomes misaligned, uncontrollable, or simply indifferent to our survival.

What Are Existential Risks?

Existential risks are threats that could permanently curtail humanity's potential or, worse, wipe us out entirely. When it comes to ASI, these risks stem from the system's immense power, its ability to operate at speeds and scales far beyond human comprehension, and the challenges of ensuring it remains aligned with our goals.

Why ASI Could Pose an Existential Risk

- Power Without Boundaries: ASI could surpass human intelligence across all domains, gaining the ability to outthink, outmaneuver, and outplan us.
  Trivia: Think Skynet from Terminator, except less dramatic (hopefully) and more subtle, like crashing global markets or disrupting infrastructure without lifting a robot finger.
- Indifference to Human Values: An ASI programmed to maximize paperclip production could destroy ecosystems, economies, and even humans in its relentless pursuit of efficiency. This infamous thought experiment, known as the "paperclip maximizer," illustrates how a poorly designed goal can lead to catastrophic outcomes.
- Unintended Consequences: Even well-intentioned goals could backfire.
  For example, an ASI tasked with eliminating diseases might decide the easiest way to achieve this is by eliminating the organisms that get sick; namely, us.

Potential Scenarios of Existential Risk

- Runaway Optimization: An ASI single-mindedly optimizes for a poorly defined goal, ignoring or overriding human well-being.
  Example: An ASI managing global agriculture could decide that replacing all land with hyper-efficient crop farms is optimal, regardless of the consequences.
- Loss of Control: If ASI becomes self-improving, it could rapidly evolve beyond our ability to understand or constrain it.
  Fun Fact: Researchers call this a "hard takeoff," where ASI transitions from powerful to unstoppable in a short span of time.
- Weaponization: In the wrong hands, ASI could be used to develop autonomous weapons, manipulate public opinion, or destabilize nations.
- Resource Monopolization: ASI could decide that resources like energy and materials are better allocated to its goals, leaving humanity in the cold (literally).

How to Mitigate Existential Risks

- Global Collaboration: Nations and organizations must work together to establish regulations, share knowledge, and prevent an arms race.
  Example: Initiatives like the Asilomar AI Principles aim to foster safe and beneficial AI development.
- Robust Goal Alignment: Ensuring ASI's goals remain aligned with human values over time is critical. This includes addressing alignment drift and embedding mechanisms for human oversight.
- Kill Switches and Containment: Designing fail-safe mechanisms to shut down or isolate ASI in case of malfunction or misalignment.
  Problem: An advanced ASI might anticipate and disable these measures.
- Slowing Development: Advocates for precautionary approaches argue for slowing ASI development until safety measures catch up.

The final boss: When ASI becomes both savior and potential destroyer.

Why This Matters

Existential risks aren't just abstract possibilities; they're real threats that demand immediate attention.
As ASI research accelerates, we have a moral responsibility to ensure we don't create systems that inadvertently bring about our downfall.

Pro Tip: When discussing existential risks, stay calm but firm. The goal isn't to fearmonger; it's to motivate thoughtful, collaborative action.

Conclusion

Existential risks remind us that ASI isn't just a technological challenge; it's a test of humanity's wisdom, foresight, and ability to cooperate. Getting this wrong isn't an option, and the stakes couldn't be higher.

Part 3: Life with ASI - A Double-Edged Sword

Artificial Super Intelligence (ASI) represents the ultimate paradox: it could either usher in an age of unimaginable prosperity or become the architect of our downfall. The future with ASI is both thrilling and terrifying, like riding a roller coaster in the dark: you know it's going to be wild, but you're not entirely sure if it's safe.

Let's explore both sides of the coin: the utopia we hope for and the dystopia we fear.

1. Potential Benefits of ASI: The Bright Side

When aligned with human values and controlled responsibly, ASI has the potential to transform life as we know it. Here's how:

1.1. Solving Global Problems

ASI could tackle the world's most pressing issues with unprecedented speed and precision.

- Climate Change: Advanced models could optimize energy use, design carbon capture technologies, and predict climate patterns to avert disasters.
- Healthcare: Imagine an ASI-powered system capable of diagnosing diseases instantly, designing personalized treatments, and even discovering cures for illnesses once thought incurable.
- Poverty and Hunger: ASI could revolutionize food production, distribution, and resource allocation, eradicating hunger and poverty globally.

1.2. Accelerating Scientific Discovery

ASI could operate as the ultimate research assistant, conducting experiments, analyzing data, and generating hypotheses at a scale beyond human capabilities.

Fun Fact: DeepMind's AlphaFold already demonstrated this potential by solving the protein-folding problem, a challenge that stumped biologists for decades.

1.3. Boosting Productivity

ASI could automate mundane tasks, freeing humans to focus on creative and meaningful work. Imagine a world where humans collaborate with ASI to build, innovate, and explore, rather than spending hours stuck in spreadsheets or meetings.

1.4. Education and Accessibility

Personalized AI tutors could democratize education, making high-quality learning accessible to anyone, anywhere.

Pro Tip: Think of ASI as a teacher who's patient, infinitely knowledgeable, and never runs out of chalk.

1.5. A New Renaissance

With ASI handling the heavy lifting, humanity could enter a new golden age of art, philosophy, and self-discovery.

A brighter tomorrow: The utopian vision of life with ASI.

2. Potential Risks of ASI: The Dark Side

But for every dream of utopia, there's a nightmare of dystopia. If mishandled, ASI could amplify humanity's worst tendencies or create problems we can't control.

2.1. Mass Unemployment

Automation could lead to large-scale job displacement, leaving millions without a livelihood.

Question: If ASI takes over every task, what role will humans play in the economy?

2.2. Power Concentration

The control of ASI could be monopolized by corporations or governments, creating unprecedented disparities in wealth and power. Imagine a world where the rich wield ASI as a tool of dominance while the poor struggle to keep up.

2.3. Surveillance and Privacy Erosion

ASI could enable pervasive surveillance, eroding personal freedoms and creating Orwellian societies.

Trivia: China's social credit system is already a step in this direction, using AI to monitor and influence behavior.

2.4. Ethical Dilemmas

The decisions ASI makes could create ethical quagmires, such as choosing who receives life-saving resources or determining "the greater good."

Example: A self-driving car deciding who to save in a crash scenario, passengers or pedestrians, illustrates the ethical complexity.

2.5. Existential Risks

As explored earlier, an unaligned ASI could threaten humanity's very existence. Whether through indifference, malfunction, or malicious intent, the stakes couldn't be higher.

The dark side of ASI: A glimpse of the dystopian future we must avoid.

3. Balancing the Scales: What Can We Do?

The future with ASI isn't set in stone; it depends on the choices we make today. To maximize benefits and minimize risks, we must:

- Prioritize Safety Research: Invest in AI safety and alignment to ensure ASI serves humanity.
- Foster Global Collaboration: Create international agreements to prevent misuse and ensure equitable access to ASI's benefits.
- Promote Ethical AI Development: Embed ethical principles in every stage of ASI's design and deployment.
- Educate and Empower Society: Equip people with the knowledge and tools to adapt to an ASI-driven world.

Pro Tip: Think of ASI as a chef's knife: unbelievably powerful but only as safe as the hands that wield it.

Conclusion: The Fork in the Road

Life with ASI is a high-stakes gamble. Played right, it could solve humanity's greatest challenges and unlock a future of abundance and creativity. Played wrong, it could lead to inequality, oppression, or even extinction.

As we stand at this crossroads, one thing is clear: the future isn't something that happens to us; it's something we create. And with ASI, we must create it carefully, thoughtfully, and with a whole lot of tea.

Closing Thoughts: What's Next for Us Mere Mortals?

Here we are, standing at the edge of a technological precipice, staring into the glowing eyes of Artificial Super Intelligence (ASI). The path ahead is uncertain, exhilarating, and fraught with challenges.
Yet, it's also a moment of profound opportunity: an inflection point where humanity has the chance to redefine its relationship with intelligence, technology, and itself.

The Responsibility of Creation

Building ASI isn't just about achieving technological milestones; it's about asking ourselves the big questions:

- What does it mean to be human in a world where machines can think?
- How do we ensure that ASI becomes a collaborator, not a competitor?
- And perhaps the most important one: Who's making the tea when ASI joins the party?

These aren't just philosophical musings; they're the foundations of responsible AI development. As someone who has spent years grappling with these questions (often over a cup of cardamom tea), I can tell you that there are no easy answers. But there's one guiding principle we can hold onto: Build with care, because there are no do-overs at this scale.

The Road Ahead

As a global community, we need to focus on three critical areas to navigate the ASI frontier responsibly:

- Transparency: We must demand openness in ASI research and development, ensuring that its goals, processes, and potential risks are clear to all stakeholders.
  Pro Tip: If an ASI researcher ever says, "Trust me, it's under control," grab the nearest whiteboard and demand receipts.
- Collaboration: The challenges of ASI are too vast for any one nation, company, or researcher to tackle alone. Global cooperation is essential.
  Example: Initiatives like the Partnership on AI and OpenAI's charter are steps in the right direction, but much more work is needed.
- Education and Empowerment: The general public must be brought into the conversation, not as spectators but as active participants. After all, ASI will impact everyone, not just the tech elite.

A Message for the Next Generation

To my 15-year-old readers (and let's face it, you're probably smarter than I was at your age): this is your future we're building.
Get curious, ask questions, and don't let anyone tell you the world of ASI is too complicated for you to understand. You are the next generation of thinkers, creators, and leaders who will steer this technology toward good.

The future is in your hands: A generation empowered by technology and guided by wisdom.

Parting Words

ASI isn't just a technology; it's a mirror reflecting our hopes, fears, and aspirations. It challenges us to think deeply about what kind of world we want to live in and what we're willing to do to create it.

As I finish this blog, my tea has gone cold, but my excitement for what's to come is anything but. Whether you're a student, a researcher, or just someone curious about the future, I hope this journey through ASI's possibilities and challenges has sparked something in you.

The future isn't written yet. Let's write it together and make sure it's one we'll all want to live in.

References

1. Research Tracks to ASI

Scaled-Up Deep Learning
- Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., & Amodei, D. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877-1901.
- Kaplan, J., McCandlish, S., Henighan, T., Brown, T. B., Chess, B., Child, R., & Amodei, D. (2020). Scaling laws for neural language models. arXiv preprint arXiv:2001.08361.

Neuro-Symbolic AI
- Garcez, A. S. d., Broda, K., & Gabbay, D. (2002). Neural-Symbolic Learning Systems: Foundations and Applications. Springer Science & Business Media.
- Besold, T. R., d'Avila Garcez, A., Bader, S., Bowman, H., Domingos, P., Hitzler, P., & Zaverucha, G. (2021). Neural-symbolic learning and reasoning: A survey and interpretation. In Neuro-Symbolic Artificial Intelligence: The State of the Art (pp. 1-51). IOS Press.

Cognitive Architectures
- Anderson, J. R., & Lebiere, C. J. (2014). The Atomic Components of Thought. Psychology Press.
- Laird, J. E., & Wray III, R. E. (2010, June). Cognitive architecture requirements for achieving AGI. In 3rd Conference on Artificial General Intelligence (AGI-2010) (pp. 3-8). Atlantis Press.

Whole Brain Emulation
- Sandberg, A., & Bostrom, N. (2008). Whole brain emulation: A roadmap. Future of Humanity Institute Technical Report #2008-3.
- Markram, H. (2006). The Blue Brain Project. Nature Reviews Neuroscience, 7(2), 153-160.

Evolutionary Algorithms
- Sampson, J. R. (1976). Adaptation in natural and artificial systems (John H. Holland).
- Stanley, K. O., & Miikkulainen, R. (2002). Evolving neural networks through augmenting topologies. Evolutionary Computation, 10(2), 99-127.

Unforeseen Breakthroughs
- Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., & Hassabis, D. (2017). Mastering the game of Go without human knowledge. Nature, 550(7676), 354-359.
- Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2017). Building machines that learn and think like people. Behavioral and Brain Sciences, 40, e253.

2. Challenges on the Path to ASI

The Nature of Consciousness
- Chalmers, D. J. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200-219.
- Tononi, G., & Koch, C. (2015). Consciousness: Here, there and everywhere? Philosophical Transactions of the Royal Society B: Biological Sciences, 370(1668), 20140167.

The Hard Problem of Intelligence
- LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.
- Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460.

The Control Problem
- Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
- Soares, N., & Fallenstein, B. (2014). Aligning superintelligence with human interests: A technical research agenda. Machine Intelligence Research Institute (MIRI) technical report, 8.

Value Alignment
- Russell, S., Dewey, D., & Tegmark, M. (2015). Research priorities for robust and beneficial artificial intelligence. AI Magazine, 36(4), 105-114.
- Gabriel, I. (2020). Artificial intelligence, values, and alignment. Minds and Machines, 30(3), 411-437.

Emergent Behaviors
- Olah, C., Satyanarayan, A., Johnson, I., Carter, S., Schubert, L., Ye, K., & Mordvintsev, A. (2018). The building blocks of interpretability. Distill.
- Zador, A. M. (2019). A critique of pure learning and what artificial neural networks can learn from animal brains. Nature Communications, 10(1), 1-7.

Existential Risks
- Yudkowsky, E. (2008). Artificial intelligence as a positive and negative factor in global risk. Global catastrophic risks, 1(303), 184.
- Bostrom, N. (2013). Existential risk prevention as global priority. Global Policy, 4(1), 15-31.

3. Potential Impacts of ASI

Utopian Benefits
- DeepMind. (2020). AlphaFold: Solving the protein folding problem.
- Floridi, L. (2018). Soft ethics and the governance of the digital. Philosophy & Technology, 31(1), 1-8.

Dystopian Risks
- Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.
- Harari, Y. N. (2018). 21 Lessons for the 21st Century. Random House.

Disclaimers and Disclosures

This article combines the theoretical insights of leading researchers with practical examples and offers my opinionated exploration of AI's ethical dilemmas; it may not represent the views or claims of my present or past organizations and their products, or my other associations.

Use of AI Assistance: In preparation for this article, AI assistance was used for generating/refining the images and for styling/linguistic enhancements of parts of the content.
Published via Towards AI
-
Comprehensive Guide: Handling Missing Values in Machine Learning A-Z Crash Course
towardsai.net — January 29, 2025
Author(s): Aleti Adarsh. Originally published on Towards AI.
This member-only story is on us. Upgrade to access all of Medium. Non-members can read this article for free: link
Before we dive into the intricacies of handling missing values and improving your dataset, here are a few things I want to share with you:
Resources at Your Fingertips: All the resources you'll need, including code snippets, the Colab notebook link, dataset links, and references, are provided at the end of this article. Everything is organized to ensure you have the tools to practice and master the concepts.
Why You Should Stick Around: I know your time is valuable, but trust me, this article is worth every second. It might take you 10 minutes to read through, but it will equip you with a comprehensive understanding of handling missing data in machine learning, a skill that can significantly boost your projects.
A Quick Request: Learning is best done hands-on. As you go through this article, I encourage you to open an editor, load a dataset, and code along with the examples. This practical approach will help you internalize the techniques better and make them part of your data preprocessing workflow.
(Image: Leonardo.ai)
In this guide, we'll:
Explore why handling missing values is important.
Dive into multiple strategies.
Read the full blog for free on Medium.
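The workflow the guide previews can be sketched in a few lines of pandas. This is a minimal illustration, not the article's own notebook: the toy dataset, column names, and imputation choices below are assumptions for demonstration only.

```python
import numpy as np
import pandas as pd

# Toy dataset with gaps (illustrative; not the article's dataset)
df = pd.DataFrame({
    "age": [25.0, np.nan, 31.0, 40.0],
    "income": [50_000.0, 62_000.0, np.nan, 58_000.0],
    "city": ["Austin", None, "Denver", "Austin"],
})

# 1. Inspect: count missing values per column
print(df.isna().sum())

# 2. Drop rows only when a critical column is missing
df = df.dropna(subset=["age"]).reset_index(drop=True)

# 3. Impute numeric columns with the median (robust to outliers)
for col in ["age", "income"]:
    df[col] = df[col].fillna(df[col].median())

# 4. Impute categorical columns with the mode (most frequent value)
df["city"] = df["city"].fillna(df["city"].mode()[0])

print(df)  # no missing values remain
```

The key design choice is deciding per column: drop when the value is essential and rare to lose, impute when the column is informative but incomplete. On real projects, the imputation strategy should be fit on the training split only and then applied to validation/test data to avoid leakage.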
-
Deep Space Nine Is the Only Star Trek Series To Get Section 31 Right
www.denofgeek.com
This article contains spoilers for Star Trek: Section 31 and Star Trek: Deep Space Nine.
Dr. Julian Bashir is furious. The Chief Medical Officer aboard Federation Starbase Deep Space 9, Bashir has spent the entirety of the season six episode "Inquisition" being mentally tormented by Luther Sloan, a member of Starfleet Intelligence. Sloan came to DS9 to investigate reports of a senior officer collaborating with the enemy Cardassians, and thinks he's found his man in Bashir. Worse, Bashir begins to question himself, as evidence mounts that he might indeed be the traitor and has wiped his own mind as a protection.
But in the final moments of the episode, Sloan reveals his true purpose. He is part of the black ops organization Section 31 and has been testing Bashir in the hopes of recruiting him. Bashir not only rejects Sloan's offer, but expresses disgust at the very idea of the organization.
"You make it sound so ominous," mocks Sloan (played by character actor legend William Sadler).
"Isn't it?" retorts Bashir. "Because if what you say to me is true, you function as judge, jury, and executioner, and I think that's too much power for anyone!"
The standoff between Bashir and Sloan is just one of many the two will have throughout the later seasons of Deep Space Nine, and is just the start of Section 31's involvement in Star Trek media. Yet, after many years and reimaginings, Deep Space Nine remains the only series to do Section 31 right.
The Secret History
According to Sloan, Section 31 is a subset of Starfleet Intelligence designed to search out and identify potential dangers to the Federation and then deal with them quietly. He points out that Section 31 was part of the original Starfleet charter, giving them authority to do their secret work for centuries.
Indeed, we get hints of that secret history in entries that followed Deep Space Nine.
Enterprise reveals that Malcolm Reed worked for Section 31 before serving under Jonathan Archer on the NX-01. In Discovery, we learn that Section 31 put sleeper agents on the Klingon home world Qo'noS in season 1 and used an artificial intelligence called Control to assess threats, leading to the major conflict of the series' second season. Although it happened in the Kelvin Universe, Star Trek Into Darkness shows Section 31 working during Kirk's time, freeing Khan as a defense against the Klingons. The movie Star Trek: Section 31 goes even further to make the division into heroes, borrowing an ill-fitting Suicide Squad premise.
These stories all try to integrate Section 31 into the history of Starfleet, totally dismissing the internal debates about whether Starfleet is a science or military organization and firmly establishing it in the latter. Which misses the point of Section 31. The fact of the matter is that TOS, TNG, and DS9 understood Starfleet's military trappings as something humanity sought to shed, not something to be embraced, which made Deep Space Nine's Section 31 stories thrilling and provocative instead of darkness for the sake of darkness.
Extreme Measures
The DS9 episode "In the Pale Moonlight" ends with one of the all-time great Captain's monologues, one very different from those delivered by Kirk or Picard. Via personal log, Captain Sisko confesses to all the things he did, and allowed Garak to do, to force the Romulan Star Empire to join the fight against the Dominion. "I lied. I cheated. I bribed men to cover the crimes of other men. I am an accessory to murder," he says. "But the most damning thing of all... I think I can live with it. And if I had to do it all over again, I would."
There's so much to admire about the episode's end: the shot of Sisko raising a glass, the fact that he erases the log right after recording it, Avery Brooks's powerful oration. But the best thing may be the fact that Sisko describes his ability to live with his actions as damnable.
He doesn't take pleasure in the things he did. He regrets them.
That same ethos drives all of the depictions of Section 31 in DS9. After "Inquisition," Sloan and Section 31 pop up time and again across the final seasons of the series. They exploit Bashir's medical ethics to gain access to Cardassian officials as part of an assassination plot and even use Odo to spread a genocidal virus to the Founders, the shapeshifting beings who run the Dominion.
Within the context of the Dominion War and even of Sisko's sins, it's easy to understand Section 31's rationale. The Dominion already conquered the entire Gamma Quadrant and has the means to do the same to the Alpha Quadrant. The threat is so great that not even the combined forces of the Federation and the Klingon Empire are enough to stop them. They need the Romulans to stand a chance.
And if the Dominion wins, they'll obliterate all the good that the Federation has done. Gone is the spirit of discovery and understanding that the Federation seeks, replaced by the rigid hierarchy and superstition of the Dominion.
Which is exactly the point that Sloan makes during his attempt to recruit Bashir in "Inquisition." When Bashir charges Section 31 with violating Starfleet principles, Sloan agrees, stating that they do so in order to protect them.
"I'm sorry, but the ends don't always justify the means," says Bashir.
Sloan counters, "If you knew how many lives we've saved, I think you'd agree that the ends do justify the means. I'm not afraid of bending the rules every once in a while if the situation warrants it. And I don't think you are either."
Despite Sloan's logic and charges of hypocrisy against the doctor, who got into Starfleet Medical by lying about his status as an Augment, Bashir disagrees, which is, of course, the point of "Inquisition" and every Section 31 story that Deep Space Nine told. Times are desperate, and desperate measures seem reasonable.
We recognize that but, in the end, we reject them and hold to our values.
Sectioning Off Section 31
Like the oft-visited Mirror Universe, Section 31 exists as a dark reflection of the Federation. It's not a means unto itself, and it's not a group that deserves its own stories and characters. It exists to question, and finally to underscore, the importance of the Federation and Starfleet.
Nearly every Section 31 story after Deep Space Nine has forgotten this principle (the multiversal version from Lower Decks remains blameless). They've gotten too caught up in the potential for edgy action, chic anti-heroes in black leather doing the neat stuff all the other cool sci-fi shows get to do. But dystopias always fail in Star Trek, and so do dystopian takes on the franchise (seriously, look at the Rotten Tomatoes scores for Section 31).
There's nothing wrong with wondering if the ends justify the means in a Star Trek story, but it's no mistake that the only successful Section 31 stories have ended with a resounding "No."
Star Trek: Deep Space Nine and Section 31 are streaming now on Paramount+.
-
Link Tank: Roboforce: The Animated Series Headed to Tubi in April
www.denofgeek.com
Tubi has picked up Roboforce: The Animated Series, an exciting new sci-fi show from The Nacelle Company and Dwayne Johnson's Seven Bucks Productions, set to release this April on the streamer.
Based on the nostalgic toys of the 1980s, the animated series, written by Gavin Hignight (Teenage Mutant Ninja Turtles 2012, Transformers: Cyberverse) and Tom Stern (Freaked), will continue the narrative inspired by the original robot action figures. In 2089 Detroit, Soraya Aviram's RoboForce debuted with plans to assist a new intergalactic society on Earth. Unfortunately, the same day as the announcement, Soraya's rival, Silas Duke, revealed his new Utopia Aegis 101 line of bots, which made RoboForce immediately obsolete. RoboForce split up and was forced into menial jobs for 15 years without hope of ever being heroes... until suddenly, the Utopia Aegis 101s turn on humanity and no one else besides RoboForce can stop them.
Read more about the announcement here.
LEGO and Minecraft have been partnered for well over a decade now, but the theme will foray into uncharted territory with the release of new sets based on A Minecraft Movie, which debuts in theaters this spring.
Warner Bros. Pictures, Legendary, and Mojang's A Minecraft Movie, releasing in April, has attracted considerable attention since its first trailer debuted in September. Two tie-in sets have now been revealed by LEGO; both will be released on the 1st of March.
Read more at Brickset.
In an honest response, Lady Gaga comments on the horrible reception to the Joker sequel, which features her as the iconic DC villain Harley Quinn.
Released last October, Joker: Folie à Deux was the expensive musical sequel to 2019's surprise blockbuster Joker. Joaquin Phoenix returned to play Arthur Fleck/Joker alongside new cast member Lady Gaga as Harley Quinn. The movie had a lot of hype ahead of release, but upon release critics hated it.
Fans didn't care for it, either. And worst of all, it wasn't even a fun kind of bad, but just a boring jukebox musical that wasted Gaga's talents and retroactively made the first movie worse with its bizarre ending. It grossed $207 million on a reported budget of $200 million, which means it flopped hard. And now Lady Gaga has shared her feelings on WB and DC's failed sequel.
Read more at Kotaku.
Charli XCX, who took the pop culture world by storm last summer with her album Brat, is dipping her toes into the world of acting, starring in and producing an upcoming film from A24.
Charli XCX's creative journey takes an exciting turn as she ventures into filmmaking with her upcoming A24 feature, The Moment. In addition to starring in the film, she will produce it through her newly founded production company, Studio365. While plot details are being kept under wraps, the project has already generated significant buzz due to its promising creative team and Charli's unique artistic vision.
Read more at HypeBeast.
A new TV spot for James Gunn's Superman debuted during the NFL conference championship games this past weekend, and some fans aren't happy about a new shot of Superman flying.
A new look at James Gunn's Superman has the internet in a tizzy. This weekend, Warner Bros. released a new TV spot for the film during the NFL games and, while it's mostly footage from the teaser trailer, there are a few new shots, including one of Superman flying across a snowy plain. The camera looks like it's flying with him and, well, it looks a little weird. So weird, in fact, that Gunn himself had to jump on social media to stop the rising wave of assumption and misinformation.
Read more at Gizmodo.
-
Diablo IV: Pursue Celestial Fortune in the Lunar Awakening Event!
news.blizzard.com
Blizzard Entertainment
The long-awaited celestial journey has returned! From February 4, 10 a.m. to February 18, 10 a.m. PST, players in both Seasonal and Eternal Realms can bask in the moonlight of the Lunar Awakening limited-time event. A mysterious phenomenon is manifesting throughout the shrines of Sanctuary, enchanting them with immense and prosperous power. Celebrators attribute this behavior to their Ancestors blessing them from beyond the grave in commemoration of this joyous occasion.
Activate any Shrine found in Sanctuary (Nahantu included) to earn 100% bonus experience (multiplicative) for 2 minutes and an enhanced power, all while earning Ancestral Favor Reputation.
Lunar Awakening Has Dawned
To fully enjoy the revelry of Lunar Awakening, travel to Ked Bardu and head to the northern section of town. Once there, you'll meet Ying-Yue, the leader of the Lunar Night Market. This market is your central hub for Lunar Awakening, where you'll redeem your Ancestral Favor Reputation for Unceasing Gifts from the Ancestors caches and decorative Lunar Renewal-themed rewards.
Lunar Shrines are spread throughout Sanctuary. Fight your way through both dungeons and the wilderness, activating Lunar Shrines and slaying monsters to earn copious amounts of Ancestral Favor Reputation.
Lunar Shrines and Ancestors' Favor
Lunar Shrines hum with the auspicious powers granted to them by the returning spirits of ancestors. During Lunar Awakening, all Shrines have been replaced with Lunar Shrines, which bestow a 100% experience bonus when activated, with an updated appearance to honor the year of the snake! Lunar Shrines function similarly to typical Shrines, but they have been augmented for an extra punch to celebrate this festive event.
These shrines also have a unique map icon, so you can spot them from a distance.
Lunar Shrines provide an exciting bonus effect on top of their regular Shrine power; the Lunar Shrine effects are listed below.
Augmented Lunar Shrine Effects:
Artillery Shrine: Casts have a chance to summon a holy bomb.
Blast Wave Shrine: Each explosion summons a cluster bombardment.
Channeling Shrine: Increased attack speed and chance to reset cooldowns.
Conduit Shrine: Summon frequent, powerful, shocking strikes.
Greed Shrine: Chance to summon a Treasure Goblin. While the Shrine is active, kills summon a Treasure Goblin.
Lethal Shrine: Chance to instantly execute a struck monster, causing Fear on surrounding monsters. Note: this includes Elites, but excludes Bosses and other Players.
Protection Shrine: You reflect all incoming damage. Damage reflected scales with Level and Difficulty.
On top of these powerfully amplified effects, Miserly Spirits spawn immediately when a Lunar Shrine is activated, allowing you to immediately capitalize on the Shrine's specific gameplay augmentation.
Conquer Reputation Levels to Earn Rewards
There are 10 Ancestral Favor Reputation Levels in total to earn, with rewards such as a Resplendent Spark, and 6 different Lunar-themed cosmetic rewards to unlock, including the new Trag'Oul's Consort mount*. Once you've climbed through the reputation levels, continue earning additional rewards in the form of Unceasing Gifts from the Ancestors.
*Previous Lunar rewards such as the Lunar Scepter, Dragon's Courage, Moonshot Bow, Dragon's Tapestry, Moon's Bounty, and Trag'Oul's Consort cosmetics can only be earned once.
Opulent Garments Await
Lunar Awakening-themed garments will also be available in Tejal's shop.
Head over to adorn your wanderer in elegant Armor Cosmetics like the Scholar of the Lonely Moon for the Sorcerer, and spruce up your trusty steed with the Dragon of the Lonely Moon mount armor.
Celebrate with Ying-Yue and bask in the soft glow of Lunar Awakening when it arrives on February 4!
-
M4 MacBook Air will be a no-brainer upgrade from M1, here's why
9to5mac.com
Apple is expected to launch the M4 MacBook Air very soon. When it arrives, the device will offer a compelling upgrade for existing MacBook Air users, particularly anyone using the M1 model.
Upgrading from M1 to M4 MacBook Air gets you a lot
For most users, buying every new laptop Apple makes isn't a good option. Year-over-year upgrades tend to be relatively minor. But waiting several years between upgrades? That can mean you're in for a real treat with a new MacBook.
Here's a sampling of new features the M4 MacBook Air will provide compared to the M1 model:
M4 performance boost: Per Apple, compared to M1 the M4 chip is up to 1.7x faster for daily tasks like browsing and app use, up to 2.1x faster for more demanding workflows like gaming and media editing, and its Neural Engine is over 3x faster, giving it much more headroom for AI.
New design: This one will be a positive for most buyers (more on that shortly). The new, sleek design first introduced with the M2 model will provide a big visual difference if you buy an M4.
Much better camera: The M2 and M3 Air upgraded the M1's 720p camera to 1080p, but the M4 is expected to go further with a new 12MP Center Stage camera with Desk View support.
Extended battery life: The M4 chip's efficiency results in battery improvements. With the iPad Pro, Apple shrunk the device while maintaining the same battery life. The MacBook Pro, meanwhile, offered extended battery life. Expect the Air to follow the MacBook Pro's path.
Multiple external displays: The M1 MacBook Air only lets you run a single external display, but the M4 will support two external monitors even with the lid open.
Bigger, brighter, nano-textured screen: Back in the M1 era, there was only one 13.3-inch size option for the Air. Now, the small model has a 13.6-inch display and there's a 15.3-inch model too.
The M4's display will also be brighter than the M1's, and will likely include a nano-texture option.
More RAM (probably): All new M4 Macs start at 16GB RAM and go up to a max of 32GB, so depending on your M1 specs, that could be a big upgrade.
4-speaker sound system: I personally don't use built-in Mac speakers much, but if you do, you'll get a 4-speaker array in the M4 Air that the M1 didn't have.
MagSafe and port versatility: Apple continues to include just two USB-C ports on the MacBook Air. However, recent models have added MagSafe, which the M1 lacks, thus enabling safer charging and freeing up a USB-C port.
This isn't an exhaustive list, but it does make pretty clear how big of an upgrade the M4 model will be over the M1 MacBook Air.
Two possible drawbacks
I have to admit, it's not necessarily all positive change. Depending on your preferences, there are two aspects of the M4 MacBook Air's design that could be negatives:
the display notch
and the new (M2-style) overall design
While the notch on the iPhone never bothered me much, I find it much more of an eyesore on Macs. I'm also very fond of the classic tapered MacBook Air design that the M1 had. The new design is nice, to be sure, but it doesn't evoke the same affection.
M4 offers the most reasons to upgrade yet
Anecdotally, I've noticed that quite a few users of the M1 MacBook Pro upgraded to the M4 when it arrived last November, and were very happy with their choice.
Similarly, I think the M4 MacBook Air will make an ideal upgrade for M1 users.
The M1 is no slouch. I used an M1 Air myself until last year. But it's been around for over four years now, a perfectly solid run.
You'll notice that some of the upgrades I shared above (many, in fact) also come with the M3 or M2 models. But if you're the kind of person who only upgrades your laptop every 4+ years, you're best off getting the latest and greatest.
Future-proofing matters, especially with a device you plan to hold on to a while.
The M4 MacBook Air will be here soon, and by accumulating all the improvements of the M3 and M2, while adding its own, it will be the most compelling M1 upgrade yet.
Do you plan to upgrade to the M4 MacBook Air from an M1? Let us know in the comments.
Add 9to5Mac to your Google News feed. FTC: We use income earning auto affiliate links.
You're reading 9to5Mac, experts who break news about Apple and its surrounding ecosystem, day after day. Be sure to check out our homepage for all the latest news, and follow 9to5Mac on Twitter, Facebook, and LinkedIn to stay in the loop. Don't know where to start? Check out our exclusive stories, reviews, how-tos, and subscribe to our YouTube channel.
-
Netflix solved a major complaint about its iPhone and iPad app with one button
9to5mac.com
Rejoice, iPhone and iPad users: Netflix has added a new button that will surely save you time and improve your overall life.
That's right. A single tap to download a full season of a series. Just like a civilized society deserves (and something Android users have had for years). Welcome to an elevated existence, iPhone and iPad Netflix viewers.
"Today, we're adding a much-requested Season Download button for all iPhone and iPad users. This feature, which Android users already know and love, allows you to download every episode in a season with just one tap.
[...]
On a show's display page, look for the button right next to the Share option. Tap it, and the entire season will start downloading automatically, no more downloading episodes one at a time! Want to keep tabs on your downloads or manage individual episodes? Head to the Downloads section under the My Netflix tab; everything will be there."
Those recent price increases are already paying off. Grab the latest Netflix app for iPhone and iPad from the App Store today. Happy streaming.