• UNITY.COM
    Get the most out of the VFX Graph in Unity 6 with our updated e-book for artists
    To orchestrate their creations, visual effects artists need a sophisticated understanding of shape, lighting effects, color, the volume of particle effects, speed of movement, and timing. The VFX Graph is Unity’s node-based visual logic system for creating visual effects in games, and it provides all the capabilities you need to create GPU-accelerated visual effects.

    Now you can get our updated e-book on creating effects with the VFX Graph in Unity 6. This new edition will guide you toward the best visual quality and performance for the effects in your Unity 6 productions.

    The new features in VFX Graph for Unity 6 include profiling tools, six-way lighting for smoke effects, new learning templates, and more. These are part of a wide-ranging collection of new rendering and graphics features in Unity 6, such as performance enhancements for URP and HDRP, potential reductions in CPU and GPU workload, and new optimization options. And there’s more: together with the Unity 6 VFX Graph e-book, you can also watch a new video tutorial that explores the VFX Graph Learning Templates.

    The VFX e-book includes key sections such as:
    - A detailed introduction to graph logic and all of the parts that make up a graph
    - Working with VFX Graph in URP and HDRP
    - Explanations of many different types of visual effects examples
    - How to create interactivity
    - Using VFX Graph and Shader Graph together for advanced shader effects
    - Pipeline tools to use with VFX Graph
    - Profiling, debugging, and optimization features
    - Techniques for advanced creators

    Let’s look briefly at some of the great new content in the guide.

    UI improvements
    Creating nodes or blocks now uses a hierarchical tree view, making it easier to browse the node library. Enhancements include custom colors and a favorites folder for a more efficient and personalized search experience. You can also use advanced search filtering to select from the available nodes.

    New VFX Toolbar
    The VFX Toolbar has been simplified and includes new options for quick access to documentation and samples.

    Keyboard shortcuts
    The Shortcut Manager has a VFX Graph category that lets you modify the shortcut commands available in the Visual Effect Graph window.

    The VFX Graph Learning Templates are a collection of samples, each of which helps you explore a specific aspect or feature set of VFX Graph while showcasing many VFX techniques. The sample content is compatible with both URP and HDRP projects, for VFX Graph in Unity 6 and later. The sample graphs are small and focused, making them ideal learning resources. Dive into each template to master a new technique, or use it as a starting point for your own effect. Each graph comes with detailed notes to help you understand its construction.

    You’ll find samples that cover:
    - Graph fundamentals
    - Particle orientation and rotation
    - Texturing and flipbooks
    - Particle pivots
    - Mesh and texture sampling
    - Collisions and interactivity
    - Decal particles
    - Particle strips

    A new section in the guide explains how to create six-way lighting, a method for smoke rendering based on baked simulations that works well across different lighting conditions. It can approximate the volumetric feel of smoke with a cost-effective process. Six-way lighting can be a useful technique in your effects toolkit, balancing visual quality, performance, and memory usage for rendering real-time smoke effects. You can also watch VFX Graph: Six-way lighting workflow for a complete walkthrough of the technique, and read this blog post for more information.

    One of the key advantages of Shader Graph integration is the ability to drive shader behavior at a per-particle level. This allows for creating variations, color randomization, and other dynamic effects with different per-particle values, enabling highly complex visuals. The e-book now includes a bigger section using examples from the Shader Graph Feature Examples sample content, a collection of Shader Graph assets that demonstrate how to achieve common techniques and effects in Shader Graph. The goal of this sample pack is to help users see what is required to achieve specific effects and to provide examples that make it easier to learn. Finally, VFX Graph in Unity 6 also includes integration with Shader Graph keywords, which allows you to create one Shader Graph for use in multiple VFX Graphs.

    Unity 6 includes Profiling and Debug panels that provide essential information about your visual effects, such as CPU and GPU timings, memory usage, texture usage, and various states. Use them to monitor and optimize performance for your VFX Graphs.

    Seasoned VFX artists and developers can take advantage of the Custom HLSL Block. This feature allows you to create unique effects that may not yet be natively supported in Unity. With Custom HLSL, you can create advanced physics simulations, flocking behaviors, or real-time data visualizations. Custom HLSL nodes allow you to execute custom HLSL code during particle simulation: you can use an Operator for horizontal flow or a Block for vertical flow within Contexts.

    Along with the VFX Graph e-book, you can access other great resources that provide know-how for creating graphics and effects that boost the atmosphere, fun, and excitement of your 2D and 3D games. Here are a few to check out:
    - Unity 6 graphics learning resources
    - Introduction to the Universal Render Pipeline for advanced Unity creators
    - Create 2D special effects with VFX Graph and Shader Graph
    - Find a treasure trove of lighting and visual effects in Gem Hunter Match
  • UNITY.COM
    Games made with Unity: November 2024
    November was packed with game releases and some pretty sizable updates, including the new Undead update for DOTS-powered Diplomacy is Not an Option from our friends at Door 407. Want to use mountains of corpses as barriers? Now you can!

    Steam curator list: Better Together
    With many families coming together for the holiday season soon, we thought it would be a good time to gather up a list of Made with Unity games you can play with others. We posted our poll and Better Together co-op games came out on top. Check out the list and follow our Steam Curator page.

    Award season
    As we head into the next few months of the major gaming awards, we want to congratulate some of the Made with Unity winners of the Golden Joysticks:
    - Still Playing Award (Mobile) - Honkai: Star Rail
    - Best Indie Game (Self-published) - Another Crab's Treasure
    - Best Early Access Game - Lethal Company
    Next up are The Game Awards, with many of your games up for awards there. Follow along on our social channels to celebrate the winners.

    Working on a game in Unity? We’d love to help you spread the word, so be sure to submit your project. Without further ado, and to the best of our abilities, here’s a non-exhaustive list of games made with Unity and launched in November 2024, either into early access or full release. Add to the list by sharing any that you think we missed.

    Bullet heaven
    - Temtem: Swarm, Crema, GGTech Studios (November 13 – early access)

    Card games and deckbuilders
    - Menace from the Deep, Flatcoon (November 11)

    Casual and party
    - DEATH NOTE Killer Within, Grounding Inc. (November 5)
    - Bounce Arcade, Velan Studios (November 21)

    Comedy
    - Great God Grove, LimboLane (November 15)

    Horror
    - Sorry We're Closed, à la mode games (November 14)
    - Absolute Insanity, Chris Danelon (November 5)
    - Angel Wings: Endless Night, RumR Design (November 6)
    - Is this Game Trying to Kill Me?, Stately Snail (November 13)
    - Enigma of Fear, Dumativa, Cellbit (November 28)

    FPS
    - 420BLAZEIT 2: GAME OF THE YEAR -=Dank Dreams and Goated Memes=- [#wow/11 Like and Subscribe] Poggerz Edition, Normal Wholesome Games (November 14)

    Narrative and mystery
    - Chicken Police: Into the HIVE!, The Wild Gentlemen (November 7)
    - Deathless Death, Dream Delivery Center (November 13)
    - Loco Motive, Robust Games (November 21)
    - Mercury Abbey, YiTi Games (November 22)

    Platformer
    - Mind Over Magnet, Game Maker's Toolkit (November 13)

    Management and automation
    - Techtonica, Fire Hose Games (November 7)

    Metroidvania
    - Last Vanguard, Cool Tapir Studios LLC (November 5 – early access)

    Roguelike/lite
    - Void Crew, Hutlihut Games (November 25)
    - Elin, Lafrontier (November 1 – early access)
    - Munch, Mac n Cheese Games (November 4)
    - ShapeHero Factory, Asobism.Co.,Ltd (November 5 – early access)
    - Ammo and Oxygen, Juvty Worlds (November 7 – early access)
    - Atomic Picnic, BitCake Studio (November 7 – early access)
    - Shape of Dreams: Prologue, Lizard Smoothie (November 12)
    - Dungeon Clawler, Stray Fawn Studio (November 21 – early access)

    RPG
    - Void Sols, Finite Reflection Studios (November 12)
    - Metal Slug Tactics, Leikir Studio (November 5)
    - ATLYSS, Kiseff (November 22 – early access)
    - Neon Blood, ChaoticBrain Studios (November 26)

    Puzzle adventure
    - Little Big Adventure – Twinsen’s Quest, [2.21] (November 14)

    Simulation
    - Mirthwood, Bad Ridge Games (November 6)
    - Dustland Delivery, Neutron Star Studio (November 5)
    - Everholm, Chonky Loaf (November 11)
    - Luma Island, Feel Free Games (November 20)

    Strategy
    - Songs of Silence, Chimera Entertainment (November 13)
    - Sainthood, Bisong Taiwo (November 1)
    - Skill Legends Royale, ZGGame (November 4)
    - Lost Eidolons: Veil of the Witch, Ocean Drive Studio, Inc. (November 5 – early access)
    - Tower Factory, Gius Caminiti (November 7 – early access)

    Survival
    - I Am Future: Cozy Apocalypse Survival, Mandragora (November 13)

    That’s a wrap for November 2024. Want more Made with Unity and community news as it happens? Don’t forget to follow us on social media: Bluesky, X, Facebook, LinkedIn, Instagram, YouTube, or Twitch.
  • TECHCRUNCH.COM
    Windsurf slashes prices as competition with Cursor heats up
    AI coding assistant startup Windsurf cut its prices “across the board,” it announced on Monday, touting “massive savings” for its users as competition with its rival Cursor intensifies. Windsurf said it’s getting rid of its complex system of “flow action credits,” which charged developers for actions its AI did in the background. It’s also cutting prices for its team plans to $30 per user per month, down from $35, while making its enterprise plans “much cheaper,” per the announcement.

    Windsurf product marketer Rob Hou proclaimed on X that Windsurf now has “BY FAR the best and most affordable pricing structure of all AI coding tools on the market,” crediting this to Windsurf optimizing its GPU usage. Hou criticized “confusing” competitor plans priced at $20 a month in an apparent dig at Cursor’s individual monthly plan, which starts at $20 compared to Windsurf’s $15.

    The pricing overhaul comes as Windsurf is reportedly being considered for an acquisition by OpenAI for $3 billion (Cursor’s creator Anysphere is in talks to raise at a $10 billion valuation). As TechCrunch previously reported, Windsurf is the smaller of the two coding assistant startups, generating about $100 million in ARR compared to Cursor’s $300 million. OpenAI originally wanted to buy Cursor, but it’s growing so quickly that it’s not in the market to be sold.

    Although Windsurf hasn’t confirmed the OpenAI acquisition reports, it has recently stepped up its public collaborations with OpenAI. For example, Windsurf’s CEO Varun Mohan appeared earlier this month in OpenAI’s launch video for its latest API model family. And as part of the pricing change announcement, Windsurf is lavishing its users with another week of free and unlimited usage of OpenAI’s latest GPT-4.1 and o4-mini models.

    The big question is whether Cursor ends up cutting its own prices in response to Windsurf’s revamp. That might risk a price war, making it harder for both startups to scale up profitably. Windsurf, which declined to comment for this article, said in its announcement that it’s continuing to deliver on its promise “from the very beginning” to pass savings back to its users. Cursor creator Anysphere didn’t respond to a request for comment.
  • TECHCRUNCH.COM
    OpenAI seeks to make its upcoming open AI model best-in-class
    Toward the end of March, OpenAI announced its intention to release its first “open” language model since GPT‑2 sometime this year. Now details about that model are beginning to trickle out from the company’s sessions with the AI developer community.

    Sources tell TechCrunch that Aidan Clark, OpenAI’s VP of research, is leading development of the open model, which is in the very early stages. OpenAI is targeting an early summer release and aims to make the model — a reasoning model along the lines of OpenAI’s o-series models — benchmark-topping among other open reasoning models. OpenAI is exploring a highly permissive license for the model with few usage or commercial restrictions, per TechCrunch’s sources. Open models like Llama and Google’s Gemma have been criticized by some in the community for imposing onerous requirements — criticisms OpenAI is seemingly seeking to avoid.

    OpenAI is facing increasing pressure from rivals such as Chinese AI lab DeepSeek that have adopted an open approach to launching models. In contrast to OpenAI’s strategy, these “open” competitors make their models available to the AI community for experimentation and, in some cases, commercialization. It has proven to be a wildly successful strategy for some outfits. Meta, which has invested heavily in its Llama family of open AI models, said in early March that Llama has racked up over 1 billion downloads. Meanwhile, DeepSeek has quickly amassed a large worldwide user base and attracted the attention of domestic investors.

    Sources tell TechCrunch that OpenAI intends for its open model, which will be “text in, text out,” to run on high-end consumer hardware and possibly allow developers to switch its “reasoning” on and off, similar to reasoning models recently released by Anthropic and others. (Reasoning can improve accuracy, but at the cost of increased latency.) If the launch is well-received, OpenAI may follow it up with additional models — potentially including smaller models.

    In previous public comments, OpenAI CEO Sam Altman said he thinks OpenAI has been on the wrong side of history when it comes to open sourcing its technologies. “[I personally think we need to] figure out a different open source strategy,” Altman said during a Reddit Q&A in January. “Not everyone at OpenAI shares this view, and it’s also not our current highest priority … We will produce better models [going forward], but we will maintain less of a lead than we did in previous years.”

    Altman has also said that OpenAI’s upcoming open model will be thoroughly red-teamed and evaluated for safety. Sources tell TechCrunch that the company intends to release a model card for the model — a thorough technical report showing the results of OpenAI’s internal and external benchmarking and safety testing. “[B]efore release, we will evaluate this model according [to] our preparedness framework, like we would for any other model,” Altman said in a post on X last month. “[A]nd we will do extra work given that we know this model will be modified post-release.”

    OpenAI has raised the ire of some AI ethicists for reportedly rushing safety testing of recent models and failing to release model cards for others. Altman also stands accused of misleading OpenAI executives about model safety reviews prior to his brief ouster in November 2023. We’ve reached out to OpenAI for comment and will update this piece if we hear back.
  • VENTUREBEAT.COM
    Former DeepSeeker and collaborators release new method for training reliable AI agents: RAGEN
    2025 was, by many expert accounts, supposed to be the year of AI agents — task-specific AI implementations powered by leading large language and multimodal models (LLMs) like the kinds offered by OpenAI, Anthropic, Google, and DeepSeek. But so far, most AI agents remain stuck as experimental pilots in a kind of corporate purgatory, according to a recent poll conducted by VentureBeat on the social network X.

    Help may be on the way: a collaborative team from Northwestern University, Microsoft, Stanford, and the University of Washington — including a former DeepSeek researcher named Zihan Wang, currently completing a computer science PhD at Northwestern — has introduced RAGEN, a new system for training and evaluating AI agents that they hope makes them more reliable and less brittle for real-world, enterprise-grade usage.

    Unlike static tasks like math solving or code generation, RAGEN focuses on multi-turn, interactive settings where agents must adapt, remember, and reason in the face of uncertainty. Built on a custom RL framework called StarPO (State-Thinking-Actions-Reward Policy Optimization), the system explores how LLMs can learn through experience rather than memorization. The focus is on entire decision-making trajectories, not just one-step responses. StarPO operates in two interleaved phases: a rollout stage where the LLM generates complete interaction sequences guided by reasoning, and an update stage where the model is optimized using normalized cumulative rewards. This structure supports a more stable and interpretable learning loop compared to standard policy optimization approaches.

    The authors implemented and tested the framework using fine-tuned variants of Alibaba’s Qwen models, including Qwen 1.5 and Qwen 2.5. These models served as the base LLMs for all experiments and were chosen for their open weights and robust instruction-following capabilities. This decision enabled reproducibility and consistent baseline comparisons across symbolic tasks.
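    For intuition, here is a minimal, self-contained Python sketch of that two-phase loop: a rollout stage that collects complete trajectories, and an update stage that reinforces whole trajectories in proportion to batch-normalized cumulative rewards. The TwoArmedBandit and SoftmaxPolicy classes, the payout rates, and the hyperparameters below are toy assumptions for illustration, not the RAGEN codebase or an actual LLM.

```python
# Toy sketch of a StarPO-style rollout/update loop (illustrative assumptions, not RAGEN code).
import math
import random

class TwoArmedBandit:
    """Single-turn environment: two arms with different (assumed) reward rates."""
    def reset(self):
        return "choose: dragon or phoenix"

    def step(self, action):
        p_win = 0.7 if action == "dragon" else 0.4   # assumed payout probabilities
        reward = 1.0 if random.random() < p_win else 0.0
        return "done", reward, True                   # (next_state, reward, done)

class SoftmaxPolicy:
    """Tiny stand-in for the LLM policy: one preference score per action."""
    def __init__(self, actions, lr=0.1):
        self.prefs = {a: 0.0 for a in actions}
        self.lr = lr

    def act(self, state):
        # Sample an action with probability proportional to exp(preference).
        z = sum(math.exp(v) for v in self.prefs.values())
        r, acc = random.random(), 0.0
        for action, v in self.prefs.items():
            acc += math.exp(v) / z
            if r <= acc:
                return action
        return action

    def update(self, trajectories, advantages):
        # Reinforce actions from high-advantage trajectories, discourage the rest.
        for traj, adv in zip(trajectories, advantages):
            for _, action, _ in traj:
                self.prefs[action] += self.lr * adv

def rollout(policy, env, max_turns=5):
    """Rollout stage: generate one complete interaction sequence."""
    state, trajectory = env.reset(), []
    for _ in range(max_turns):
        action = policy.act(state)
        state, reward, done = env.step(action)
        trajectory.append((state, action, reward))
        if done:
            break
    return trajectory

def normalized_returns(trajectories):
    """Update signal: cumulative reward per trajectory, normalized over the batch."""
    returns = [sum(r for _, _, r in t) for t in trajectories]
    mean = sum(returns) / len(returns)
    std = (sum((g - mean) ** 2 for g in returns) / len(returns)) ** 0.5 or 1.0
    return [(g - mean) / std for g in returns]

if __name__ == "__main__":
    env, policy = TwoArmedBandit(), SoftmaxPolicy(["dragon", "phoenix"])
    for _ in range(200):                               # interleaved rollout/update phases
        batch = [rollout(policy, env) for _ in range(8)]
        policy.update(batch, normalized_returns(batch))
    print(policy.prefs)                                # "dragon" should end up preferred
```

    Because the reward signal is attached to entire trajectories rather than individual steps, the same loop shape extends to multi-turn settings, where credit has to flow back through several decisions.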
    Here’s how they did it and what they found. Wang summarized the core challenge in a widely shared X thread: Why does your RL training always collapse? According to the team, LLM agents initially generate symbolic, well-reasoned responses. But over time, RL systems tend to reward shortcuts, leading to repetitive behaviors that degrade overall performance—a pattern they call the “Echo Trap.” This regression is driven by feedback loops where certain phrases or strategies earn high rewards early on, encouraging overuse and stifling exploration. Wang notes that the symptoms are measurable: reward variance cliffs, gradient spikes, and disappearing reasoning traces.

    RAGEN test environments aren’t exactly enterprise-grade
    To study these behaviors in a controlled setting, RAGEN evaluates agents across three symbolic environments:
    - Bandit: a single-turn, stochastic task that tests symbolic risk-reward reasoning.
    - Sokoban: a multi-turn, deterministic puzzle involving irreversible decisions.
    - Frozen Lake: a stochastic, multi-turn task requiring adaptive planning.
    Each environment is designed to minimize real-world priors and focus solely on decision-making strategies developed during training. In the Bandit environment, for instance, agents are told that Dragon and Phoenix arms represent different reward distributions. Rather than being told the probabilities directly, they must reason symbolically—e.g., interpreting Dragon as “strength” and Phoenix as “hope”—to predict outcomes. This kind of setup pressures the model to generate explainable, analogical reasoning.

    Stabilizing reinforcement learning with StarPO-S
    To address training collapse, the researchers introduced StarPO-S, a stabilized version of the original framework. StarPO-S incorporates three key interventions:
    - Uncertainty-based rollout filtering: prioritizing rollouts where the agent shows outcome uncertainty.
    - KL penalty removal: allowing the model to deviate more freely from its original policy and explore new behaviors.
    - Asymmetric PPO clipping: amplifying high-reward trajectories more than low-reward ones to boost learning (see the sketch below).
    These changes delay or eliminate training collapse and improve performance across all three tasks. As Wang put it: “StarPO-S… works across all 3 tasks. Relieves collapse. Better reward.”
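    To make the asymmetric-clipping idea concrete, here is a minimal Python sketch of a PPO-style clipped surrogate whose upper clip range is wider than its lower one, so positive-advantage (high-reward) trajectories can push the policy further than negative-advantage ones pull it back. The function name, epsilon values, and exact form of the loss are illustrative assumptions rather than the StarPO-S implementation.

```python
# Illustrative asymmetric PPO-style clipping (assumed form, not StarPO-S source code).
def asymmetric_clip_loss(ratios, advantages, clip_low=0.2, clip_high=0.28):
    """Mean clipped surrogate loss; the upper bound (1 + clip_high) is looser than
    the lower bound (1 - clip_low), so high-advantage updates are amplified more."""
    losses = []
    for ratio, adv in zip(ratios, advantages):
        unclipped = ratio * adv
        clipped = max(1.0 - clip_low, min(ratio, 1.0 + clip_high)) * adv
        losses.append(-min(unclipped, clipped))  # negate: we minimize this loss
    return sum(losses) / len(losses)

# A positive-advantage sample keeps its gradient up to a probability ratio of 1.28,
# while pushes against negative-advantage samples stop once the ratio falls below 0.8.
print(asymmetric_clip_loss(ratios=[1.3, 0.7], advantages=[2.0, -1.0]))
```

    With symmetric PPO, clip_high would equal clip_low; widening only the upper range is one simple way to bias the learning signal toward the trajectories that earned high reward.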
    What makes for a good agentic AI model?
    The success of RL training hinges not just on architecture, but on the quality of the data generated by the agents themselves. The team identified three dimensions that significantly impact training:
    - Task diversity: exposing the model to a wide range of initial scenarios improves generalization.
    - Interaction granularity: allowing multiple actions per turn enables more meaningful planning.
    - Rollout freshness: keeping training data aligned with the current model policy avoids outdated learning signals.
    Together, these factors make the training process more stable and effective. An interactive demo site published by the researchers on GitHub makes this explicit, visualizing agent rollouts as full dialogue turns—including not just actions, but the step-by-step thought process that preceded them. For example, in solving a math problem, an agent may first ‘think’ about isolating a variable, then submit an answer like ‘x = 5’. These intermediate thoughts are visible and traceable, which adds transparency into how agents arrive at decisions.

    When reasoning runs out
    While explicit reasoning improves performance in simple, single-turn tasks like Bandit, it tends to decay during multi-turn training. Despite the use of structured prompts and reasoning tokens, reasoning traces often shrink or vanish unless directly rewarded. This points to a limitation in how rewards are typically designed: focusing on task completion may neglect the quality of the process behind it. The team experimented with format-based penalties to encourage better-structured reasoning, but acknowledges that more refined reward shaping is likely needed.

    RAGEN, along with its StarPO and StarPO-S frameworks, is now available as an open-source project at https://github.com/RAGEN-AI/RAGEN. However, no explicit license is listed in the GitHub repository at the time of writing, which may limit use or redistribution by others. The system provides a valuable foundation for those interested in developing AI agents that do more than complete tasks—they think, plan, and evolve. As AI continues to move toward autonomy, projects like RAGEN help illuminate what it takes to train models that learn not just from data, but from the consequences of their own actions.

    Outstanding Questions for Real-World Adoption
    While the RAGEN paper offers a detailed technical roadmap, several practical questions remain for those looking to apply these methods in enterprise settings. For example, how transferable is RAGEN’s approach beyond stylized, symbolic tasks? Would businesses need to design entirely new environments and reward functions to use this system in workflows like invoice processing or customer support? Another critical area is scalability. Even with the enhancements provided by StarPO-S, the paper acknowledges that training still eventually collapses over longer horizons. This raises the question: is there a theoretical or practical path to sustaining reasoning over open-ended or continuously evolving task sequences?

    To explore these and other questions—including how non-technical decision-makers should interpret RAGEN’s implications—I reached out to co-author Wang for further insight. At the time of writing, a response is pending. Should any comments arrive, they will be included in a follow-up to this article or integrated as an update.

    RAGEN stands out not just as a technical contribution but as a conceptual step toward more autonomous, reasoning-capable AI agents. Whether it becomes part of the enterprise AI stack remains to be seen, but its insights into agent learning dynamics are already helping redefine the frontier of LLM training.
  • VENTUREBEAT.COM
    The Political Machine 2024 update includes tariffs, new demographics and more
    The Political Machine is one of those games that never has to end, given that its fuel is interest in U.S. presidential politics.
  • VENTUREBEAT.COM
    Google adds more AI tools to its Workspace productivity apps
    Google expanded Gemini's features, adding the popular podcast-style feature Audio Overviews to the platform.
  • WWW.THEVERGE.COM
    Nvidia’s AI assistant on Windows now has plugins for Spotify, Twitch, and more
    Nvidia is updating its G-Assist AI assistant on Windows to take it beyond optimizing game and system settings. G-Assist originally launched last month as a chatbot primarily focused on improving PC gaming, but it’s now getting plugin support so you can extend the AI assistant to control Spotify, check if a streamer is live on Twitch, and look at stock or weather updates.

    The new ChatGPT-based G-Assist plugin builder lets developers and enthusiasts create custom functionality for Nvidia’s AI assistant. G-Assist will be able to connect to external tools and use APIs to expand the capabilities of what Nvidia offers right now. Nvidia has published sample plugins on GitHub that can be compiled and used by G-Assist:
    - Spotify — hands-free music and volume control
    - Google Gemini — allows G-Assist to invoke Gemini for cloud-based complex conversations
    - Twitch — you can use this plugin to check if a streamer is live with voice commands like, “Hey, Twitch, is [streamer] live?”
    - Peripheral Controls — change RGB lighting or fan speed on Logitech G, Corsair, MSI and Nanoleaf devices
    - Stock Checker — provides real-time stock prices
    - Weather Updates — provides current weather conditions in any city

    These plugins all run locally using a small language model on Nvidia’s RTX GPUs, and developers will also be able to share their own custom plugins through GitHub. G-Assist uses a local small language model that requires nearly 10GB of space for the assistant functions and voice capabilities. The AI assistant works on a variety of RTX 30-, 40-, and 50-series desktop GPUs, but you’ll need a card with at least 12GB of VRAM. If you’re interested in trying out G-Assist or building a plugin, the app is available as an optional part of Nvidia’s main app for Windows.
  • An Existential Crisis of a Veteran Researcher in the Age of Generative AI
    I was a researcher fifteen years ago: a PhD candidate doing research for long days. I was swamped with articles, annotations, emails, bookmarks, and more. When I found a citation manager tool, Mendeley, I felt so relaxed. It was like I had control over the process again. When I found a bookmark manager, XBookmark, I felt so productive (I still have the bookmarks). They worked well for me at the time, and I finished my PhD program and got my degree.

    Episode 1 — Facing the Reality
    These days, I am having an existential crisis. I see how far AI research assistant tools have progressed. I was working with Scinito the other day, and I was shocked. I tried to convince myself that it is only a tool to help with the literature review. Those who have done a PhD know how tough it is to do a solid literature review. It is not a joke. You have to read over 100 articles, categorize them, understand them, and summarize them. If I say it took 3-6 months to do a solid literature review 15 years ago, I would not be wrong. You heard me right: 3-6 months of your valuable lifetime.

    At first, I tried to convince myself that Scinito, or other similar tools, offer only marginal value to researchers. Sadly, or happily, I was wrong … Not only can they do a literature review for you in a minute (I am sorry, my fellows, you heard it right), but they can also peer review your articles. I will never forget how long I had to wait for my mentors and advisors to review my article, and how many back-and-forth emails we exchanged until things reached an acceptable quality. Even after all these efforts, you get extensive feedback from peer reviewers at a journal before they consider your article for publication. Or you get rejected after 3 or 6 months simply because you chose the wrong journal to publish in. These AI research assistant tools can enhance all of these steps: reviewing your articles and selecting the most relevant journal for you.

    Amazing. It is indeed amazing for researchers in this era, but I feel sad to see how much time I spent on something that could have been done much more easily and much faster. The interesting part is that this is not the end of the season. It is just the beginning.

    This challenge is not just for researchers; it is also relevant to software developers. Tools like the Cursor IDE have dramatically transformed how we build software. After my PhD, I started my career in engineering, so I did lots of coding, testing, and so on. Today, I don’t need to read Stack Overflow to debug my code. I don’t need to spend time writing tests for my code. I no longer need to be an expert in React, HTML, or CSS to build a website. How much time did I spend on building sites in the past? I don’t want to think about it!

    Episode 2 — Embracing the Reality
    Let me share the glass-half-full side of the experience as well. This is super cool. I can ask AI research assistant tools to perform a semantic search within an extensive database, something we couldn’t have done before; it used to be just keyword matching. I can get up to date on any topic or research question in hours by reading a literature review that AI generates in seconds. I can write LaTeX code easily. I can reformat my paper to any guideline in minutes.

    I am happy for the researchers of our time. They can spend more time on creativity, problem solving, and, of course, their valuable lives rather than on unnecessary, time-consuming tasks.

    I am also happy for myself. I can write code in any programming language that I want. I can build websites without getting locked into Wix or WordPress. I can write any Python code that I need. I can optimize it and write a series of tests for it. Wow! It is super cool. The landscapes of coding, designing, researching, and everything else are evolving fast. No matter how much people or organizations resist, technology will find its way.

    There is a catch here. The promise of building a website with one (and only one) prompt is not correct. I am saying this based on very recent experience. These days, I am working on a new website with a colleague; both of us are experts in software and AI. We didn’t even think about Wix or WordPress this time. We started using Cursor and its Agent experience with Claude 3.7 Sonnet. Cursor’s agent can generate the website structure in a second, but it fails when it comes to details. For example, when you want to align two different texts with each other, especially when one of them is static and one of them is dynamic, the AI can’t do it right. Basically, AI can build the website structure in a second, but it can’t handle the required details (the details that you, as a human, want to apply on top of the prebuilt structure) as well as a UI design expert. That means that even though we don’t need to be experts in React or CSS, we must know the basics to intervene in the codebase when needed. Plus, we must know the concepts well enough to elaborate on them. If you can’t say it, AI can’t make it!

    I am not shocked by this weakness of AI models. They are built on the “wisdom of crowds” principle: they aggregate what’s most common rather than emulating the intuition of an individual expert. This is rooted in their fundamentals. They are amazing at generality but suffer at specificity. In this short podcast, I explained a similar concept from a different angle: “The Erosion of Specificity.”

    Last Words
    I have been lucky to be part of the AI community. I am an AI architect with a solid plan to embrace this technology shift. But I am concerned about many other folks who can’t manage this change. It is not easy at all. If you have had an existential moment in your career due to AI, let me know. I may know something that helps. If I could share one tip here, it would be to “Learn the fundamentals, deeply.” You can (and should) leave the repetitive, high-level, and generic tasks to AI, and spend your human creativity and expertise on the details that make your work/product shine.

    Follow me on YouTube if you want to hear more stories from the perspective of an AI architect: youtube.com/@AIUnorthodoxLessons/

    The post An Existential Crisis of a Veteran Researcher in the Age of Generative AI appeared first on Towards Data Science.