• Experts Are Unraveling the Mysteries of This Breathtaking 2,000-Year-Old Mosaic Depicting Alexander the Great in Battle
    www.smithsonianmag.com
    The mosaic depicts Alexander the Great fighting in the Battle of Issus. Marco Cantile / LightRocket via Getty Images
    A famous Roman mosaic depicting Alexander the Great is revealing new insights into antiquity. As part of an ongoing restoration, researchers have learned that the artwork's stones came from quarries across Europe and North Africa. The 2,000-year-old mosaic comes from the ruins of Pompeii, the ancient Roman city that was buried in volcanic ash in 79 C.E. after Mount Vesuvius' eruption. Archaeologists found the artwork in the floor of an extravagant mansion in Pompeii known as the House of the Faun in 1831. About a decade later, it was moved to the National Archaeological Museum of Naples, where it's been housed ever since. Per the museum, the fragile mosaic has been undergoing a long, complex conservation process since 2020. The first phase focused on examining it using noninvasive methods, including videomicroscopy, infrared thermography and portable X-ray fluorescence. According to a study published this week in the journal PLOS One, researchers identified ten colors of tesserae in the mosaic, including shades of red, yellow, green, blue, pink, white, black, gray and brown, as well as a variety of micro-textures that were masterfully combined to enhance artistic effects.
    The mosaic was found in a mansion encased in ash in Pompeii. National Archaeological Museum of Naples
    The mosaic depicts a battle: Surrounded by a mess of fighting cavalry, Alexander wields a long spear. Opposite him is another leader often identified as the Persian king Darius III. The mosaic probably depicts the Battle of Issus in 333 B.C.E., in which Alexander faced off against the Persian leader and emerged victorious. From Persia, Alexander continued conquering eastward. By the end of his life in 323 B.C.E., the Macedonian king had secured an empire that stretched from the Mediterranean to modern-day Pakistan. "The Alexander Mosaic is one of the most impressive artworks of antiquity by any standard and the most important mosaic of the Roman age," write the researchers in the study. "The image of Alexander depicted in the central scene of the mosaic is perhaps the most iconic and well-known representation of his face in ancient art."
    The researchers used multispectral imaging to examine the mosaic. Balassone G, Cappelletti P, De Bonis A, De Simone A, Di Martire D, et al.
    Mosaics were a flourishing art form in the Roman Empire, where artisans pioneered the incorporation of tesserae (cubes of stone, ceramic and glass). Today, they are among the best-preserved pieces of Roman art. The researchers found that the mosaic's creators paid particular attention to Alexander's face, according to Live Science's Laura Geggel. His visage is made up of several different hues of pink tesserae, each with their own luminescence effects, per the study. This variation is probably related to the stones' unique chemical compositions. On the mosaic's surface, experts discovered natural wax and gypsum, which were probably left over from previous conservation efforts. The researchers sorted the mosaic's tesserae material into four key groups: vitreous (glass-like), calcium carbonate-based, silicate-based and a combination of the latter two. Based on similarities between the tesserae and mining areas around the Mediterranean region, they say the rocks could have come from Italy, Greece, the Iberian Peninsula and Tunisia.
    Researchers are currently analyzing and restoring the mosaic. National Archaeological Museum of Naples
    Some of the white tesserae resemble Marmor Lunensis, a marble extracted from quarries in the Apuan Alps in Italy, which Romans mined between the first century B.C.E. and the third century C.E. The pale pink stones may be Breccia Nuvolata marble, found all around the Mediterranean, while the darker pinks may be Marmo Rosa, which comes from Portugal. The restoration process is still ongoing. As the study authors write, "The combination of these new data, along with information obtained from a new instrumental investigation campaign planned for the mosaic surface in the final phases of the restoration operations, will further enrich our knowledge of this superlative work of ancient art."
  • DeepMind's new inference-time scaling technique improves planning accuracy in LLMs
    venturebeat.com
    With "Mind Evolution" LLMs can use search and genetic algorithms to generate and combine different solutions and find the optimal one.Read More
  • OpenAI Stargate is a $500B bet: America's AI Manhattan Project or costly dead end?
    venturebeat.com
    OpenAI, Oracle, SoftBank and MGX are investing a record amount in new AI infrastructure even as China's DeepSeek outperforms on cost.
  • Insomniac Games CEO and founder Ted Price to retire in March
    www.gamedeveloper.com
    Justin Carter, Contributing Editor, January 22, 2025, 2 Min Read. Image via PlayStation/Insomniac Games.
    At a Glance: Having felt 'incredibly fortunate' to lead Insomniac since its inception, Price is stepping down to let other devs lead it.
    Ted Price, longtime CEO of PlayStation studio Insomniac Games, is retiring from the games industry in March. In a statement, Price explained he "felt it was simply time to step aside and let others pave the way for [the] team." "Last week, I felt comfortable announcing to the Insomniac team that after having been incredibly fortunate to enjoy such a fulfilling career in games, I'll be departing," he added. Price founded Insomniac in 1994, when it was originally called Xstreme Software for its first year. After releasing its debut game, 1996's Disruptor, Insomniac went on to create several key franchises for the PlayStation brand, including Spyro the Dragon, Ratchet & Clank, and Marvel's Spider-Man. Sony fully acquired the studio for $229 million in 2019. His exit is the latest high-profile departure for PlayStation. Sony Interactive Entertainment president Jim Ryan retired after his own 30-year tenure in March 2024, and fellow executive Shuhei Yoshida officially left the company in January 2025.
    The future of Insomniac Games
    After Price leaves in March, Insomniac will be run by a trio of co-heads: CFO Jen Huang, brand/leadership head Ryan Schneider, and creative head Chad Dezern. All three have been with Insomniac for at least a decade. Price wrote that they were "intimately familiar with how we do things...and have earned people's trust." "For many years, Chad, Jen and Ryan have been instrumental in making Insomniac what we are today," Price continued. "They've consistently demonstrated the kind of collaboration and transparency that's part of our DNA. And just as important, their skillsets are truly complementary. [...] I'm confident that under the combined leadership of Chad, Jen and Ryan, Insomniac will continue to deliver the industry-defining games that players have come to expect from us while making a positive and lasting impact on people's lives for decades to come." Speaking to the studio's future, Price said Insomniac is "fully focused on building games for our fans" after a challenging 2024, when the studio laid off workers and suffered the leak of staff information and details on future projects after being hacked. Despite that, he said the company "is in one of the strongest positions we've experienced in years, with each game in development looking beautiful and playing fantastic." "I want to thank every Insomniac for having a positive and lasting impact on my life," concluded Price. "Working side by side with Insomniacs for so many years has been a gift that I'll cherish for the rest of my days. [...] Thank you to Insomniacs, to our players and to videogames for 30 wonderful years."
  • Samsung's S25 and S25 Plus offer more of the same
    www.theverge.com
    If the Galaxy S24 series heralded the triumphant arrival of Galaxy AI, then the S25 and S25 Plus may be a bit of a comedown: they promise more AI that's smarter and sometimes slightly faster. You'd better like it, because that's pretty much all you're gonna get. Samsung changed as little as it could on the Galaxy S25 and S25 Plus, announced today alongside the larger and redesigned Galaxy S25 Ultra. There's the obligatory jump to a new chipset (in this case, Qualcomm's custom-tuned Snapdragon 8 Elite for Galaxy, included in phones worldwide this time around) and a welcome decision to offer 12GB of RAM as standard on every S25 phone, pulling the base model in line with the others.
    Samsung hasn't changed the look of the Galaxy S25 and S25 Plus.
    The displays are the same as last year: 6.2 inches on the S25 and 6.7 inches on the S25 Plus, peaking at 2,600 nits of brightness and a 120Hz refresh rate. The cameras are identical, too. There's a 50-megapixel main camera, an ultrawide, and a 3x telephoto, with a familiar 12-megapixel selfie shooter on the front. If you were to upgrade from last year's Galaxy S24 Plus to this year's model, the only spec that would change is the chipset. Well, that and the fact that the new phones are Qi2 Ready: they don't have the magnets that Qi2 certification requires, but they'll charge at up to 15W on a Qi2 charger when paired with Samsung's official Qi2 Ready magnet cases.
    Samsung hasn't changed the camera hardware at all from the S24 and S24 Plus, though the thick black bezel is new. Both S25 phones are thinner than their predecessors.
    Perhaps I'm being a little unfair. Samsung hasn't increased its prices, at least: the S25 starts at $799.99 and the Plus model at $999.99, with preorders open now ahead of a full launch on February 7th. It's also maintaining its promise of seven generations of Android updates and seven years of security support. Both phones are lighter than their predecessors and almost half a millimeter thinner. That should ease the disappointment of anyone who's been hoping for the launch of the rumored S25 Slim, which is now tipped not to launch in the US at all. But it's still hard to avoid the inevitable conclusion: this year is a software update, not a hardware one. The new Galaxy phones are awash with AI-branded features, which Samsung says remain free to use this year, though its plans are unclear beyond that. Plenty of them have been here since last year, like Google's Circle to Search or generative photo editing tools that let you draw elements into photographs or remove distracting people and objects. Those now generate better results in less time, helped by improvements in AI models and the move to the Snapdragon 8 Elite, which handles more AI processing on-device, including previously cloud-based tasks like Generative Edit. Audio Eraser is a built-in tool for video editing that lets you remove or reduce video noise across specific categories (think voices, music, wind, crowds) to focus on whichever sounds you care about. It works well, but it's only new to Samsung: Google Pixel phones have been able to do the same thing through Audio Magic Eraser since the Pixel 8.
    AI Select replaces Smart Select in the Edge Panel menu.
    Other AI abilities are just as familiar, but we didn't always call them AI. Take AI Select, accessed from Samsung's Edge Panel, which gives suggested actions like cropping and sharing screenshots, creating GIFs from videos, or adding events to your calendar. It replaces Smart Select, which did most of that, too, but with a different design. The S25 phones also offer a daily summary called Now Brief that lets you know what's on your calendar for the day or how your commute looks, bringing us back full circle to 2012's Google Now. Meanwhile, the Now Bar is Samsung's answer to Apple's Dynamic Island: a lockscreen element that can show sports scores and Google Maps navigation instructions or tell you what song is playing. It sounds useful, but is it AI? Apple didn't think so.
    Gemini is now the default AI assistant on the phones. RIP Bixby.
    Some of the new features represent more meaningful progress. The phones' AI assistant (now based on Google Gemini by default, with Samsung's own Bixby relegated to access through its app) can control your phone with natural language requests. Ask it to make text bigger or find photos from your last holiday, and it should oblige. Gemini can now also work across multiple apps in a single interaction, though this upgrade isn't exclusive to Samsung. It might look up a good restaurant and share it with your friend, or pull up sports fixtures and add them to your calendar. The problem for me is that most of these features are hard to test in-depth when you're at a launch event using a phone that isn't yours, has few apps installed and no accounts signed in, and might have only been set up for the first time that morning. We'll have a better sense of how effective Samsung's new AI features are when we can actually use the S25 and S25 Plus for an extended run in our review. The problem for Samsung is that, until then, it's not clear what here should tempt anyone into upgrading. Many of these AI and software features are baked into One UI 7 itself and should roll out soon to owners of the S24 and older models. If the hardware's hardly changing, and the software's coming to your phone anyway, what's the incentive to upgrade? Yesterday, my colleague Allison Johnson wrote that Samsung needs to give us a reason to care about new phones every year. On the strength of the S25 and S25 Plus, I think it's fair to say that it hasn't.
    Photography by Dominic Preston / The Verge
  • The Samsung Galaxy S25 Ultra smooths out some sharp edges
    www.theverge.com
    The Galaxy S25 Ultra, announced today, sheds more of its Note roots this year with rounded corners and flat edges that align it more with the rest of the S series. It comes with Qualcomm's latest chipset, an upgraded ultrawide camera, and not much else, hardware-wise. With no price increase over last year's model (it starts at $1,299), it's a light refresh of Samsung's biggest phone, with a major emphasis on One UI 7.0's AI upgrades. Something about the shift from curved edges to flat sides makes the S25 Ultra look hefty in photos, like if the Cybertruck were a phone. But it's actually slightly smaller and lighter than last year's device, even with a bigger 6.9-inch screen thanks to slimmer bezels. It's equipped with a Snapdragon 8 Elite processor tuned for Galaxy devices (that's true for all S25-series phones sold in all regions, which hasn't been the case recently). And it still comes with one more strong spec: seven years of OS updates and security patches.
    Samsung rounded out the pointy, uncomfortable corners on the S24 Ultra and flattened the edges. Photo: Allison Johnson / The Verge
    There are some interesting things not on the Ultra this year, though. Bixby is no longer the default virtual assistant. It's still present, and you can summon it through its own app, but Google Gemini will answer when you long-press the wake button on the side of the phone. The included S Pen, another holdover from the Note era, gets a bit of a downgrade. It no longer supports Bluetooth, so the air gesture controls that previous versions offered are gone. The S25 Ultra's included S Pen is just a basic stylus, with no magic wand tricks up its sleeve. Bummer. Camera hardware is largely unchanged from the previous model, except for a new 50-megapixel ultrawide, replacing a 12-megapixel module. Samsung claims that an upgrade to the S25's algorithmic image processing has improved detail in zoomed images. On the video side, Samsung now offers a Galaxy Log profile along with a custom LUT.
    Gemini is the new default assistant. Photo: Allison Johnson / The Verge
    The most interesting changes are software-side in One UI 7.0. My colleague Dominic Preston has a good rundown of the new stuff as it also appears on the S25 and S25 Plus models. Unsurprisingly, it all has to do with AI, and much of it we were already familiar with thanks to the One UI 7.0 beta. But a couple of things made me sit up and pay attention. The first is the ability to use AI across apps to take action, like taking a picture of a flyer and having Gemini add the dates to your calendar and send your spouse an email about it. Maybe this doesn't sound like much, but some of us have to remember which day is crazy hair day at preschool, when conferences are, and the deadline for signing up for this season's soccer class. A little help would be nice. This will first work across Google Workspace and Samsung native apps, with the addition of WhatsApp and Spotify. The other thing I'm interested to see in action is suggested routines. In theory, the S25 phones will be able to notice if there are certain settings you tend to use at the same time every day or under certain conditions, like turning Bluetooth on every time you get in the car and turning it off when you get out. When it sees a pattern, it should be able to suggest a routine to take care of those actions for you automatically. You'll be able to customize the routine parameters to your liking, but you won't have to go through the tedious work of setting it up from scratch. That could be cool!
    Photos by Chris Welch / The Verge
    The thing is, this stuff isn't exclusive to the S25 Ultra or even the S25 series. Samsung smartphone product manager Blake Gaiser told me Samsung will bring its new AI features to older devices where possible. The company certainly seems committed to delivering those updates to older phones, but don't forget that they probably won't always be free. We'll find out soon enough whether this is the AI update that will finally deliver on the promise of AI on our smartphones; the Galaxy S25 Ultra and its S25 siblings ship on February 7th.
  • Meet EvaByte: An Open-Source 6.5B State-of-the-Art Tokenizer-Free Language Model Powered by EVA
    www.marktechpost.com
    Tokenization, the process of breaking text into smaller units, has long been a fundamental step in natural language processing (NLP). However, it presents several challenges. Tokenizer-based language models (LMs) often struggle with multilingual text, out-of-vocabulary (OOV) words, and inputs like typos, emojis, or mixed-code text. These issues can reduce model robustness and add complexity to preprocessing pipelines. Furthermore, tokenization often fails to adapt seamlessly to multimodal tasks, creating inefficiencies and complicating scalability. Addressing these limitations requires moving beyond token-based processing to a more universal and adaptable approach. University of Hong Kong researchers propose EvaByte, an open-source tokenizer-free language model designed to address these challenges. With 6.5 billion parameters, this byte-level model matches the performance of modern tokenizer-based LMs while requiring 5x less data and delivering 2x faster decoding speeds. EvaByte is powered by EVA, an efficient attention mechanism designed for scalability and performance. By processing raw bytes instead of relying on tokenization, EvaByte can handle diverse data formats, including text, images, and audio, with consistency and ease. This approach eliminates common tokenization issues, such as inconsistent subword splits and rigid encoding boundaries, making it a robust choice for multilingual and multimodal tasks. Additionally, its open-source framework invites collaboration and innovation, making cutting-edge NLP accessible to a wider community.
    Technical Details and Benefits
    EvaByte employs a byte-level processing strategy, using raw bytes as the fundamental units for training and inference. This design inherently supports all languages, symbols, and non-textual data without the need for specialized preprocessing. Its 6.5B parameter architecture strikes a balance between computational efficiency and high performance. Key benefits of EvaByte include:
    Data Efficiency: The model minimizes redundancy by operating at the byte level, achieving competitive results with significantly smaller datasets.
    Faster Decoding: EvaByte's streamlined architecture enhances inference speed, making it suitable for real-time applications.
    Multimodal Capabilities: Unlike traditional LMs, EvaByte extends naturally to multimodal tasks, allowing unified processing of diverse data types.
    Robustness: By eliminating tokenization, EvaByte handles a wide range of input formats consistently, improving reliability across applications.
    Results and Insights
    EvaByte's performance is notable. Despite using 5x less data, it achieves comparable results to leading tokenizer-based models on standard NLP benchmarks. Its ability to generalize across languages makes it particularly effective in multilingual scenarios, where it consistently outperforms traditional models. EvaByte also demonstrates strong performance in multimodal tasks like image captioning and audio-text integration, achieving competitive results without extensive fine-tuning. The open-source release includes pre-trained checkpoints, evaluation tools, and integration with Hugging Face, making it accessible for experimentation and development. Researchers and developers can leverage EvaByte for applications ranging from conversational agents to cross-modal information retrieval, benefiting from its efficiency and versatility.
    Conclusion
    EvaByte offers a thoughtful solution to the limitations of traditional tokenization, presenting a tokenizer-free architecture that combines efficiency, speed, and adaptability. By addressing long-standing challenges in NLP and multimodal processing, EvaByte sets a new standard for language models. Its open-source nature fosters collaboration and innovation, ensuring that advanced NLP capabilities are available to a broader audience. For those looking to explore cutting-edge NLP solutions, EvaByte represents a significant step forward in language understanding and generation. Check out the details and models on Hugging Face and the GitHub page. All credit for this research goes to the researchers of this project.
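    The central idea, byte-level (tokenizer-free) input, is easy to illustrate. The sketch below is not EvaByte's actual API; it is a minimal, assumed example of how any byte-level LM can treat raw UTF-8 bytes (a fixed 256-symbol vocabulary) as its input units, so multilingual text, emojis, and typos need no special preprocessing.
```python
# Illustrative sketch of byte-level, tokenizer-free input encoding.
# This is NOT EvaByte's API; it only shows that a byte-level LM consumes
# raw UTF-8 bytes (IDs in [0, 255]) instead of subword token IDs.

def encode_bytes(text: str) -> list[int]:
    """Map text to a sequence of byte IDs in [0, 255]."""
    return list(text.encode("utf-8"))

def decode_bytes(ids: list[int]) -> str:
    """Map byte IDs back to text (invalid sequences are replaced)."""
    return bytes(ids).decode("utf-8", errors="replace")

samples = ["hello", "naïve café", "東京", "🙂 typo-ish txt"]
for s in samples:
    ids = encode_bytes(s)
    # Every string round-trips through the same 256-symbol vocabulary.
    assert decode_bytes(ids) == s
    print(f"{s!r} -> {len(ids)} bytes, first few IDs: {ids[:8]}")
```
    Because the vocabulary is just the 256 possible byte values, there is no tokenizer to train, no out-of-vocabulary handling, and no language-specific preprocessing; the trade-off is longer input sequences, which is where efficient attention mechanisms like EVA come in.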
  • Google DeepMind Introduces Mind Evolution: Enhancing Natural Language Planning with Evolutionary Search in Large Language Models
    www.marktechpost.com
    Guiding LLMs to think more deeply about complex problems and to make effective use of inference-time computation can significantly enhance their problem-solving capabilities. Prior research has explored various strategies, including chain-of-thought reasoning, self-consistency, sequential revision with feedback, and search mechanisms guided by auxiliary verifiers or evaluators. Search-based methods, particularly when paired with solution evaluators, leverage additional computational resources to explore a broader set of solution candidates. Techniques like best-of-N and tree search harness this capability to increase the likelihood of identifying successful solutions by examining a more extensive solution space. Recent efforts have combined LLMs with evolutionary search for optimization tasks, such as numerical and combinatorial problems and natural language planning. Unlike earlier studies that required task formalization in structured spaces, these approaches evolve solutions directly in natural language, bypassing the need for expert knowledge in formalizing tasks. Evolutionary search has also been applied to prompt optimization and multi-agent system design, such as EvoAgent, which evolved agents for problem-solving. However, these approaches often achieved limited success; Mind Evolution, running on Gemini 1.5 Flash, demonstrates significant improvements on tasks like the TravelPlanner benchmark. Additionally, program-based evaluators integrated during evolutionary search provide reliable feedback to refine solutions, a technique widely adopted in code generation and response refinement across various domains. While learned feedback models or self-evaluators have been explored, they often suffer from noise and unreliability, presenting opportunities for future advancements.
    Researchers from Google DeepMind, UC San Diego, and the University of Alberta introduced Mind Evolution, an evolutionary search strategy designed to enhance inference-time computation for LLMs. Unlike previous methods like best-of-N or sequential refinement, Mind Evolution uses a genetic approach to iteratively generate, refine, and recombine candidate solutions in natural language. It avoids formalizing tasks by relying on a solution evaluator, enabling higher success rates in natural language planning tasks like TravelPlanner and Natural Plan. Mind Evolution achieved 95.6% success on TravelPlanner and introduced new benchmarks like StegPoet, showcasing its versatility across challenging, non-formalized domains.
    Mind Evolution integrates a genetic search approach with an LLM and customized prompts to efficiently address natural language planning tasks. It employs language-based genetic algorithms, where solutions are represented in natural language, enabling LLMs to facilitate key operations like crossover, mutation, and island reset. The process begins by generating initial solutions through LLM-driven prompts. Solutions are iteratively refined using a Refinement through Critical Conversation (RCC) process involving critic and author roles for evaluation and improvement. The framework incorporates Boltzmann tournament selection, cyclic migration between islands, and periodic island resets to sustain diversity and optimize solutions effectively.
    The experiments evaluate Mind Evolution on three natural language planning benchmarks: TravelPlanner, Trip Planning, and Meeting Planning, excluding Calendar Scheduling due to its simplicity. The primary model, Gemini 1.5 Flash, is used with specified hyperparameters, while a two-stage approach incorporates Gemini 1.5 Pro for unsolved cases, improving cost efficiency. Mind Evolution outperforms baselines, achieving over 95% success in TravelPlanner and Trip Planning and 85% in Meeting Planning, with near-perfect results using the two-stage approach. Metrics such as success rate, LLM calls, token usage, and API costs highlight the efficiency of Mind Evolution's evolutionary search strategy compared to the baselines.
    In conclusion, Mind Evolution introduces an evolutionary search strategy to enhance inference-time computation for complex natural language planning tasks, focusing on stochastic exploration and iterative refinement. Unlike methods relying on formal solvers, Mind Evolution leverages language models to generate, recombine, and refine candidate solutions, requiring only a solution evaluator. It outperforms strategies like best-of-N and sequential revision on benchmarks such as TravelPlanner, Natural Plan, and the newly introduced StegPoet. Controlling for inference costs, it achieves remarkable success, solving over 98% of problem instances in the TravelPlanner and Natural Plan benchmarks using Gemini 1.5 Pro, demonstrating its effectiveness without formal solver dependency. Check out the paper. All credit for this research goes to the researchers of this project.
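    To make the shape of such a loop concrete, here is a schematic Python sketch of language-based evolutionary search in the spirit of Mind Evolution; it is not DeepMind's implementation. The helpers llm_propose, llm_recombine, and llm_refine are hypothetical stand-ins for LLM prompts that draft, cross over, and revise natural-language plans, and evaluate stands for the programmatic solution evaluator returning a numeric score and textual feedback.
```python
import random

def evolve(task, llm_propose, llm_recombine, llm_refine, evaluate,
           population_size=8, generations=10):
    """Schematic evolutionary search over natural-language candidate plans."""
    # Initial population: independently sampled candidate solutions (plain text).
    population = [llm_propose(task) for _ in range(population_size)]
    best_plan, best_score = None, float("-inf")

    for _ in range(generations):
        # Score every candidate with the programmatic evaluator.
        scored = sorted(((evaluate(p)[0], p) for p in population),
                        key=lambda sp: sp[0], reverse=True)
        if scored[0][0] > best_score:
            best_score, best_plan = scored[0]

        # Selection: keep the top half of the population as parents.
        parents = [p for _, p in scored[: population_size // 2]]

        # Crossover + refinement: recombine two parents into a child plan,
        # then revise it using the evaluator's textual feedback
        # (a critic/author-style revision step).
        children = []
        while len(children) < population_size:
            a, b = random.sample(parents, 2)
            child = llm_recombine(task, a, b)
            _, feedback = evaluate(child)
            children.append(llm_refine(task, child, feedback))
        population = children

    return best_plan, best_score
```
    In the actual system, selection uses Boltzmann tournaments and the population is split across islands with cyclic migration and periodic resets; the sketch keeps only the generate-evaluate-recombine-refine skeleton.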
  • Data Scientists in the Age of AI Agents and AutoML
    towardsai.net
    Author(s): Edoardo De Nigris. Originally published on Towards AI.
    Uncomfortable reality: in the era of large language models (LLMs) and AutoML, traditional skills like Python scripting, SQL, and building predictive models are no longer enough for data scientists to remain competitive in the market.
    Generated with DALL-E 3
    Are we cooked? It depends. In this article I will give my two cents on what I think is useful to focus on to be a strong candidate from 2025 onward. Coding skills remain important, but the real value of data scientists today is shifting. It's less about just building models and more about how those models fit into scalable, business-critical systems, usually in the cloud. The role of a data scientist is changing so fast that schools often can't keep up. Universities still mostly focus on things like EDA, data cleaning, and building/fine-tuning models. These are important, but they're just a small part of what companies actually need now. Why? Because the job isn't just about coding in notebooks anymore; it's about building end-to-end solutions that actually work in the real world.
    Why?
    We've reached a point where we have tons of pre-trained models; often there's no need to reinvent everything from scratch, and we can just work at a higher level of abstraction.
    AI agents are becoming a thing.
    AutoML and other low-code platforms are making coding skills less critical.
    In this scenario I believe a data scientist has to differentiate him/herself and is required to master the entire lifecycle of the data: from building data pipelines, to building and optimizing model training, mastering containers/orchestrators, deployment and beyond. Simply put, focusing solely on data analysis, coding or modeling will no longer cut it for most corporate jobs. What to do then? My personal opinion: it's more important than ever to be an end-to-end data scientist. Yes, I know, the bar is getting higher; the era of scripting and modeling in Jupyter notebooks alone is over. Data roles will be less focused on coding and more on having a general understanding of the whole data infrastructure and the business. As an analogy, think of it like running a restaurant. The data scientist is the chef: they're in charge of the big, high-impact decisions, like creating the menu, choosing the ingredients, and designing the vibe of the place. Meanwhile, AI agents (or AutoML) are like the kitchen assistants, waiters, and cashiers: they handle the repetitive, routine coding tasks to keep everything running smoothly. The chef's job is to focus on the creative and strategic work that makes the restaurant stand out, while the AI takes care of the rest. In this regard, I believe the future of data science belongs to those:
    who can connect the dots and deliver results across the entire data lifecycle;
    who have strong business acumen and deliver solutions that are either widely used or that drive revenue / cut costs.
    Let's dig into it. I think a competitive data professional in 2025 must possess a comprehensive understanding of the entire data lifecycle, without necessarily needing to be super good at coding per se. These are instead some of the skills that I would strongly master:
    Theoretical foundation: A strong grasp of concepts like exploratory data analysis (EDA), data preprocessing, and training/fine-tuning/testing practices for ML models remains essential. You have to understand data, how to extract value from it and how to monitor model performance.
    Programming expertise: Medium/high proficiency in Python and SQL is enough. These two languages cover most data science workflows. Additionally, languages like DAX can be helpful for specific use cases involving data models and dashboards. The emphasis is not so much on producing code as on understanding and customizing it.
    Model deployment: The ability to build applications that operationalize models, such as Flask or Django apps, is increasingly vital (a minimal sketch follows at the end of this piece). This also calls for a basic understanding of HTML to create simple frontends, as well as of hosting applications on cloud services like Google Cloud Run or Heroku. This creates a massive advantage when you want to quickly create an MVP that stakeholders can work with immediately.
    Containerization and orchestration: Familiarity with Docker, containers, Airflow/Kubeflow and Kubernetes ensures consistency and scalability across different environments.
    Cloud platforms: Expertise in at least one major cloud provider (e.g., AWS, Google Cloud, or Azure) is essential. For example, in the Google Cloud ecosystem, understanding how the different tools interact with each other (BigQuery, Cloud Storage, Cloud Build, Cloud Run, Vertex AI, Container Registry, and orchestrators such as Composer, Airflow or Kubeflow) is increasingly indispensable.
    CI/CD practices: Yes, you also need to be decent at software development. At minimum, know the best practices of continuous integration and delivery (CI/CD): GitHub for version control, YAML files for build automation, and so on.
    Post-deployment monitoring and maintenance: Managing deployed models includes monitoring for data drift, model performance issues, and operational errors, as well as performing A/B testing on your different models. Tools like Google Cloud Monitoring, logging frameworks, and artifact management systems are essential for maintaining reliability and transparency.
    Understanding data models and feature stores: The biggest lie that has been told to students and young practitioners is that datasets and features are already there to be analyzed. In reality you spend most of the time actually building them from scratch, in a way that is reusable in the future and/or by other teams in your company.
    And also, the most underrated skill: business acumen.
    Knowing how to communicate with non-technical people is one of the most valuable skills. You must be able to explain complex things simply without dumbing them down.
    Business understanding of the data you are working with is what drives ultimate value, and it is hard to replace with AI.
    Project management skills: understanding how quickly to iterate on data projects, from an MVP to a final product.
    The ability to evaluate costs for projects coming from third-party consulting companies.
    This holistic approach aligns closely with the principles of MLOps (Machine Learning Operations), a practice that combines machine learning with software engineering and DevOps to ensure scalable, maintainable, and efficient workflows. While some might argue that data scientists focus primarily on models in Jupyter notebooks, data engineers manage tables and data pipelines, cloud architects handle infrastructure, and machine learning engineers specialize in building and optimizing pipelines, these roles are increasingly overlapping. In my opinion, the boundaries between them will continue to blur as businesses prioritize end-to-end solutions and cross-functional expertise. Thank you for your time, I am curious to hear your opinions in the comments!
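    As promised above, here is a minimal illustration of the model deployment point: a hedged sketch of wrapping a trained model in a small Flask service. The file name model.pkl and the JSON payload shape are assumptions made for the example, not something the article prescribes.
```python
# Minimal Flask sketch for serving a trained model. Assumes an already
# pickled estimator saved as "model.pkl" (an assumption for illustration)
# that exposes a scikit-learn-style predict() method.
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    # Expected payload (assumed): {"features": [[5.1, 3.5, 1.4, 0.2], ...]}
    features = request.get_json()["features"]
    prediction = model.predict(features).tolist()
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    # For an MVP you might run this locally, then containerize it and deploy
    # it to a managed service such as Cloud Run or Heroku, as suggested above.
    app.run(host="0.0.0.0", port=8080)
```
    From here, a Dockerfile and a small CI pipeline cover the containerization and CI/CD points in the same list.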
  • Nosferatu Is Now Available To Preorder on 4K UHD and Blu-Ray, Releasing February 18
    www.ign.com
    Physical media and horror fans, rejoice! Robert Eggers' gothic horror Nosferatu is now available to preorder on 4K UHD and Blu-ray. There's a standard version of the film available to preorder now, which will set you back $27.95, and a fancy limited edition steelbook for $40.35. Both are set to release this year on February 18, so you can enjoy a spooky night in, in just a few weeks' time.
    Nosferatu 4K Steelbook Preorders (out February 18, 2025): Nosferatu (Steelbook) (4K Ultra HD + Blu-ray + Digital Copy), $40.35 at Walmart; sold out at Amazon and Gruv.
    At the moment, both Gruv and Amazon are sold out of the steelbook, but Walmart still has stock available. This steelbook features a 4K UHD version of the film, a Blu-ray, and a digital copy tucked away in a sleek steelbook case with Bill Skarsgård's Count Orlok on the cover and Lily-Rose Depp's Ellen on the back. It also comes with an extended cut of the film alongside the theatrical cut.
    Nosferatu 4K UHD, Blu-ray, and Digital Preorder (out February 18, 2025): Nosferatu (4K Ultra HD + Blu-ray + Digital).
    Similar to the steelbook, the standard release comes with a 4K UHD version of the film, a Blu-ray, and a digital copy. You'll also get the extended cut of the film alongside the theatrical cut.
    Nosferatu 4K UHD and Blu-ray Bonus Features
    Both the steelbook and standard 4K release come with a few bonus features, including:
    Deleted Scenes
    Nosferatu: A Modern Masterpiece
    Feature Commentary with Writer/Director Robert Eggers
    We had high praise for Nosferatu in our 9/10 review. Writer Siddhant Adlakha called it "Robert Eggers' finest work, given how it both boldly stands on its own as a gothic vampire drama and astutely taps into the original texts: F.W. Murnau's silent classic and Bram Stoker's novel Dracula." If you're curious about where you can stream Nosferatu instead, have a look at our Nosferatu streaming guide for additional information. And to see even more physical media releasing soon, check out our roundup of upcoming 4K UHD and Blu-ray release dates.
    Hannah Hoolihan is a freelancer who writes with the guides and commerce teams here at IGN.