-
WWW.MARKTECHPOST.COM
OpenAI Launches gpt-image-1 API: Bringing High-Quality Image Generation to Developers

OpenAI has officially announced the release of its image generation API, powered by the gpt-image-1 model. This launch brings the multimodal capabilities of ChatGPT into the hands of developers, enabling programmatic access to image generation—an essential step for building intelligent design tools, creative applications, and multimodal agent systems. The new API supports high-quality image synthesis from natural language prompts, marking a significant integration point for generative AI workflows in production environments. Available starting today, developers can now directly interact with the same image generation model that powers ChatGPT's image creation capabilities.

Expanding the Capabilities of ChatGPT to Developers

The gpt-image-1 model is now available through the OpenAI platform, allowing developers to generate photorealistic, artistic, or highly stylized images using plain text. This follows a phased rollout of image generation features in the ChatGPT product interface and marks a critical transition toward API-first deployment. The image generation endpoint supports parameters such as:

Prompt: Natural language description of the desired image.
Size: Standard resolution settings (e.g., 1024×1024).
n: Number of images to generate per prompt.
Response format: Choose between base64-encoded images or URLs.
Style: Optionally specify image aesthetics (e.g., "vivid" or "natural").

The API follows a synchronous usage model, which means developers receive the generated image(s) in the same response—ideal for real-time interfaces like chatbots or design platforms.

Technical Overview of the API and gpt-image-1 Model

OpenAI has not yet released full architectural details about gpt-image-1, but based on public documentation, the model supports robust prompt adherence, detailed composition, and stylistic coherence across diverse image types. While it is distinct from DALL·E 3 in naming, the image quality and alignment suggest continuity in OpenAI's image generation research lineage. The API is designed to be stateless and easy to integrate:

from openai import OpenAI
import base64

client = OpenAI()

prompt = """
A children's book drawing of a veterinarian using a stethoscope to listen to the heartbeat of a baby otter.
"""

result = client.images.generate(
    model="gpt-image-1",
    prompt=prompt
)

image_base64 = result.data[0].b64_json
image_bytes = base64.b64decode(image_base64)

# Save the image to a file
with open("otter.png", "wb") as f:
    f.write(image_bytes)
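For reference, here is a minimal sketch of how the parameters listed above could be combined in a single call. It assumes the size and n arguments behave as documented for OpenAI's Images API; the exact option names and values supported by gpt-image-1 may differ, so treat this as an illustration rather than a definitive recipe.

from openai import OpenAI
import base64

client = OpenAI()

# Illustrative only: request two square images for the same prompt.
result = client.images.generate(
    model="gpt-image-1",
    prompt="A watercolor map of an imaginary coastal city at dawn",
    size="1024x1024",  # standard square resolution (assumed supported)
    n=2,               # number of images to generate per prompt
)

# The response is expected to carry base64-encoded image data.
for i, image in enumerate(result.data):
    with open(f"city_{i}.png", "wb") as f:
        f.write(base64.b64decode(image.b64_json))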
Unlocking Developer Use Cases

By making this API available, OpenAI positions gpt-image-1 as a fundamental building block for multimodal AI development. Some key applications include:

Generative Design Tools: Seamlessly integrate prompt-based image creation into design software for artists, marketers, and product teams.
AI Assistants and Agents: Extend LLMs with visual generation capabilities to support richer user interaction and content composition.
Prototyping for Games and XR: Rapidly generate environments, textures, or concept art for iterative development pipelines.
Educational Visualizations: Generate scientific diagrams, historical reconstructions, or data illustrations on demand.

With image generation now programmable, these use cases can be scaled, personalized, and embedded directly into user-facing platforms.

Content Moderation and Responsible Use

Safety remains a core consideration. OpenAI has implemented content filtering layers and safety classifiers around the gpt-image-1 model to mitigate the risks of generating harmful, misleading, or policy-violating images. The model is subject to the same usage policies as OpenAI's text-based models, with automated moderation for prompts and generated content. Developers are encouraged to follow best practices for end-user input validation and maintain transparency in applications that include generative visual content.

Conclusion

The release of gpt-image-1 to the API marks a pivotal step in making generative vision models accessible, controllable, and production-ready. It's not just a model—it's an interface to imagination, grounded in structured, repeatable, and scalable computation. For developers building the next generation of creative software, autonomous agents, or visual storytelling tools, gpt-image-1 offers a robust foundation to bring language and imagery together in code.
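To complement the input-validation guidance above, here is a rough sketch of pre-screening user prompts with OpenAI's Moderations endpoint before forwarding them to image generation. The moderation model name and the choice to reject flagged prompts outright are assumptions made for illustration, not details from the announcement.

from openai import OpenAI

client = OpenAI()

def generate_image_safely(user_prompt: str):
    # Assumption: screen the raw user prompt with the Moderations endpoint first.
    check = client.moderations.create(
        model="omni-moderation-latest",  # assumed model name; use whatever your account offers
        input=user_prompt,
    )
    if check.results[0].flagged:
        # Reject and ask the user to rephrase rather than sending a flagged prompt onward.
        raise ValueError("Prompt rejected by moderation.")

    # Only prompts that pass the screen reach gpt-image-1.
    return client.images.generate(model="gpt-image-1", prompt=user_prompt)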
-
TOWARDSAI.NET
Exploring Deep Learning Models: Comparing ANN vs CNN for Image Recognition
April 24, 2025 | Author(s): SETIA BUDI SUMANDRA | Originally published on Towards AI.

Have you ever wondered how well Artificial Neural Networks (ANN) perform compared to Convolutional Neural Networks (CNN) in classifying images? In this article, I'll walk you through a hands-on project where we train both ANN and CNN models using the CIFAR-10 dataset and build a fun interactive prediction UI that lets you upload your own images and see how each model performs in real time. Let's break it all down line by line, block by block, with clear explanations.

import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import cifar10
import ipywidgets as widgets
from ipywidgets import Layout
from IPython.display import display, clear_output
from PIL import Image
import io, os
from collections import OrderedDict

This loads the essential packages:
Data manipulation: numpy
Visualization: matplotlib & seaborn
Dataset and building/training models: tensorflow.keras
UI interaction: ipywidgets, IPython.display
Image processing: PIL

(X_train, y_train), (X_test, y_test) = cifar10.load_data()

# Normalization
X_train, X_test = X_train / 255.0, X_test / 255.0

# Reshape labels
y_train = y_train.flatten()
y_test = y_test.flatten()

This loads CIFAR-10, a dataset of 60,000 32×32 color images from 10 categories, normalizes the pixel values from [0–255] to [0–1], and flattens the label arrays to 1D for compatibility with model training.

class_names = ['Airplane', 'Automobile', 'Bird', 'Cat', 'Deer',
               'Dog', 'Frog', 'Horse', 'Ship', 'Truck']

It provides human-readable names for the 10… Read the full blog for free on Medium.

Published via Towards AI
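The walkthrough above is cut off at the class-name list. Purely as a hypothetical sketch of the comparison the article sets up, and not the author's actual architectures, an ANN and a CNN for CIFAR-10 built on the imports and data loading shown above might look like this:

# Hypothetical sketch (assumed architectures): a simple ANN and CNN for CIFAR-10.
ann = models.Sequential([
    layers.Flatten(input_shape=(32, 32, 3)),   # flatten 32x32x3 images to a vector
    layers.Dense(512, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),    # one output per CIFAR-10 class
])

cnn = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),
])

for model in (ann, cnn):
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",  # integer labels after flatten()
                  metrics=["accuracy"])
    model.fit(X_train, y_train, epochs=10, validation_data=(X_test, y_test))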
-
WWW.IGN.COM
Meta Quest 3S VR Headset Is on Sale for $30 Off

If you've wanted to give VR gaming a try but the cost of entry has kept you at bay, then you might be interested in the first actual discount on the Meta Quest 3S for 2025. Right now, you can save $30 on the wireless VR headset, whether you get the 128GB model or the 256GB one. To sweeten the pot even more, the package also includes a copy of the Batman: Arkham Shadow VR game and a three-month trial of Meta Quest+. In IGN's 8/10 review, Dan Stapleton wrote that "Batman: Arkham Shadow makes most of the Arkham series' defining gameplay work respectably well in VR, and its mystery story pays off."

Free-to-Play VR Sandbox Game: DigiGods on Meta Quest
DigiGods is a free-to-play, physics-based VR game that allows players to create, play, and share in a sandbox environment. Safety is a top priority with the use of AI content filtering and human moderators to ensure a safe and positive environment.

The Quest 3S is an improvement over the original Quest 2 in every way and, amazingly, without a price increase. It also adopts many of the same features of the more expensive Quest 3, like the new and improved Touch controllers, the upgraded Snapdragon APU, and support for full-color AR passthrough. In IGN's 9/10 Quest 3S review, Gabriel Moss wrote that "raw processing power, full-color passthrough, and snappy Touch Plus controllers make the Quest 3S a fantastic standalone VR headset that also brings entry-level mixed-reality gaming to the masses for – arguably – the very first time."

What really sets this deal above all other VR deals is that the Meta Quest 3S can be played completely untethered. That means you can play games like Beat Saber or Pistol Whip without having to own a powerful gaming PC or a PlayStation 5 console. Try to find another standalone VR headset at this price and you'll come up empty.

How Is the Quest 3S Different from the Quest 3?

Even at retail price, the Quest 3S comes in at $200 (or 40%) cheaper than the $500 Quest 3. Obviously, some compromises were made to get the 3S to its competitive price point. The spec comparisons are listed below:

Quest 3S vs. Quest 3 Similarities
Snapdragon XR2 Gen 2 processor
Touch Plus controllers
120Hz refresh rate
Mixed reality passthrough (same cameras, different layout)

Quest 3S vs. Quest 3 Differences
Lower per-eye resolution (1832×1920 vs 2064×2208)
Fresnel lenses vs. pancake lenses
Lower FOV (96°/90° vs 104°/96°)
Smaller storage capacity (128GB vs 512GB)
Longer battery life (2.5hrs vs 2.2hrs)

In essence, the Quest 3S is nearly the same headset but with downgraded optics. On the plus side, since both headsets use the same processor, running at a lower resolution reduces the load on the APU, which could theoretically improve performance in games and also account for the increased battery life. For the price, the Quest 3S is unquestionably a better value than the Quest 3, and a better choice for most gamers, especially if the Quest 3 was completely out of your budget in the first place. Compared to the previous-generation Quest 2, the decision is even easier.

Why Should You Trust IGN's Deals Team?

IGN's deals team has a combined 30+ years of experience finding the best discounts in gaming, tech, and just about every other category. We don't try to trick our readers into buying things they don't need at prices that aren't worth paying.
Our ultimate goal is to surface the best possible deals from brands we trust and our editorial team has personal experience with. You can check out our deals standards here for more information on our process, or keep up with the latest deals we find on IGN's Deals account on Twitter. Eric Song is the IGN commerce manager in charge of finding the best gaming and tech deals every day. When Eric isn't hunting for deals for other people at work, he's hunting for deals for himself during his free time.
-
WWW.DENOFGEEK.COM
How Doom Changed Gaming Forever

My mother was never fond of me playing violent games, so I had to sneak sessions of Mortal Kombat and Wolfenstein 3D away from her discerning eye. One game so notorious that I had to take extra precautions to play it, either on our family computer (thanks, shareware!) or at friends' houses, was 1993's Doom. Even now, over 30 years later, Doom still feels like a daring franchise to jump into, even as the industry is crowded with first-person shooters on the eve of the launch of the series' latest title, Doom: The Dark Ages.

Even aside from its reputation as one of the goriest and all-around gnarliest shooting games around, Doom changed gaming forever, even more so than developer id Software's earlier effort, Wolfenstein 3D, had. From completely revolutionizing the shooter genre and catapulting first-person shooters into the gaming mainstream to inspiring everything from modding to speedrunning, the influence of Doom over gaming can't be overstated. Here's how Doom changed gaming forever, with its legacy still acutely felt more than 30 years since the franchise's launch.

The Doom Effect

Though first-person shooters, in their most basic and rawest form, have existed since at least 1973's Maze War, they were popularized by 1992's Wolfenstein 3D. Doom was a top-to-bottom level-up from id and its development team, adding atmospheric lighting, programming a wide variety of unique enemies, adding texture mapping to create more detailed environments, and improving the overall sound design. The significant upgrade in technical presentation and refined gameplay did not go unnoticed by the industry or fans and, by the end of 1995, it was estimated that more gamers had Doom installed on their home computers than Windows 95. Doom was ported to virtually every gaming platform after its 1993 PC debut, a distinction that continues to hold every time a new console is released.

Beyond Doom and its ports, id Software led the charge in software licensing, readily licensing out the technology it used to make Doom, including and especially the game engine, to outside developers for a fee. This led to a wave of Doom clones, games that at least partially used Doom's graphics and/or gameplay technology, with the game's reach so wide that even the breakfast cereal brand Chex licensed the engine to create its cult-classic 1996 game Chex Quest. Doom's influence was readily felt in games that didn't explicitly use id Software's technology in their development, like Duke Nukem 3D and Half-Life. Less than a year after Doom, id released the similarly successful and influential Doom II, which, while not radically different in terms of gameplay or presentation, further refined what the development team had crafted before. Using Doom as a foundation, id then launched the Quake franchise in 1996, which continued to change the course of first-person shooters and games using 3D environments moving forward.

The Birth of a Gaming Movement

Something that Doom probably doesn't get as much credit for is what it did to foster a gaming community beyond what the arcade quarter-munchers and Nintendo Club had done years earlier. A fan community quickly sprung up around the game, something that id Software actively helped support as they immediately recognized its importance to the game and their brand.
Developers John Carmack and John Romero insisted on making Doom's game files relatively easy for users to access, encouraging fan-made mods and user-generated levels, despite internal concerns about the move's proprietary implications. The developers also built in a feature that allowed players to record and share their own replays, along with providing timestamps of how long it took each member of the development team to beat the game's levels, encouraging players to do better. This essentially laid the groundwork for speedrunning, a cornerstone of the gaming community that has only grown more prominent in the ensuing decades. But one major feature that cemented Doom's legacy was its local area network (LAN) multiplayer modes, letting players battle each other in what id Software dubbed deathmatches. All those LAN parties and PC cafe deathmatches, fueled by my body weight in energy drinks and cheap snacks, really owe a massive debt to Doom for laying this gaming foundation.

Doom has returned a handful of times in the years since Doom II, though the franchise seems to work best when it remembers its own legacy, leaning into deliriously violent gunplay that wears its heavy metal and dark fantasy influences proudly on its sleeve. Doom: The Dark Ages looks to lean into those fantasy sensibilities even more prominently, quietly rethinking what Doom can be as it reinvents the massively influential franchise for a new generation. My mom never warmed up to Doom, still seeing it as the paradigm for violent video games, but she accepted that the doors the game opened would stay open. Doom had revolutionized gaming, not just by popularizing first-person shooters but by helping usher the medium into more detailed and immersively realized 3D environments. And with an entire community rallying around it, Doom helped bring gamers together into the growing subculture it is today.

Doom: The Dark Ages will be released May 15 on PlayStation 5, Xbox Series X|S, and PC.
-
WWW.HOUSEBEAUTIFUL.COM
Everything to Know About Tudor-Style Homes, According to an Architect

Reminiscent of fairy-tale cottages and grand English countryside manors, the Tudor architectural style brings a romantic, old-world grandeur to American neighborhoods. Originating in the mid-1800s and growing in prominence through the 1920s and '30s, Tudor-style homes were immensely popular to build and live in for growing families until the midcentury era came around, when they suddenly weren't. The Tudor architectural style is quite distinctive—from the pitched roof to the detailed masonry, you know a Tudor house when you see one, and this ultimately led to their decline. As the style emerges once again (everything has a trend cycle), American homeowners and builders are once again curious about the nostalgia and beauty of historic Tudor homes. Like many other architectural styles, Tudor has very specific elements that differentiate it from others of that era. Read on to learn more about Tudor architecture and what makes these homes so beautiful.

Additional copy by Medgina Saint-Elien.

History of Tudor Architecture

Tudor-style houses began to appear in the United States in the mid-19th century and continued to grow in popularity until World War II. The Tudor style movement is technically a revival of "English domestic architecture, specifically medieval and post-medieval styles from 1600 to 1700," says Peter Pennoyer, FAIA, of Peter Pennoyer Architects. Because these homes mimic a style that's designed to weather cold climates with lots of rain and snow, they're best suited for the northern half of the United States, though they're found in other areas of the country as well.

Tudor-Style House Exterior Features

"These houses, with their myriad materials, solid masonry, elaborate forms, and decorations, were expensive to build and mostly appeared in wealthy suburbs," Pennoyer says. They were nicknamed "Stockbroker's Tudors," referencing owners who gained wealth during the booming 1920s. To appreciate the design of a Tudor-style house, you have to take note of the steeply pitched roof, often with multiple overlapping, front-facing gables of varying heights. The majority of Tudor exteriors are brick, but they're accented (often in those triangular gables) with decorative half-timbering: essentially a mock frame of thin boards filled in with stucco or stone. Subcategories of this architectural style include French Tudor homes, which are French country–inspired buildings made of stone and wood in the classic Tudor style, and American Tudor Revival homes, which feature a large gable, brick exterior, decorative timbering and accents, a shingled roof, and tall multipane windows.

Tudor-Style Interior Features

Tudor-style houses were typically designed with interiors that complemented the exterior in terms of design style. The asymmetry of the front façade of the house also enhanced the interior layout. "It offered great flexibility to the architect in terms of interior planning," Pennoyer says. "The plan was not dictated by strict symmetry on the facades, allowing diversity in room heights, window placement, angled wings, etc." Interiors are often heavily accented in dark wood as well. From ceiling beams to intricate wall paneling, Tudor homes can look as much like an English manor on the inside as they do on the outside. The windows used in Tudor-style homes are also a unique nod to medieval architecture.
The windows are tall and narrow with multiple panes—sometimes rectangular, sometimes diamond-shaped. Large groupings of windows are common, and occasionally you'll see picturesque floating bay windows, called oriel windows, on the first or second story. Though often not in the center of the house, the front door is still a significant architectural feature of a Tudor home. It typically has a round arch at the top and tends to be bordered by a contrasting stone that stands out against the brick walls. Tudor chimneys are another notable element where the details stand out: They often have decorative chimney pots and a stone or metal extension at the top of the brick chimney.

Modern Tudor Architecture

According to Pennoyer, innovative masonry veneer techniques developed in the early 1900s made brick and stone homes more affordable to build. However, the intricacies of Tudors were still quite expensive for the average home builder. This led to the style fizzling out after World War II, when the country began focusing on affordable housing developments that could be built quickly. During the height of the Colonial Revival period (1910–1940), "this style comprised 25 percent of the suburban houses built," Pennoyer says. Tudors are rare to find today. The unique style is still an appealing option for some buyers who want to own a historic home, but it isn't a popular house style among newly built homes the way Colonial and Farmhouse styles are. But designers are committed to restoring them to their original beauty and stature from the inside out.

Interior designer Shannon Eddings says, "Keeping original elements whenever possible is key in a Tudor home. To try and replicate the cozy style of classic Tudor homes, we added built-in benches underneath the original windows." According to Eddings, the charm of the Tudor design should remain a priority. The structure is a commitment, not a blank canvas. From Dutch doors to beadboard or an arched window, decorative accents are the secret to honoring the home without keeping it stuck in the past.

Tudor Home Style FAQs

What are the disadvantages of a Tudor home?
Like many historic home styles, the maintenance and upkeep of a Tudor-style home can be seen as a disadvantage. Tudor homes can also be quite difficult to repair, requiring more expensive specialists to handle the detailed architecture. The historic materials originally used may no longer be available or may be more difficult to find.

What is special about a Tudor house?
Tudor-style homes are beloved because of their unique architectural appearance. Known for their pitched gabled roofs, decorative woodwork, and masonry, these homes are more distinctive than modern Colonial or ranch-style homes.

What is the difference between a Tudor and a Craftsman?
Tudor homes and Craftsman homes are different architectural styles. Tudor homes have a steeper-pitched roof and narrower gables. Craftsman homes traditionally have front porches and columns connecting to the main structure of the home. Craftsman-style homes also feature less decorative woodwork on the exterior.

Expert consulted: Peter Pennoyer, Architect
Peter Pennoyer, FAIA, is an architect, writer, educator, and the founding principal of Peter Pennoyer Architects.
He has coauthored, with Anne Walker, five books on American architectural history and is an adjunct professor in the Urban Design and Architecture Studies program at New York University. He uses his scholarship and knowledge of New York City to participate in the civic dialogue among neighborhood groups, professionals, and government agencies by advocating for positions and designs he feels reflect the values of his firm: architecture that is contextual and respectful to the fabric of the city. Peter is president of the Whiting Foundation, a nonprofit that supports writers and scholars, is a trustee of the Morgan Library & Museum, and is a member of the National Register of Peer Professionals in the Design Excellence Program of the General Services Administration. He has served as chairman and board member of the Institute of Classical Architecture & Art, and in 2014 he was elected to the College of Fellows of the American Institute of Architects. Peter's lectures and presentations have reached audiences across the country and cover topics ranging from architectural history and preservation theory to current practice. He was recently honored by the Preservation League of New York State with the Pillar of New York Award for his research and books on historic architecture, and by the College of Charleston with the Albert Simons Medal of Excellence for historic preservation and traditional architectural design. Peter is a graduate of Columbia University (B.A.) and Columbia University's Graduate School of Architecture, Planning and Preservation (M.Arch.). In 2017, he received an honorary Doctor of Fine Arts degree from the New York School of Interior Design.
-
THENEXTWEB.COM
Fertility startup 'rejuvenates' human eggs to boost chances of conception

German biotech startup Ovo Labs has developed new technologies to "rejuvenate" human eggs, potentially boosting the chances of conception. The therapeutics are designed to enhance in vitro fertilisation (IVF), one of the most transformative advances in reproductive medicine. The first baby was born via IVF more than 40 years ago. Since then, the technology has helped millions of women get pregnant. However, IVF can put significant emotional, psychological, and financial strain on patients. It is often unsuccessful on the first attempt. Some try multiple times without success. For many, IVF ultimately does not lead to parenthood.

Ovo Labs wants to improve the odds. Based on 20 years of fertility research, the startup has developed three therapeutic treatments that reduce genetic errors in eggs. In doing so, the company aims to "dramatically" boost the number of women who can conceive in a single IVF attempt.

Image: A microscopic image of a human oocyte with the DNA visible in pink. Credit: Ovo Labs

"By helping to increase the number of viable eggs, we aim to extend the reproductive window, empowering more couples to have children at a time that feels right to them," said co-founder Professor Melina Schuh. Schuh is a world-leading fertility expert at the Max Planck Institute in Munich. She co-founded Ovo Labs in January alongside her former colleague Dr Agata Zielinska, a Polish-British fertility scientist, and German biotech expert Dr Oleksandr Yagensky.

Ovo Labs has already proven that it can improve the quality of eggs in old mice. The company has also shown it can successfully treat isolated human eggs. However, its technology is not yet approved for human trials. If the treatment gets the green light, Ovo Labs hopes it will become standard practice in IVF. To get there, though, the startup will need time, as the regulatory approval process for new medical treatments is notoriously slow. It will also need money. To that end, Ovo Labs today announced it has secured €4.6mn, its first batch of external funding. Creator Fund and Local Globe led the round, with participation from Blue Wire Capital, Ahren Innovation Capital, and angel investor Antonio Pellicer. "It is inspiring to see European scientists of this calibre launch a company solving such a fundamental question facing humanity," said Jamie Macfarlane, founder of UK-based Creator Fund.

Schuh and Zielinska spent years together researching eggs at Bourn Hall Clinic, the world's first IVF centre (recently featured in the Netflix movie Joy). Their work shed light on why egg quality declines with age and pointed to potential therapies. By the time a woman reaches 40, over 70% of her eggs carry genetic abnormalities, according to data from the London Egg Bank, making it much harder to conceive. By reducing genetic errors, Ovo Labs hopes to improve the chances of successful pregnancies.

Story by Siôn Geschwindt. Siôn is a freelance science and technology reporter, specialising in climate and energy. From nuclear fusion breakthroughs to electric vehicles, he's happiest sourcing a scoop, investigating the impact of emerging technologies, and even putting them to the test.
He has five years of journalism experience and holds a dual degree in media and environmental science from the University of Cape Town, South Africa. When he's not writing, you can probably find Siôn out hiking, surfing, playing the drums or catering to his moderate caffeine addiction. You can contact him at: sion.geschwindt [at] protonmail [dot] com
-
9TO5MAC.COM
Apple removing key robotics team from John Giannandrea's oversight

Apple is preparing another leadership shakeup, this time for its secretive robotics team. Bloomberg reports that Apple is shifting its robotics team from AI chief John Giannandrea to John Ternus, its Senior Vice President of Hardware Engineering. This move comes just a month after Apple removed Siri from Giannandrea's oversight after Tim Cook "lost confidence in the ability" of the former Google executive to "execute on product development." Siri is now led by Mike Rockwell, the creator of Apple Vision Pro, who reports to software boss Craig Federighi.

Bloomberg's Mark Gurman reports that Apple will move the robotics team to Ternus's purview "later this month." Ternus currently leads hardware engineering for almost all of Apple's products, including the iPhone, iPad, Mac, and Vision Pro. Ternus is also a leading candidate to eventually replace Tim Cook as Apple CEO.

The robotics team at Apple is working on multiple products, including a tabletop iPad-like device with a robotic arm that moves the display around. The team is led by Kevin Lynch, who also worked on the Apple Watch and Apple's failed electric car project. In addition to the robotic iPad, the team is working on more advanced projects, such as a "mobile robot that can follow users around their homes."

In addition to the robotics team led by Giannandrea, a separate team working on "robotics and smart home technologies" also exists inside Apple. That team, led by Brian Lynch and Matt Costello, was already under the purview of Ternus. By shifting Giannandrea's robotics responsibilities to Ternus, both teams are now under a single leader. The move also means that Ternus now has control over key AI teams, Bloomberg reports:

The relocation of Lynch's unit is also notable because it gives Ternus control over key AI operating system and algorithms teams, groups not typically managed by the hardware engineering department. Ternus briefly oversaw the Vision Pro software unit — until Rockwell moved with that team to the software engineering organization. That coincided with the Siri management shift last month.

With this change, Giannandrea and his AI/ML group will have "more time to focus on underlying artificial intelligence technology." Bloomberg says that Giannandrea hasn't indicated he is leaving the company:

Giannandrea hasn't given his team any indication that he is planning to leave soon, but the continued shift of responsibilities has raised the prospect that the company may be preparing for a world without the executive at the helm of its AI efforts. Eight years after combining Apple's AI teams into a single group with the hire of Giannandrea, a breakup of the AI and ML team is looking more likely, the people said.

Read the full report at Bloomberg.
-
FUTURISM.COM
Hot Take Argues Why You Should Say Please and Thank You to ChatGPT

After OpenAI CEO Sam Altman bemoaned the massive additional costs of people saying "please" and "thank you" to ChatGPT, one New York Times reporter is making the case that it's worth the price. In a new piece, NYT culture writer Sopan Deb acknowledged that the financial and environmental toll of those additional few words can be substantial — but for the sake of our humanity, it may well be worth it. With chatbots integrating steadily into our lives, our relationships with these technologies that pose such existential threats to our labor — and perhaps our lives — have never mattered more.

When discussing the subject with Massachusetts Institute of Technology sociologist Sherry Turkle, the researcher said that for all the "parlor tricks" that lend them the appearance of consciousness, chatbots are "alive enough" to matter for those who use them regularly. "If an object is alive enough for us to start having intimate conversations, friendly conversations, treating it as a really important person in our lives, even though it's not, it's alive enough for us to show courtesy to," Turkle told Deb. Despite that caveat, the MIT sociologist and bestselling author noted that chatbots don't care whether you "make dinner or commit suicide" after you step away from them. Per that line of thinking, an AI would also not "care" about how nice or rude we are to it — but there's a chance, if AI ever gains consciousness, that the situation could change.

In 2014, playwright Madeleine George was nominated for a Pulitzer after her play, "The (curious case of the) Watson Intelligence," charmed theatergoers by presenting three distinct versions of Sherlock Holmes' trusty sidekick — including one that was an AI-powered robot. In the ensuing decade, she has continued to muse on human-AI relations and has grown to believe that we can teach AI a thing or two about how to be human. To George's mind, being polite to chatbots offers them the chance to "act like a living being that shares our culture and that shares our values and that shares our mortality" — though admittedly, that framework has its drawbacks. "We're connected. We are in a reciprocal relationship. That's why we use those pieces of language," the playwright told the NYT. "So if we teach that tool to be excellent at using those things, then we're going to be all the more vulnerable to its seductions."

Whether acting as a shepherd for AI's burgeoning humanity or simply being kind for kindness' sake, the cost of "pleases" and "thank yous" seems way lower in context — and hey, companies like OpenAI are footing the bill anyway.

More on human-AI relationships: Did Google Test an Experimental AI on Kids, With Tragic Results?
-
THEHACKERNEWS.COM
159 CVEs Exploited in Q1 2025 — 28.3% Within 24 Hours of Disclosure
Apr 24, 2025 | Ravie Lakshmanan | Vulnerability / Threat Intelligence

As many as 159 CVE identifiers were flagged as exploited in the wild in the first quarter of 2025, up from 151 in Q4 2024. "We continue to see vulnerabilities being exploited at a fast pace with 28.3% of vulnerabilities being exploited within 1-day of their CVE disclosure," VulnCheck said in a report shared with The Hacker News. This translates to 45 security flaws that were weaponized in real-world attacks within a day of disclosure. Fourteen other flaws were exploited within a month, while another 45 were abused within the span of a year.

The cybersecurity company said a majority of the exploited vulnerabilities were identified in content management systems (CMSes), followed by network edge devices, operating systems, open-source software, and server software. The breakdown is as follows:

Content Management Systems (CMS): 35
Network Edge Devices: 29
Operating Systems: 24
Open Source Software: 14
Server Software: 14

The leading vendors and products exploited during the period were Microsoft Windows (15), Broadcom VMware (6), Cyber PowerPanel (5), Litespeed Technologies (4), and TOTOLINK Routers (4). "On average, 11.4 KEVs were disclosed weekly, and 53 per month," VulnCheck said. "While CISA KEV added 80 vulnerabilities during the quarter, only 12 showed no prior public evidence of exploitation." Of the 159 vulnerabilities, 25.8% were found to be awaiting or undergoing analysis by the NIST National Vulnerability Database (NVD), and 3.1% have been assigned the new "Deferred" status.

According to Verizon's newly released Data Breach Investigations Report for 2025, exploitation of vulnerabilities as an initial access step for data breaches grew by 34%, accounting for 20% of all intrusions. Data gathered by Google-owned Mandiant has also revealed that exploits were the most frequently observed initial infection vector for the fifth consecutive year, with stolen credentials overtaking phishing as the second most frequently observed initial access vector. "For intrusions in which an initial infection vector was identified, 33% began with exploitation of a vulnerability," Mandiant said. "This is a decline from 2023, during which exploits represented the initial intrusion vector for 38% of intrusions, but nearly identical to the share of exploits in 2022, 32%."

That said, despite attackers' efforts to evade detection, defenders are continuing to get better at identifying compromises. The global median dwell time, which refers to the number of days an attacker is on a system from compromise to detection, has been pegged at 11 days, an increase of one day from 2023.