• Blanco, Estudio Jochamowitz Rivera, and Ghezzi Novak install giant eel at LIGA in Mexico City
    www.archpaper.com
A large serpentine creature slithers across the floor, contorting its body to fit a 98-square-foot gallery space in Mexico City. The Uncomfortable Giant is long and winding, its body piled on top of itself and crammed into the small exhibition spaces. The sculpture was designed and built by Peruvian architecture firms Blanco, Estudio Jochamowitz Rivera, and Ghezzi Novak. The Uncomfortable Giant, made of totora reeds harvested from Peru's Lake Titicaca, has adapted from the Peruvian lake to the pseudo-industrial exhibition rooms at the LIGA Space for Architecture in Mexico City. Blanco, Estudio Jochamowitz Rivera, and Ghezzi Novak are all architecture studios based in Lima, Peru. The Uncomfortable Giant was the winning submission in LIGA's third open call for exhibitions. Founded in 2013 by Pamela Remy, Blanco focuses on art direction, editorial design, and branding. Estudio Jochamowitz Rivera, run by architects Mariana Jochamowitz and Nicolás Rivera, is an architectural practice that examines domesticity. Ghezzi Novak was founded by Arturo Ghezzi Novak and Gustavo Ghezzi Novak with an emphasis on a context-centered design approach. The sculpture is made of totora reeds harvested from Peru's Lake Titicaca, often used as building material by the native Uros Chulluni community. (Patricio Ghezzi/Courtesy LIGA) The oversized animal was built in Peru by Percy Coila, a native member of the Uros Chulluni community of the floating islands on Lake Titicaca, to invoke the body of an enormous, ancient eel dwelling at the bottom of the lake. Located on the southern border of Peru, Lake Titicaca is inhabited by the Uros people on islands made of totora reeds. The people use the reeds as their foundation for living, using them to construct buildings, boats, and other objects. The Uncomfortable Giant, coming from a lake, now rests in Mexico City, a sinking city built on top of Lake Texcoco centuries ago.
The Uncomfortable Giant, harvested from a lake in Peru, now sits in Mexico City, a sinking city built on top of Lake Texcoco. (Arturo Arrieta/Courtesy LIGA) The creature's tail erupts from the exhibition space, curling and tapering up into an opening within the gallery space. (Arturo Arrieta/Courtesy LIGA) Bulbous eyes emerge from one end of the sculpture, staring at patrons as they enter the exhibition. At the other end, the creature's tail erupts from the exhibition space, curling and tapering to a pronounced point. At 262 feet in length, The Uncomfortable Giant reimagines the benches traditionally crafted from totora reeds by the Uros community, so it's only fitting that gallery-goers can mount and sit on the sculpture, harkening back to the material's more conventional use. The exhibit is open at LIGA with free admission until May 30, 2025.
• New visuals of Freedom Plaza from OJB Landscape Architecture share details of the megaproject's riverfront park
    www.archpaper.com
Freedom Plaza, the proposed megaproject for the long-underutilized site adjacent to the United Nations, seeks to bring housing, a museum dedicated to democracy, and hotels (and maybe even a casino) to a swath of land along the East River. In February 2024, Bjarke Ingels Group (BIG) and Soloviev Group, the landowner, shared visuals of the skyline-transforming mixed-use development that plans to set up between 38th and 41st Streets on 1st Avenue. New visuals and a video were shared this week of the 4.7 acres of park space planned for the project. They show how the riverfront site will be further activated for residential and recreational use. OJB Landscape Architecture has responded to community input to inform its vision for the landscape component of the development, envisioning ample lawn space, a riverside promenade, playground, and water garden in a pocket of Manhattan lacking access to generous green space. A promenade will abut the East River and the planned Museum of Freedom and Democracy, designed by BIG. (OJB Landscape Architecture/Soloviev Group) "The design of Freedom Plaza draws on the natural beauty and cultural energy of New York City, creating a space where nature, art, and urban life coexist seamlessly," said Jim Burnett, president of OJB Landscape Architecture. "From the East River Overlook to the intimate gardens, every detail has been carefully crafted to inspire and engage visitors." Amid the residential towers and the BIG-designed Museum of Freedom and Democracy, a spiraling structure modeled after the concept of a Greek agora, will be a winding 1.2-mile network of pathways dotted with shrubbery, bench seating and picnic tables, and kiosks selling food and beverages.
From the street, visitors can opt to take the shallow steps up into the core of the development or instead meander through the pathways. Anchor spaces of the landscape scheme include the 700-foot-long East River Promenade, a wide paved area alongside the river, and a playground for the youngest visitors furnished with climbing structures that seemingly mimic the trees planned for the site. According to a press release, the eight tree species were selected to promote biodiversity and stormwater absorption, and to offer seasonal interest year-round. A 6,000-square-foot water garden appeals to those seeking nature; water drapes over the rocks in the river-like design. Looking ahead, Soloviev Group will finance the upkeep, security, and programming of the park. A new committee will help steer the park's operation, ensuring its public use and access.
  • I switched to Mac Studio M4 for two weeks - a Windows PC user's buying advice
    www.zdnet.com
    The new Mac Studio delivers impressive performance courtesy of its M4 Max chip. This hardware and its space-saving design make the desktop a must-have for professionals and creatives.
  • This mini PC is a powerful alternative to the Mac Mini - and it's on sale
    www.zdnet.com
    The Minisforum AI370 EliteMini combines high-end hardware with support for up to 4TB of storage in a sleek, compact design, making it ideal for even the smallest desk setups.
• Pure Storage's Next Wave Of Growth: AI And Hyperscalers
    www.forbes.com
Pure Storage addresses the unique challenges of storage for AI with its new FlashBlade//EXA platform, a high-performance, disaggregated all-flash storage solution purpose-built for the demands of artificial intelligence and high-performance computing. The new solution addresses the shortcomings of traditional systems, which often suffer from critical bottlenecks in data ingestion, training, and inference. But Pure's story doesn't stop with its new storage solution. With a landmark deal to power Meta's hyperscale storage infrastructure, Pure is now at the center of a broader industry shift, one where flash becomes not just a performance tier but the foundation of next-generation data centers. Taken together, Pure's hyperscale momentum and its new disaggregated storage architecture point to the company's future. Pure is thinking about much more than traditional enterprise storage.
Architecting Storage for AI
As AI and high-performance computing go mainstream, traditional approaches to storage are pushed to their limits. Modern AI workloads demand lightning-fast, high-concurrency access to massive volumes of structured and unstructured data, something legacy storage systems were never designed to handle. The result is bottlenecks that slow down data ingestion, choke training pipelines, and undercut the performance of expensive GPUs. Enter Pure Storage's FlashBlade//EXA, a purpose-built, high-performance storage platform designed to meet the demands of AI and HPC at scale. With a disaggregated architecture, intelligent flash management, and industry-leading throughput, Pure's latest offering is a big step forward in enabling real-time AI infrastructure. By combining Pure's unique DirectFlash technology with its proven Purity operating system, FlashBlade//EXA eliminates inefficiencies in traditional SSD-based storage, reducing latency and maximizing throughput. FlashBlade//EXA addresses the unique challenges of AI workloads with its disaggregated storage architecture.
It enables independent scaling of data and metadata, ensuring optimal performance as AI models grow in complexity. (Pure Storage FlashBlade//EXA. Image: Pure Storage) Pure is one of the first major enterprise storage vendors to deliver a disaggregated architecture designed for AI. While competitors like Dell Technologies and NetApp have outlined future products that follow a similar architecture, Pure is the first traditional enterprise storage vendor to bring such a product to market. There's also a play beyond the traditional enterprise. Pure surprised many industry watchers with its decision to allow the new FlashBlade//EXA to utilize third-party storage arrays. This choice caters directly to the needs of a cloud-first AI world, simplifies deployment in hybrid environments, and signals a shift toward more open, flexible storage ecosystems.
Pure's Big Hyperscale Play
Earlier this year, Meta selected Pure Storage as the foundation for its next-generation storage infrastructure. This marks the first time a hyperscaler has standardized on Pure for all tiers of online storage, from low-cost archival to high-performance AI and ML workloads. Meta's rationale is straightforward: flash storage delivers better power efficiency, higher density, and significantly improved performance than traditional HDDs. While flash still commands a higher upfront cost, Pure's DirectFlash architecture helps close that gap with superior efficiency and scalability. In a recent technical blog, Meta detailed how it wants to build its architecture using flash storage based on QLC NAND technology.
QLC offers greater capacity and better economics than the TLC NAND typically found in performance-oriented storage solutions, making it ideal for read-intensive, large-scale workloads in hyperscale environments. While HDDs still dominate cold and warm tiers, Meta sees QLC flash as the future, and its partnership with Pure reflects a clear flash-first roadmap for scaling to exabyte-level deployments. The real advantage lies in Pure's integrated approach: combining intelligent software with custom-engineered flash hardware. This combination is vital to delivering the performance, reliability, and efficiency required at hyperscale.
Analyst's Take
FlashBlade//EXA enters a competitive market at a pivotal moment. Enterprises are actively seeking alternatives to legacy systems that struggle with high parallelism and metadata-intensive workloads. Pure's new offering stands out with its AI-native architecture, disaggregated design, and support for third-party arrays, a rare combination that balances performance with deployment flexibility. Pure's new FlashBlade//EXA architecture mirrors approaches from companies like WEKA and VAST Data, each of which has found early success in the AI training market. However, a key difference is that those companies don't focus on flash storage or have the same control over the underlying platform as Pure. The broader implication, however, goes beyond product innovation.
FlashBlade//EXA reflects Pure's evolving strategy, aligning more closely with the hyperscale market, where software-defined infrastructure and modularity are key. The Meta partnership underscores this shift and positions Pure as a serious contender for the future of AI infrastructure, delivering software-defined, flash-native platforms that unify performance, efficiency, and scale. As enterprises and hyperscalers prepare for an AI-driven future, Pure Storage is positioning itself as more than a storage vendor: a key enabler of next-generation digital infrastructure. It's a challenging journey, one that Pure is well-equipped to make.
Disclosure: Steve McDowell is an industry analyst, and NAND Research is an industry analyst firm, that engages in, or has engaged in, research, analysis, and advisory services with many technology companies; the author has provided paid services to every company named in this article in the past and may again in the future. Mr. McDowell does not hold any equity positions with any company mentioned.
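As a technical aside, the independent scaling of data and metadata described above can be illustrated with a toy model. This is a conceptual sketch only; all class and method names are hypothetical illustrations, not Pure's actual software:

```python
# Toy illustration of a disaggregated storage architecture:
# metadata nodes and data nodes scale independently, so a
# metadata-heavy AI workload can add metadata capacity without
# also paying for unneeded data capacity (and vice versa).

class MetadataNode:
    def __init__(self):
        self.entries = {}          # object name -> data node id

class DataNode:
    def __init__(self, capacity_tb):
        self.capacity_tb = capacity_tb
        self.objects = {}          # object name -> payload

class DisaggregatedCluster:
    def __init__(self):
        self.metadata_nodes = []
        self.data_nodes = []

    def scale_metadata(self, n):
        # Grow only the metadata tier (e.g. for many small files).
        self.metadata_nodes += [MetadataNode() for _ in range(n)]

    def scale_data(self, n, capacity_tb=100):
        # Grow only the data tier (e.g. for raw capacity).
        self.data_nodes += [DataNode(capacity_tb) for _ in range(n)]

    def put(self, name, payload):
        # Shard metadata and data independently across their tiers.
        mnode = self.metadata_nodes[hash(name) % len(self.metadata_nodes)]
        dnode = self.data_nodes[hash(name) % len(self.data_nodes)]
        dnode.objects[name] = payload
        mnode.entries[name] = id(dnode)

    def get(self, name):
        dnode = self.data_nodes[hash(name) % len(self.data_nodes)]
        return dnode.objects[name]

cluster = DisaggregatedCluster()
cluster.scale_metadata(4)   # metadata tier sized for small-file AI workloads
cluster.scale_data(2)       # data tier sized separately for capacity
cluster.put("model.ckpt", b"weights")
print(cluster.get("model.ckpt"))  # b'weights'
```

The point of the sketch is that the two tiers grow on separate axes, which is what lets a system of this style keep metadata lookups fast as model and file counts explode.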
• Google Wants AI To Process Police Data Requests. It's Not Going Well.
    www.forbes.com
Google is hoping AI can deal with a glut of law enforcement requests for user data, but AI isn't yet up to the job. (Newscast/Universal Images Group via Getty Images) Google's efforts to use artificial intelligence to more quickly process data requests from government and law enforcement agencies aren't going as well as the company had hoped. Endlessly swamped by a deluge of demands from police across the world (there were 236,000 in the first six months of 2024 alone), the company has increasingly been looking to AI to wrangle the mounting backlog of court orders and data requests faced by its long-suffering Legal Investigations Support (LIS) team. Current and former members of that team told Forbes that Google engineers had been working on tools that could ingest court orders, subpoenas, and other official requests before going to find the relevant data for an LIS member to review. In theory, they would significantly speed up the manual work of an LIS staffer. One source familiar with the matter told Forbes the backlog for requests is in the thousands. But those tools have so far failed to do what is needed of them, sources said. Though the AI was trained on the work done by the LIS team, it has not yet been able to replicate it. And now 10 engineers charged with developing the AI have been sacked, and the fate of the project has been thrown into doubt, staff told Forbes. One staffer said the AI hasn't yet been deployed and the layoffs were going to further delay it. Another added, "Calling any of our current tooling AI feels like a stretch."
As Forbes had previously reported, a trial of the technology had actually created more work, because any requests processed by AI had to be double-checked and often redone by humans before being released to law enforcement. Google declined comment on the departures and would not answer questions about AI as a solution for its law enforcement request management problem. Google spokesperson Alex Krasov would say only that the company continued to make changes to "operate more efficiently" without changing "the way we receive or assess law enforcement requests." Cooper Quintin, senior public interest technologist at the Electronic Frontier Foundation, told Forbes he thinks it's a bad idea to use AI for any kind of legal process because of models' tendency to hallucinate, making up information. He pointed to a bevy of recent cases where judges warned lawyers about using AI to write up their filings after the software fabricated legal citations. "Clearly the solution is to hire more people to deal with these requests, but Google is deploying AI slop instead," Quintin said. "If the AI isn't capable of properly parsing lawful police requests, how can we trust it to detect fraudulent ones?" Quintin asked. There have been numerous cases in which criminals have pretended to work for a police department and forged court orders and emergency requests to pilfer personal information that could be used to locate, stalk, or harass individuals. In November last year, the FBI warned about an uptick in hackers using compromised government email accounts to make such fraudulent requests. "Google already has a problem with responding to fake orders and reports," Quintin added. "I think an AI system like this will exacerbate that issue."
  • Nvidia explains its ambitious shift from graphics leader to AI infrastructure provider
    www.techspot.com
    The big picture: A big challenge in analyzing a rapidly growing company like Nvidia is making sense of all the different businesses it participates in, the numerous products it announces, and the overall strategy it's pursuing. Following the keynote speech by CEO Jensen Huang at the company's annual GTC Conference this year, the task was particularly daunting. As usual, Huang covered an enormous range of topics over a lengthy presentation and, frankly, left more than a few people scratching their heads. However, during an enlightening Q&A session with industry analysts a few days later, Huang shared several insights that suddenly made all the various product and partnership announcements he covered, as well as the thinking behind them, crystal clear.In essence, he said that Nvidia is now an AI infrastructure provider, building a platform of hardware and software that large cloud computing providers, tech vendors, and enterprise IT departments can use to develop AI-powered applications.Needless to say, that's an extraordinarily far cry from its role as a provider of graphics chips for PC gaming, or even from its efforts to help drive the creation of machine learning algorithms. Yet, it unifies several seemingly disparate announcements from recent events and provides a clear indication of where the company is heading.Nvidia is moving beyond its origins and its reputation as a semiconductor design house into the critical role of an infrastructure enabler for the future world of AI-powered capabilities or, as Huang described it, an "intelligence manufacturer."In his GTC keynote, Huang discussed Nvidia's efforts to enable efficient generation of tokens for modern foundation models, linking these tokens to intelligence that organizations will leverage for future revenue generation. He described these initiatives as building an AI factory, relevant to an extensive range of industries. 
Although ambitious, the signs of an emerging information-driven economy and the efficiencies AI brings to traditional manufacturing are becoming increasingly evident. From businesses built entirely around AI services (like ChatGPT) to robotic manufacturing and distribution of traditional goods, we are undoubtedly moving into a new economic era. In this context, Huang extensively outlined how Nvidia's latest offerings facilitate faster and more efficient token creation. He initially addressed AI inference, commonly considered simpler than the AI training processes that initially brought Nvidia into prominence. However, Huang argued that inference, particularly when used with new chain-of-thought reasoning models such as DeepSeek R1 and OpenAI's o1, will require approximately 100 times more computing resources than current one-shot inference methods. Consequently, there's little concern that more efficient language models will reduce the demand for computing infrastructure. Indeed, we remain in the early stages of AI factory infrastructure development. One of Huang's most important yet least understood announcements was a new software tool called Nvidia Dynamo, designed to enhance the inference process for advanced models. Dynamo, an upgraded version of Nvidia's Triton Inference Server software, dynamically allocates GPU resources for various inference stages, such as prefill and decode, each with distinct computing requirements. It also creates dynamic information caches, managing data efficiently across different memory types. Operating similarly to Docker's orchestration of containers in cloud computing, Dynamo intelligently manages resources and data necessary for token generation in AI factory environments. Nvidia has dubbed Dynamo the "OS of AI factories."
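The prefill/decode split that such a scheduler manages can be sketched in a few lines. This is a hypothetical illustration of the concept only; the pool sizes and names below are invented for the example and are not Dynamo's actual interface:

```python
# Toy sketch of disaggregated inference scheduling: prefill is
# compute-bound (processing the whole prompt at once), while decode is
# memory-bandwidth-bound (generating one token at a time), so a
# scheduler can route each stage to a GPU pool sized for its profile.

from dataclasses import dataclass, field

@dataclass
class GPUPool:
    name: str
    gpus: int
    queue: list = field(default_factory=list)

    def submit(self, work):
        self.queue.append(work)

def schedule(request_id, prompt_tokens, prefill_pool, decode_pool):
    # Every request first does a prefill pass over its prompt...
    prefill_pool.submit((request_id, "prefill", prompt_tokens))
    # ...then moves to the decode pool for token-by-token generation.
    decode_pool.submit((request_id, "decode"))

prefill = GPUPool("prefill", gpus=8)   # sized for raw compute throughput
decode = GPUPool("decode", gpus=24)    # sized for many concurrent sessions

for i, prompt_len in enumerate([512, 2048, 128]):
    schedule(i, prompt_len, prefill, decode)

print(len(prefill.queue), len(decode.queue))  # 3 3
```

Separating the two stages lets each pool be provisioned and scaled on its own bottleneck, which is the core idea behind the throughput gains claimed for this style of scheduler.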
Practically speaking, Dynamo enables organizations to handle up to 30 times more inference requests with the same hardware resources. Of course, it wouldn't be GTC if Nvidia didn't also have chip and hardware announcements, and there were plenty this time around. Huang presented a roadmap for future GPUs, including an update to the current Blackwell series called Blackwell Ultra (GB300 series), offering enhanced onboard HBM memory for improved performance. He also unveiled the new Vera Rubin architecture, featuring a new Arm-based CPU called Vera and a next-generation GPU named Rubin, each incorporating significantly more cores and advanced capabilities. Huang even hinted at the generation beyond that, named after physicist Richard Feynman, projecting Nvidia's roadmap into 2028 and beyond. During the subsequent Q&A session, Huang explained that revealing future products well in advance is crucial for ecosystem partners, enabling them to prepare adequately for upcoming technological shifts. Huang also emphasized several partnerships announced at this year's GTC. The significant presence of other tech vendors demonstrated their eagerness to participate in this growing ecosystem. On the compute side, Huang explained that fully maximizing AI infrastructure requires advancements in all traditional computing stack areas, including networking and storage. To that end, Nvidia unveiled new silicon photonics technology for optical networking between GPU-accelerated server racks and discussed a partnership with Cisco.
The partnership puts Cisco silicon in routers and switches designed for integrating GPU-accelerated AI factories into enterprise environments, along with a shared software management layer. For storage, Nvidia collaborated with leading hardware providers and data platform companies, ensuring their solutions could leverage GPU acceleration, thus expanding Nvidia's market influence. And finally, building on the diversification strategy, Huang introduced more work that the company is doing for autonomous vehicles (notably a deal with GM) and robotics, both of which he described as part of the next big stage in AI development: physical AI. Nvidia has been providing components to automakers for many years now and, similarly, has had robotics platforms for several years as well. What's different now, however, is that they're being tied back to AI infrastructure that can be used to better train the models that will be deployed into those devices, as well as providing the real-time inferencing data that's needed to operate them in the real world. While this tie back to infrastructure is arguably a relatively modest advance, in the bigger context of the company's overall AI infrastructure strategy, it does make more sense and helps tie together many of the company's initiatives into a cohesive whole. Making sense of all the various elements that Huang and Nvidia unveiled at this year's GTC isn't simple, particularly because of the firehose-like nature of all the different announcements and the much broader reach of the company's ambitions.
Once the pieces do come together, however, Nvidia's strategy becomes clear: the company is taking on a much larger role than ever before and is well-positioned to achieve its ambitious objectives. At the end of the day, Nvidia knows that being an infrastructure and ecosystem provider means that it can benefit both directly and indirectly as the overall tide of AI computing rises, even as its direct competition is bound to increase. It's a clever strategy and one that could lead to even greater growth in the future. Bob O'Donnell is the founder and chief analyst of TECHnalysis Research, LLC, a technology consulting firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter @bobodtech.
• Calibre
    www.techspot.com
Calibre is an e-book library manager. It can view, convert, and catalog e-books in most of the major e-book formats. It can also talk to many e-book reader devices. It can go out to the Internet and fetch metadata for your books. It has a cornucopia of features divided into the following main categories:
- Library management
- E-book conversion
- Syncing to e-book reader devices
- Downloading news from the web and converting it into e-book form
- Comprehensive e-book viewer
- Content server for online access to your book collection

Is Calibre an e-book reader? No, Calibre is an e-book manager in which you can organize existing e-books into virtual libraries, displaying, editing, creating, and converting e-books, as well as syncing e-books with a variety of e-readers.

Can I write an e-book with Calibre? Yes. Calibre can turn your personal documents into e-books or create them from scratch. It has automatic style helpers and scripts generating the book's structure.

Which devices does Calibre support? Calibre is compatible with almost any e-reader, phone, or tablet, as well as Windows, Mac, and Linux devices. You can transfer your e-books from one device to another in seconds. Calibre will send the best file format for your device, converting it if needed, automatically.

What formats does Calibre support conversion to/from? Calibre supports the conversion of many input formats to many output formats:
- Input formats: AZW, AZW3, AZW4, CBZ, CBR, CB7, CBC, CHM, DJVU, DOCX, EPUB, FB2, FBZ, HTML, HTMLZ, LIT, LRF, MOBI, ODT, PDF, PRC, PDB, PML, RB, RTF, SNB, TCR, TXT, TXTZ
- Output formats: AZW3, EPUB, DOCX, FB2, HTMLZ, OEB, LIT, LRF, MOBI, PDB, PMLZ, RB, PDF, RTF, SNB, TCR, TXT, TXTZ, ZIP

Can Calibre read RSS feeds? Yes, Calibre can deliver news to your device from hundreds of news sources or any RSS feed.

Does Calibre offer cloud storage for my library? Calibre doesn't offer cloud storage, but it does integrate with most major cloud providers, including Google Drive, Dropbox, and OneDrive.
This way, you can set up your e-book library in the cloud and access the content from your phone or tablet.

What's New

New features:
- Much improved Kobo support: Calibre can now natively edit, view, and convert KEPUB format files used by Kobo devices. It also automatically converts EPUB to KEPUB when sending books to Kobo devices (can be configured by right-clicking the Kobo icon in calibre).
- Connect to folder: Allow connecting a specific device. Calibre can now connect to a folder and treat it as though it is a USBMS-based device. This is particularly useful on Chromebooks, where USB devices appear as folders rather than actual devices.
- When completing names for fields that contain hierarchical data in prefix mode, match prefixes after every period. Closes tickets: 2099780
- ToC editor: Allow moving of multiple selected items in the Table of Contents
- macOS: The calibre application icons in the dock are now displayed in a white frame to follow Apple's current recommended icon style
- Kobo driver: Add support for new firmware on Tolino devices
- Book details: Add option to suppress author search links

Bug fixes:
- Fix a regression that broke tabbing to edit cells in the book list when some columns have been hidden or re-ordered
- Catalog generation: Allow using templates that access the database for notes
- Fix a bug when renaming authors to a name with commas in it
- Full text search: Also index text in ZIP and RAR archives, as these can be viewed by the calibre viewer. Closes tickets: 2100891
- E-book viewer: Fix Table of Contents current-entry tracking not working for some books. Closes tickets: 2099678
- When reading metadata from HTML, also recognize name="subject" meta tags as calibre tags
- E-book viewer: Fix viewer not closing on the interrupt signal. Closes tickets: 2099777
- Edit book: Download external resources: Fix incorrect filename if the server returns a generic Content-Type header. Closes tickets: 2099754
- Metadata download: Publisher/series transform rules: Fix values with commas in them not working. Closes tickets: 2098620
- Version 8.0.1 fixes a failure to start on systems where the user had previously installed the KoboTouchExtended plugin and disabled the built-in KoboTouch driver

Improved news sources: Linux Weekly News, Spectator, Economist, Granta, Hindu, 1843, Barrons, Frontline, Zaobao, Strange Horizons
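The conversion matrix in the FAQ above boils down to a membership check on the two format lists. A minimal sketch, using the lists exactly as given; the function name is my own, not part of Calibre:

```python
# Supported conversion formats as listed in Calibre's FAQ above.
INPUT_FORMATS = {
    "AZW", "AZW3", "AZW4", "CBZ", "CBR", "CB7", "CBC", "CHM", "DJVU",
    "DOCX", "EPUB", "FB2", "FBZ", "HTML", "HTMLZ", "LIT", "LRF", "MOBI",
    "ODT", "PDF", "PRC", "PDB", "PML", "RB", "RTF", "SNB", "TCR", "TXT", "TXTZ",
}
OUTPUT_FORMATS = {
    "AZW3", "EPUB", "DOCX", "FB2", "HTMLZ", "OEB", "LIT", "LRF", "MOBI",
    "PDB", "PMLZ", "RB", "PDF", "RTF", "SNB", "TCR", "TXT", "TXTZ", "ZIP",
}

def can_convert(src: str, dst: str) -> bool:
    """True if Calibre's FAQ lists src as an input and dst as an output format."""
    return src.upper() in INPUT_FORMATS and dst.upper() in OUTPUT_FORMATS

print(can_convert("pdf", "epub"))   # True
print(can_convert("epub", "djvu"))  # False: DJVU appears only as an input format
```

Note the lists are asymmetric: some formats (DJVU, CHM, AZW) can only be read, while others (PMLZ, OEB, ZIP) can only be produced.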
• We loved the Sonos Beam Gen 2, and today it's on sale for $400
    www.digitaltrends.com
It's Sonos sales week! (That's the unofficial name we're sticking with.) Sonos wireless speakers, soundbars, subwoofers, and a few other devices are discounted this week, including the Sonos Beam (Gen 2). For as long as this sale lasts, you'll be able to purchase the latest version of the Beam for only $400. That's a $100 discount off the soundbar's MSRP. When we tested the Beam (Gen 2) a couple of years back, resident AV expert Simon Cohen said, "Dolby Atmos adds a splash of 3D fun to an already excellent soundbar." The Beam (Gen 2) is classed as a 5.0 soundbar, which means the bar itself contains five speakers and no dedicated subwoofer. Thanks to two side-firing drivers and decent bass performance, the Beam is an excellent TV speaker replacement and effective surround sound emulator. And thanks to some clever three-dimensional engineering and codec support, the bar can even virtualize a Dolby Atmos setup (unfortunately, DTS:X isn't supported) when connected to your TV via HDMI eARC. The other big thrill of owning any Sonos product is being able to use the Sonos S2 app to stream music from go-to platforms like Spotify, Apple Music, and Tidal. You'll also be able to group and un-group Sonos components, calibrate the Beam's audio via Trueplay (iOS only), download system updates, and more. You'll even be able to use AirPlay 2 to instantly cast tunes from your iPhone to the Beam. Here's hoping this Sonos sale lasts a while longer, though we're expecting it to wrap up by week's end or sooner. Save $100 when you purchase the Sonos Beam (Gen 2) right now, and be sure to take a look at our roundups of the best Sonos deals, best soundbar deals, and best Bluetooth speaker deals for additional markdowns on wireless audio devices!
  • Google Messages could soon let you watch YouTube right in the chat
    www.digitaltrends.com
Google looks like it's getting ready to bring back a fan-favorite feature in its Messages app: the YouTube miniplayer. After quietly pulling the plug on it last year, the company seems to be rethinking things, aiming for a smoother way to share and watch videos right inside your chats. The YouTube miniplayer let you play videos without ever leaving Messages, without any app-switching or interruptions. First introduced in 2022, it vanished in 2024 with zero fanfare. Instead of watching videos in-line, chatters were bumped out to the YouTube app. The reaction to its removal last year was a mixed bag. Some users didn't mind heading straight to YouTube, but plenty missed the convenience of staying in the chat. Now, fresh clues from recent app teardowns by researcher AssembleDebug at Android Authority point to the feature making a potential comeback. Code spotted in the latest beta builds hints that Google could be working on a new and improved picture-in-picture player for YouTube links inside Messages. Specifically, Messages beta version 20250319 reveals strings of code that refer to YouTube in particular. This lines up with YouTube's own miniplayer redesign from late 2024, which added better resizing, repositioning, and multitasking. AssembleDebug was able to bring back the picture-in-picture player in Messages using an activity toggle, but videos did not play when tested. This does make it seem more likely, however, that the intention is to restore functionality. Google's move to bring the player back shows they're listening to user feedback. There's no official word on when the feature could roll out for everyone, and it's still unclear if you'll be able to turn it off if you'd rather open videos externally. For millions of Android users who rely on Messages to juggle conversations, this is one of those small updates that could make things feel a bit more effortless, so it's something to keep an eye on.
With the recent switch to RCS messaging, it could be one of many changes to make the app sweeter, like the recent update that lets you unsend messages.