WWW.FASTCOMPANY.COM
OpenAI shouldn't accept Elon Musk's $97 billion bid to buy it

Let's say you own one of the most valuable homes in a lush, gated community that has been earmarked as a future point of growth for decades to come. One day, a letter appears in your mailbox, offering to buy your property for between a third and two-thirds of its value on the open market.

On the face of it, you should turn it down. But the person offering to buy it owns every house in the estate and runs the HOA. They're also friends with the police chief and the fire department. So you have to think carefully.

That's the situation OpenAI CEO Sam Altman finds himself in today as an Elon Musk-led group launches an audacious bid to buy the nonprofit arm of OpenAI, the hottest ticket in tech, for $97.4 billion. The bid, first reported by the Wall Street Journal, is undoubtedly a cheeky one. OpenAI was valued at $157 billion late last year, when it last went to the market to seek investment, and just this week SoftBank, the Japanese investment company, valued it at $260 billion.

That makes Musk's bid to take over the company a cut-price one, worth significantly less than the going market rate. The idea that OpenAI would accept the offer for the nonprofit arm seems preposterous, given that the company has spent the past year or more in an on-again, off-again court case against Musk over a dispute dating back a decade to his involvement in setting up the AI company as a nonprofit.

Yet we are in an era where Elon Musk has emerged as a right-hand man for Donald Trump. Things are no longer normal in politics or business, and Trump sees OpenAI as a strategically important business for the United States. (For proof, just look at his recently announced AI initiative, the Stargate Project.)

"It's time for OpenAI to return to the open-source, safety-focused force for good it once was," Musk said in a statement announcing the bid, issued through his attorney. "We will make sure that happens."

We'll soon find out whether that's the bluster of a businessman who has long made audacious bets (many of which have paid off), or the commentary of someone whose quasi-governmental position allows him to exert power. For now, at least, Sam Altman appears to be treating it like the joke it is: "no thank you but we will buy twitter for $9.74 billion if you want," he quickly tweeted on February 10, 2025.
-
WWW.CREATIVEBLOQ.COM
From Apple to Airbnb, retro icon design is making a comeback

Skeuomorphism is back with a vengeance.
-
GAMINGBOLT.COM
Resident Evil 5 Rated for Xbox Series X/S by ESRB

Many are wondering how much longer it'll be before Capcom unveils the next mainline Resident Evil release, but before that happens, it seems the company is planning to bring back some of the series' older instalments.

Specifically, a native re-release of Resident Evil 5 appears to be on the cards. The 2009 action horror title has been rated for Xbox Series X/S by the ESRB. Whether a PS5 version is also in the works remains to be seen, but at the very least, the game looks set to come to Microsoft's current-gen platform.

Resident Evil 5 is, of course, already playable on both Xbox Series X/S and PS5, but only through backward compatibility of its Xbox One and PS4 versions respectively.

Interestingly, recent weeks have also seen ESRB ratings for Xbox Series X/S versions of Resident Evil 6 and Resident Evil Origins Collection. The latter, for those unaware, includes HD remasters of the original Resident Evil remake and Resident Evil Zero.

Presumably, Capcom will have plenty of re-releases to announce at some point in the future, though at this point that's little more than speculation. Stay tuned for more details in the coming weeks.
-
WWW.MARKTECHPOST.COM
LLMDet: How Large Language Models Enhance Open-Vocabulary Object Detection

Open-vocabulary object detection (OVD) aims to detect arbitrary objects specified by user-provided text labels. Although recent progress has improved zero-shot detection ability, current techniques face three important challenges. They depend heavily on expensive, large-scale region-level annotations, which are hard to scale. Their captions are typically short and not contextually rich, making them inadequate for describing relationships between objects. And they lack strong generalization to new object categories, mainly aligning individual object features with textual labels instead of using holistic scene understanding. Overcoming these limitations is essential to pushing the field further and developing more effective and versatile vision-language models.

Previous methods have tried to improve OVD performance through vision-language pretraining. Models such as GLIP, GLIPv2, and DetCLIPv3 combine contrastive learning and dense captioning to promote object-text alignment. However, these techniques still have important issues. Region-based captions describe a single object without considering the entire scene, which limits contextual understanding. Training requires enormous labeled datasets, so scalability remains a concern. And without a way to capture comprehensive image-level semantics, these models cannot detect new objects efficiently.

Researchers from Sun Yat-sen University, Alibaba Group, Peng Cheng Laboratory, Guangdong Province Key Laboratory of Information Security Technology, and Pazhou Laboratory propose LLMDet, a novel open-vocabulary detector trained under the supervision of a large language model. The framework introduces a new dataset, GroundingCap-1M, which consists of 1.12 million images, each annotated with a detailed image-level caption and short region-level descriptions. Combining detailed and concise textual information strengthens vision-language alignment, providing richer supervision for object detection. To improve learning efficiency, the training strategy employs dual supervision, combining a grounding loss that aligns text labels with detected objects and a caption generation loss that encourages comprehensive image descriptions alongside object-level captions. A large language model is incorporated to generate long captions describing entire scenes and short phrases for individual objects, improving detection accuracy, generalization, and rare-class recognition. The approach also contributes to multi-modal learning by reinforcing the interaction between object detection and large-scale vision-language models.

The training pipeline consists of two primary stages. First, a projector is optimized to align the object detector's visual features with the feature space of the large language model. In the second stage, the detector undergoes joint fine-tuning with the language model using a combination of grounding and captioning losses. The training data is compiled from COCO, V3Det, GoldG, and LCS, ensuring that each image is annotated with both short region-level descriptions and an extensive long caption. The architecture is built on a Swin Transformer backbone, using MM-GDINO as the object detector while integrating captioning capabilities through the large language model. A toy sketch of the combined training objective is shown below.
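The snippet below is only an illustrative approximation of this dual-supervision idea, not the authors' implementation: the random stand-in "detector" features, the linear projector and decoder, the toy label and caption targets, and the equal weighting of the two losses are all assumptions made to keep the example self-contained.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins: the real system uses an MM-GDINO detector and a large language
# model; small tensors and linear layers keep this sketch self-contained and runnable.
torch.manual_seed(0)
num_regions, feat_dim, llm_dim, vocab = 8, 256, 512, 1000

detector_features = torch.randn(1, num_regions, feat_dim)   # region features from the detector
label_embeddings  = torch.randn(1, 4, feat_dim)              # text embeddings of candidate labels
region_targets    = torch.randint(0, 4, (1, num_regions))    # which label each region should match
caption_targets   = torch.randint(0, vocab, (1, 16))         # tokenized long image-level caption

projector = nn.Linear(feat_dim, llm_dim)   # stage 1: align detector features to the LLM space
lm_head   = nn.Linear(llm_dim, vocab)      # stand-in for the language model's decoder

# Grounding loss: match each region feature to its text label (contrastive-style alignment).
logits = torch.einsum("brd,bld->brl", detector_features, label_embeddings)
grounding_loss = F.cross_entropy(logits.view(-1, 4), region_targets.view(-1))

# Caption loss: predict caption tokens from projected visual features (next-token style supervision).
visual_tokens = projector(detector_features).mean(dim=1, keepdim=True)   # pooled scene token
caption_logits = lm_head(visual_tokens).expand(-1, 16, -1)               # toy decoder over caption length
caption_loss = F.cross_entropy(caption_logits.reshape(-1, vocab), caption_targets.view(-1))

total_loss = grounding_loss + caption_loss   # dual supervision; equal weighting is an assumption
total_loss.backward()
print(float(grounding_loss), float(caption_loss))
```

In the paper's setup the language model is used only to provide this caption supervision during training and is dropped at inference, so only the detector pays the runtime cost.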
The model processes information at two levels: region-level descriptions categorize individual objects, while image-level captions capture scene-wide contextual relationships. Despite incorporating a large language model during training, inference remains efficient because the language model is discarded at inference time.

The approach attains state-of-the-art results across a range of open-vocabulary object detection benchmarks, with clear gains in detection accuracy, generalization, and robustness. It surpasses prior models by 3.3% to 14.3% AP on LVIS, with marked improvement on rare classes. On ODinW, a benchmark spanning object detection across diverse domains, it shows better zero-shot transferability. Robustness to domain shift is also confirmed by improved performance on COCO-O, a dataset that measures performance under natural variations. In referring expression comprehension tasks, it attains the best accuracy on RefCOCO, RefCOCO+, and RefCOCOg, affirming its capacity to align textual descriptions with detected objects. Ablation experiments show that combining image-level captioning with region-level grounding contributes significantly to performance, especially for rare objects. In addition, incorporating the learned detector into multi-modal models improves vision-language alignment, suppresses hallucinations, and improves accuracy in visual question answering.

By using large language models to supervise open-vocabulary detection, LLMDet offers a scalable and efficient learning paradigm. It addresses the primary shortcomings of existing OVD frameworks, delivering state-of-the-art performance on several detection benchmarks along with improved zero-shot generalization and rare-class detection, and its integration of vision-language learning promotes cross-domain adaptability and stronger multi-modal interactions, showing the promise of language-guided supervision in object detection research.

Check out the Paper. All credit for this research goes to the researchers of this project.
-
TOWARDSAI.NET
How AI is Quietly Eating the Internet

Author(s): Mukundan Sankar. Originally published on Towards AI, February 11, 2025.

Every website, every app, every piece of content: you're already consuming AI-generated information, and you don't even know it.

Image created by the author using ChatGPT

The internet is no longer human-driven. Every search result you see, every news article you read, every product recommendation you get is shaped, ranked, or outright generated by AI. And it's happening so seamlessly that you don't even notice. AI isn't just an assistant anymore; it's the architect of the digital experience. But how did we get here? And what does it mean for the way we consume and trust online information?

When you Google something, you're not getting an organic list of the best results. You're getting what AI thinks is best for you. Google's search algorithm is powered by AI models like RankBrain and BERT, which predict what you meant to search for, not just what you typed. Increasingly, search engines are shifting toward AI-generated answers rather than listing human-written sources. The rise of AI-powered search tools like Perplexity AI and ChatGPT's browsing features means the future of search might not even include traditional links at all.

For content creators, this is a seismic shift. If AI decides what gets seen, how do you ensure your content stays relevant? A study published suggests... Read the full blog for free on Medium.
-
TOWARDSAI.NET
A Neural Sparse Graphical Model for Variable Selection and Time-Series Network Analysis

Author(s): Shenggang Li. Originally published on Towards AI, February 10, 2025.

A unified adjacency learning and nonlinear forecasting framework for high-dimensional data.

Photo by Susan Q Yin on Unsplash

Imagine a spreadsheet with rows of timestamps and columns labeled x_1, x_2, and so on. Each x_n might represent a product's sales, a stock's price, or a gene's expression level. But these variables rarely evolve in isolation; they often influence one another, sometimes with notable time lags. To handle these interactions, we need a robust time-series network that models how each variable behaves in relation to the others. This paper focuses on precisely that objective.

For instance, last month's dip in x_1 could trigger a spike in x_2 this month, or perhaps half these columns are simply noise that drowns out the key relationships I want to track. My quest was to figure out how to select the most important variables and build a reliable model of how each x_m depends on the others over time. For example, is x_m mostly driven by x_1 and x_2 from the previous day, or does it depend on all variables from the previous week? I looked into various ideas, like Graph Neural Networks (GNNs) to capture who influences whom, structural modeling for domain-specific equations, or more exotic approaches like... Read the full blog for free on Medium.
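To make the variable-selection idea concrete, here is a small generic sketch, not the author's model: it assumes a learnable lagged adjacency tensor regularized with an L1 penalty plus a small MLP forecaster, and every name, shape, and hyperparameter is chosen purely for illustration.

```python
import torch
import torch.nn as nn

class SparseLaggedForecaster(nn.Module):
    """Toy sketch: learn a lagged adjacency (who influences whom, and at which lag)
    with an L1 penalty for sparsity, then forecast each series from the mixed signal."""

    def __init__(self, num_series: int, num_lags: int, hidden: int = 32):
        super().__init__()
        # adjacency[l, j, i] ~ influence of series j at lag l+1 on series i
        self.adjacency = nn.Parameter(torch.zeros(num_lags, num_series, num_series))
        self.mlp = nn.Sequential(                  # small nonlinearity on the aggregated signal
            nn.Linear(num_series, hidden), nn.ReLU(), nn.Linear(hidden, num_series)
        )

    def forward(self, lagged: torch.Tensor) -> torch.Tensor:
        # lagged: (batch, num_lags, num_series) -> prediction: (batch, num_series)
        mixed = torch.einsum("bls,lsi->bi", lagged, self.adjacency)
        return self.mlp(mixed)

    def sparsity_penalty(self) -> torch.Tensor:
        return self.adjacency.abs().sum()          # L1 drives weak edges toward zero

# Synthetic example: 50 series, 3 lags, where series 1 is partly driven by lagged series 0.
torch.manual_seed(0)
T, N, L = 200, 50, 3
series = torch.randn(T, N)
series[1:, 1] += 0.8 * series[:-1, 0]

X = torch.stack([series[L - 1 - l : T - 1 - l] for l in range(L)], dim=1)  # (T-L, L, N) lagged inputs
y = series[L:]                                                             # next-step targets

model = SparseLaggedForecaster(N, L)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y) + 1e-3 * model.sparsity_penalty()
    loss.backward()
    opt.step()

# Surviving entries in model.adjacency suggest which lagged variables drive which targets;
# the edge from series 0 to series 1 (x_1 -> x_2 in the article's notation) should stand out.
print(model.adjacency.abs().sum(dim=0)[0, 1])
```

The L1 term plays the variable-selection role described above: edges whose lagged inputs do not help the forecast are pushed toward zero, leaving a sparse network of influences to interpret.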
-
THEHACKERNEWS.COM
Apple Patches Actively Exploited iOS Zero-Day CVE-2025-24200 in Emergency Update

Apple on Monday released out-of-band security updates to address a security flaw in iOS and iPadOS that it said has been exploited in the wild. Assigned the CVE identifier CVE-2025-24200, the vulnerability has been described as an authorization issue that could make it possible for a malicious actor to disable USB Restricted Mode on a locked device as part of a cyber-physical attack.

This suggests that attackers require physical access to the device in order to exploit the flaw. Introduced in iOS 11.4.1, USB Restricted Mode prevents an Apple iOS or iPadOS device from communicating with a connected accessory if it has not been unlocked and connected to an accessory within the past hour. The feature is seen as an attempt to prevent digital forensics tools like Cellebrite or GrayKey, which are mainly used by law enforcement agencies, from gaining unauthorized entry to a confiscated device and extracting sensitive data.

In line with advisories of this kind, no other details about the security flaw are currently available. The iPhone maker said the vulnerability was addressed with improved state management. However, Apple acknowledged that it's "aware of a report that this issue may have been exploited in an extremely sophisticated attack against specific targeted individuals." Security researcher Bill Marczak of The Citizen Lab at the University of Toronto's Munk School has been credited with discovering and reporting the flaw.

The update is available for the following devices and operating systems:
- iOS 18.3.1 and iPadOS 18.3.1: iPhone XS and later; iPad Pro 13-inch; iPad Pro 12.9-inch 3rd generation and later; iPad Pro 11-inch 1st generation and later; iPad Air 3rd generation and later; iPad 7th generation and later; and iPad mini 5th generation and later
- iPadOS 17.7.5: iPad Pro 12.9-inch 2nd generation, iPad Pro 10.5-inch, and iPad 6th generation

The development comes weeks after Cupertino resolved another security flaw, a use-after-free bug in the Core Media component (CVE-2025-24085), which it said had been exploited against versions of iOS before iOS 17.2. Zero-days in Apple software have primarily been weaponized by commercial surveillanceware vendors to deploy sophisticated programs that can extract data from victim devices.

While these tools, such as NSO Group's Pegasus, are marketed as "technology that saves lives" and as a way to combat serious criminal activity and the so-called "Going Dark" problem, they have also been misused to spy on members of civil society. NSO Group, for its part, has reiterated that Pegasus is not a mass surveillance tool and that it's licensed to "legitimate, vetted intelligence and law enforcement agencies." In its transparency report for 2024, the Israeli company said it serves 54 customers in 31 countries, of which 23 are intelligence agencies and another 23 are law enforcement agencies.
-
WWW.BDONLINE.CO.UK
Youth-designed pavilion unveiled in Camden's HS2 meanwhile garden

A new pavilion and performance space has been installed in a temporary public garden in Camden, co-designed by young people in collaboration with social enterprise MATT+FIONA and Fitzrovia Youth in Action. The six-metre-tall structure, called Reflect, is intended as a focal point for the space. (Photos: Nick Turpin)

A new public pavilion, designed in collaboration with young people, has been installed in a temporary green space created by HS2 in Camden. The structure, named Reflect, was developed through a youth-led co-design process led by social enterprise MATT+FIONA in partnership with Fitzrovia Youth in Action, a local charity.

The site, previously occupied by the National Temperance Hospital, is one of several plots in the Euston area designated for temporary use by HS2. This garden is the only one in the series to have been co-designed with local young people and the community.

The project follows a community engagement programme on the Regent's Park Estate, which identified a need for recreational and safe green spaces for young people. Forty-eight participants initially contributed ideas, with a core group of 18 young placemakers developing the final design over 12 weeks. The structure was fabricated by the young designers during a practical workshop at the Euston Skills Centre.

Ellie Rudd, youth leadership and Regent's Park community champions manager at Fitzrovia Youth in Action, said: "It has been really exciting to see a renewed commitment to youth-led decision making and co-production, in not only planning and design, but also in the physical creation of this meanwhile-use space."

Rising six metres, the structure comprises a blue timber stage with seating and has been designed to accommodate wheelchair users. It incorporates steel, timber, and stainless steel, with shaped panels fabricated by the young participants. The upper section features mirrored surfaces intended to create visual effects, reflecting performers back to the audience. (Photos of the design and build process: Jon Shmulevitch)

Matthew Springett, director at MATT+FIONA, said: "The youth-led co-design process for this project brought the design ideas and vision for a new playable landscape to the fore. This unique project demonstrates the true value that comes from trusting young people to contribute to the shaping of the public spaces that we all share."

The wider garden has been designed by LDA Design and includes a maze of long grasses, parterre gardens, and naturalistic planting aimed at increasing biodiversity. The design also reuses material from the former construction site, integrating the concrete foundation slabs of the HS2 compound into the landscape.

Dafydd Warburton, design director and project lead at LDA Design, said: "We wanted to create a fun but also high-quality space, reusing and recycling wherever possible."

The pavilion is intended to be relocated to the Regent's Park Estate when the site is redeveloped as part of HS2's masterplan.
-
WWW.FORBES.COM
Today's NYT Mini Crossword Clues And Answers For Tuesday, February 11th

Looking for help with today's NYT Mini Crossword puzzle? Here are some hints and answers for the puzzle. (Credit: NYT)

In case you missed Monday's NYT Mini Crossword puzzle, you can find the answers here.

The NYT Mini is a smaller, quicker, more digestible, bite-sized version of the larger and more challenging NYT Crossword, and unlike its larger sibling, it's free to play without a subscription to The New York Times. You can play it on the web or in the app, though you'll need the app to tackle the archive.

Spoilers ahead!

ACROSS
1. Yoga discipline with a name from Sanskrit: HATHA
6. ___ run (testing-out stage): TRIAL
7. ___ run (jog in the woods): TRAIL
8. Deflect an attack, in fencing: PARRY
9. "Woo-hoo!": YAY

DOWN
1. Internet address starter: HTTP
2. Matrixlike grid: ARRAY
3. Headwear for a princess: TIARA
4. Like Chewbacca and Mr. Snuffleupagus: HAIRY
5. Supporter of L.G.B.T.Q. rights: ALLY

Today's Mini (Credit: Erik Kain)

This one was pretty easy, though I had no idea about 1-Across. That wasn't a big deal, as I quickly filled in the rest of the blanks until HATHA became apparent. The whole thing took 44 seconds.

How did you do? Let me know on Twitter, Instagram, or Facebook. If you also play Wordle, I write guides about that as well. You can find those and all my TV guides, reviews, and much more on my blog. Thanks for reading!