• WWW.POLYGON.COM
    Watching this YouTuber restore old consoles is oddly satisfying
    YouTube channel Odd Tinkering doesn't hold many surprises, other than the disbelief at how good the consoles, controllers, and keyboards they restore look when the work is done. What starts as a broken, dirty Game Boy Color in this video becomes a crisp, clean handheld fit to be a 1998 Christmas present.

    The creator, seemingly based in Finland, keeps their name off their channel, but that doesn't mean they're totally mysterious to fans. Subscribers often send in their water-damaged, Cheeto-finger-stained, cracked gaming devices to be repaired, and the process is satisfying and nostalgic. (That makes this channel a great stoned watch, by the way.) It seems like the person restoring the items is a true gamer, too; some of the projects are deep cuts, like the Pokémon Mini.

    The depth of knowledge around preserving and restoring sometimes sensitive materials, like processing chips and porous plastic, is a huge part of the fun with Odd Tinkering's videos. That, and the interesting captions in English that let you occasionally learn something about how keyboards or consoles are made, or how to modernize a piece of old tech you wish you could still use. There's something endearing, too, about going to such great lengths just to preserve a crappy 2005 office keyboard. Those things we use every day do matter. It's also clear there's a real passion for restoration; plenty of videos on the channel show restorations of antiques like this rusty straight razor. After all, it's about the tinkering and the oddity, not the items themselves.
  • WWW.TECHRADAR.COM
    Thousands of widely used public workspaces are leaking data
    Major platforms impacted include GitHub, Slack, and Salesforce.
  • WWW.DEZEEN.COM
    Ménard Dworkind combines bold yellows with muted tones in Montreal restaurant
    Local studio Ménard Dworkind has outfitted an irregularly shaped Vietnamese restaurant interior with blue and white mosaic tile and yellow accents in Montreal.

    Le Red Tiger is in a slanted storefront in Montreal's Shop Angus district, the restaurant's second location. The design works to preserve the lively spirit and energy of its predecessor, which opened in 2015 in the Village neighbourhood. Montreal-based Ménard Dworkind helped create a warm, festive background for the immersive culinary experience, where over 100 guests can dine on cuisine inspired by Vietnamese street food and a selection of cocktails.

    "The idea was to let restaurant goers bite into what Le Red Tiger is serving and take a trip through Vietnam, no airfare required," co-founder Guillaume Ménard told Dezeen. The interiors blend utilitarian necessities, like the exposed chrome ductwork, with vintage touches to evoke nostalgia. "Le Red Tiger's interior design strikes a perfect balance between modernity and vintage charm."

    The custom-made ceramic mosaic floor creates a sense of movement with blocky zig-zags in light blue and black on a white background, while black walnut furniture and flooring bring vintage warmth to the space. The lime plaster and beige square tile walls fade into the background as a neutral element.

    The restaurant makes the most of its small, irregular footprint with platform dining spaces raised above the main floor, paired with a mix of different seating types that correspond with the changes in elevation. The multiple layers add both depth and perspective to the dining area. An open kitchen runs along the party wall, allowing guests to interact with and watch the bartenders and chefs in action, while more kitchen prep space is sequestered in a back-of-house room.

    Above the yellow-accented bar, on a walnut-wrapped bulkhead, hangs an orange neon sign that reads "càng đông càng vui" (the more, the merrier) as a nod to the restaurant's Vietnamese roots and philosophy of dining together. Lantern pendants are mixed with bright, yellow-painted metal ones. The centre of the plan features a long, angled community table made from British Columbia fir. Paired with industrial-style stools, the table invites guests to socialize and share under the suspended Asian lanterns.

    Light sage green and bold yellow doors play off the pale blue in the floor, which wraps up the wall for a 180-degree patterned surround in the hallway and restrooms. Continuing the trend of combining muted tones with bold colours, a soft coral-coloured banquette runs opposite the bar, wrapping the back corner of the raised dining space, where a built-in planter provides organic relief from the geometric interiors. Combining multiple types of seating with varying textures and a specific accent colour is a signature of Ménard Dworkind's restaurant interior work.

    The studio has completed multiple other Montreal dining spaces, including a mirrored Italian restaurant with leafy planters, a retro coffee bar with teal booths and checkerboard flooring, a French restaurant with custom wine storage and an acoustic inlay ceiling design, and a New York-style pizzeria with white pine and green wall treatments.

    The photography is by Alex Lesage.

    Project credits:
    Architecture: MRDK
    Design team: Guillaume Ménard, Fabrice Doutriaux
    Contractor: Group Manovra
    Ceramic floor tile: Daltile
    Lighting: Herman Miller, Humanhome
    Furniture: Keca, Jussaume
    Fabric: CTL leather
  • APPLEINSIDER.COM
    Logitech Wave Keys for Mac review: Ergonomic without hurting your wallet
    Logitech's Wave Keys for Mac offers an ergonomic and quiet typing experience, and for a reasonable price, too.

    Working at a desk for long periods can be harmful to your health without the proper equipment. Many people really should be looking into making their computing environment more ergonomic, reducing the chance of aches and pains from prolonged use.

    When it comes to keyboards, there are a few options, but one of the more recognizable ergonomic approaches is a wavy keyboard: one where the keys aren't in straight lines, and sit at varied heights to match your hands' resting positions.

    Continue Reading on AppleInsider | Discuss on our Forums
  • ARCHINECT.COM
    Surveying Australian architecture's inspiring 2024 moment
    Both buildings attracted high praise for the way the architects had seamlessly incorporated local First Nations culture and history into their design, a practice an increasing number of Australian architects are prioritising in their work, and one that is making the rest of the world sit up and take notice. "Challenges to architects are also opportunities, because it means that their creativity is pushed, and this is what we're seeing a lot of from Australian architects," the World Architecture Festival (WAF)'s Paul Finch tells The Guardian in Kelly Burke's review of the Indigenous influences and other inclusions powering Australia's strong showing on this year's international competition and awards circuit. The WAF's World Building of the Year winner, Darlington Public School from fjcstudio, is mentioned as such, along with Bates Smart's new Australian Embassy building, a trio of 2024 National Architecture Awards winners, and the "solar skin"-covered, Kennon-designed 550 Spencer Street tower in Melbourne. Two others on the horizon, REX's copper-screen-facade 205 North Quay office design and the newly topped-out Sydney Fish Market hall from 3XN, appear ready to follow their example in 2025.
  • VENTUREBEAT.COM
    Five breakthroughs that make OpenAI's o3 a turning point for AI, and one big challenge
    The end of 2024 has brought reckonings for artificial intelligence, as industry insiders feared progress toward even more intelligent AI was slowing down. But OpenAI's o3 model, announced just last week, has sparked a fresh wave of excitement and debate, and suggests big improvements are still to come in 2025 and beyond.

    This model, announced for safety testing among researchers but not yet released publicly, achieved an impressive score on the important ARC benchmark. The benchmark was created by François Chollet, a renowned AI researcher and creator of the Keras deep learning framework, and is specifically designed to measure a model's ability to handle novel, intelligent tasks. As such, it provides a meaningful gauge of progress toward truly intelligent AI systems.

    Notably, o3 scored 75.7% on the ARC benchmark under standard compute conditions and 87.5% using high compute, significantly surpassing previous state-of-the-art results, such as the 53% scored by Claude 3.5.

    This achievement by o3 represents a surprising advancement, according to Chollet, who had been a critic of the ability of large language models (LLMs) to achieve this sort of intelligence. It highlights innovations that could accelerate progress toward superior intelligence, whether we call it artificial general intelligence (AGI) or not.

    AGI is a hyped and ill-defined term, but it signals a goal: intelligence capable of adapting to novel challenges or questions in ways that surpass human abilities.

    OpenAI's o3 tackles specific hurdles in reasoning and adaptability that have long stymied large language models. At the same time, it exposes challenges, including the high costs and efficiency bottlenecks inherent in pushing these systems to their limits.
    This article will explore five key innovations behind the o3 model, many of which are underpinned by advancements in reinforcement learning (RL). It will draw on insights from industry leaders, OpenAI's claims, and above all Chollet's important analysis to unpack what this breakthrough means for the future of AI as we move into 2025.

    The five core innovations of o3

    1. Program synthesis for task adaptation

    OpenAI's o3 model introduces a new capability called program synthesis, which enables it to dynamically combine things that it learned during pre-training (specific patterns, algorithms, or methods) into new configurations. These things might include mathematical operations, code snippets, or logical procedures that the model has encountered and generalized during its extensive training on diverse datasets. Most significantly, program synthesis allows o3 to address tasks it has never directly seen in training, such as solving advanced coding challenges or tackling novel logic puzzles that require reasoning beyond rote application of learned information. François Chollet describes program synthesis as a system's ability to recombine known tools in innovative ways, like a chef crafting a unique dish using familiar ingredients. This feature marks a departure from earlier models, which primarily retrieve and apply pre-learned knowledge without reconfiguration, and it's also one that Chollet had advocated for months ago as the only viable way forward to better intelligence.

    2. Natural language program search

    At the heart of o3's adaptability is its use of Chains of Thought (CoTs) and a sophisticated search process that takes place during inference, when the model is actively generating answers in a real-world or deployed setting. These CoTs are step-by-step natural language instructions the model generates to explore solutions. Guided by an evaluator model, o3 actively generates multiple solution paths and evaluates them to determine the most promising option. This approach mirrors human problem-solving, where we brainstorm different methods before choosing the best fit. For example, in mathematical reasoning tasks, o3 generates and evaluates alternative strategies to arrive at accurate solutions. Competitors like Anthropic and Google have experimented with similar approaches, but OpenAI's implementation sets a new standard.

    3. Evaluator model: A new kind of reasoning

    O3 actively generates multiple solution paths during inference, evaluating each with the help of an integrated evaluator model to determine the most promising option. By training the evaluator on expert-labeled data, OpenAI ensures that o3 develops a strong capacity to reason through complex, multi-step problems. This feature enables the model to act as a judge of its own reasoning, moving large language models closer to being able to "think" rather than simply respond.

    4. Executing its own programs

    One of the most groundbreaking features of o3 is its ability to execute its own Chains of Thought (CoTs) as tools for adaptive problem-solving. Traditionally, CoTs have been used as step-by-step reasoning frameworks to solve specific problems. OpenAI's o3 extends this concept by leveraging CoTs as reusable building blocks, allowing the model to approach novel challenges with greater adaptability. Over time, these CoTs become structured records of problem-solving strategies, akin to how humans document and refine their learning through experience. This ability demonstrates how o3 is pushing the frontier in adaptive reasoning. According to OpenAI engineer Nat McAleese, o3's performance on unseen programming challenges, such as achieving a CodeForces rating above 2700, showcases its innovative use of CoTs to rival top competitive programmers. This 2700 rating places the model at Grandmaster level, among the top echelon of competitive programmers globally.

    5. Deep learning-guided program search

    O3 leverages a deep learning-driven approach during inference to evaluate and refine potential solutions to complex problems. This process involves generating multiple solution paths and using patterns learned during training to assess their viability. François Chollet and other experts have noted that this reliance on indirect evaluations (where solutions are judged based on internal metrics rather than tested in real-world scenarios) can limit the model's robustness when applied to unpredictable or enterprise-specific contexts. Additionally, o3's dependence on expert-labeled datasets for training its evaluator model raises concerns about scalability. While these datasets enhance precision, they also require significant human oversight, which can restrict the system's adaptability and cost-efficiency. Chollet highlights that these trade-offs illustrate the challenges of scaling reasoning systems beyond controlled benchmarks like ARC-AGI. Ultimately, this approach demonstrates both the potential and limitations of integrating deep learning techniques with programmatic problem-solving. While o3's innovations showcase progress, they also underscore the complexities of building truly generalizable AI systems.

    The big challenge to o3

    OpenAI's o3 model achieves impressive results, but at significant computational cost, consuming millions of tokens per task, and this costly approach is the model's biggest challenge. François Chollet, Nat McAleese, and others highlight concerns about the economic feasibility of such models, emphasizing the need for innovations that balance performance with affordability.

    The o3 release has sparked attention across the AI community.
    Competitors such as Google with Gemini 2 and Chinese firms like DeepSeek with DeepSeek 3 are also advancing, making direct comparisons challenging until these models are more widely tested. Opinions on o3 are divided: some laud its technical strides, while others cite high costs and a lack of transparency, suggesting its real value will only become clear with broader testing. One of the biggest critiques came from Google DeepMind's Denny Zhou, who implicitly attacked the model's reliance on reinforcement learning (RL) scaling and search mechanisms as a potential dead end, arguing instead that a model should be able to learn to reason from simpler fine-tuning processes.

    What this means for enterprise AI

    Whether or not it represents the perfect direction for further innovation, o3's newfound adaptability shows that AI will, one way or another, continue to transform industries, from customer service to scientific research. Industry players will need some time to digest what o3 has delivered here. For enterprises concerned about o3's high computational costs, OpenAI's upcoming release of the scaled-down o3-mini version of the model provides a potential alternative. While it sacrifices some of the full model's capabilities, o3-mini promises a more affordable option for businesses to experiment with, retaining much of the core innovation while significantly reducing test-time compute requirements.

    It may be some time before enterprise companies can get their hands on the o3 model. OpenAI says o3-mini is expected to launch by the end of January, with the full o3 release to follow, though the timelines depend on feedback and insights gained during the current safety testing phase. Enterprise companies will be well advised to test it out; they'll want to ground the model with their data and use cases and see how it really works. But in the meantime, they can already use the many other competent, well-tested models that are already out, including the flagship GPT-4o model and other competing models, many of which are already robust enough for building intelligent, tailored applications that deliver practical value.

    Indeed, next year we'll be operating in two gears. The first is achieving practical value from AI applications, fleshing out what models can do with AI agents and other innovations already achieved. The second will be sitting back with the popcorn and watching how the intelligence race plays out; any progress will just be icing on the cake that has already been delivered.

    For more on o3's innovations, watch the full YouTube discussion between myself and Sam Witteveen below, and follow VentureBeat for ongoing coverage of AI advancements.
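The generate-then-judge loop described in the innovations above (sample several candidate solution paths, score each with an evaluator, keep the best) can be sketched in a few lines. Everything below is hypothetical toy code, not OpenAI's implementation: a stand-in generator proposes candidate answers and a stand-in evaluator scores them.

```python
def best_of_n(generate, evaluate, prompt, n=5):
    """Generate n candidate solution paths and keep the one the evaluator
    scores highest: the generate-then-judge loop used by reasoning models."""
    candidates = [generate(prompt, i) for i in range(n)]
    scores = [evaluate(prompt, c) for c in candidates]
    best = max(range(n), key=lambda i: scores[i])
    return candidates[best], scores[best]

# Hypothetical stand-ins: each "chain of thought" proposes a different
# answer to "1 + 3", and the evaluator rewards closeness to the truth.
def toy_generate(prompt, seed):
    return 2 + seed  # candidates: 2, 3, 4, 5, 6

def toy_evaluate(prompt, candidate):
    return -abs(candidate - 4)  # 0 is the best possible score

answer, score = best_of_n(toy_generate, toy_evaluate, "1 + 3", n=5)
# answer == 4, score == 0
```

In a real system the generator and evaluator are both large models and the candidates are full reasoning traces, which is exactly why per-task token costs balloon: every extra candidate multiplies inference spend.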
  • VENTUREBEAT.COM
    2025's upcoming games poised to be best-sellers
    2025's games calendar is already packed with major titles. Which ones look like they're best-placed to capture gamers' attention?Read More
  • WWW.THEVERGE.COM
    Is your iPhone sharing photos with Apple by default?
    Apple occasionally makes choices that tarnish its strong privacy-forward reputation, like when it was secretly collecting users' Siri interactions. Yesterday, a blog post from developer Jeff Johnson highlighted such a choice: an Enhanced Visual Search toggle for the Apple Photos app that is seemingly on by default, giving your device permission to share data from your photos with Apple.

    Sure enough, when I checked my iPhone 15 Pro this morning, the toggle was switched on. You can find it for yourself by going to Settings > Photos (or System Settings > Photos on a Mac). Enhanced Visual Search lets you look up landmarks you've taken pictures of, or search for those images using the names of those landmarks.

    To see what it enables in the Photos app, swipe up on a picture you've taken of a building and select "Look up Landmark," and a card will appear that ideally identifies it. Here are a couple of examples from my phone: that's definitely Austin's Cathedral of Saint Mary, but the image on the right is not a Trappist monastery; it's the Dubuque, Iowa city hall building. (Screenshots: Apple Photos)

    On its face, it's a convenient expansion of Photos' Visual Look Up feature, introduced in iOS 15, that lets you identify plants or, say, find out what those symbols on a laundry tag mean. But Visual Look Up doesn't need special permission to share data with Apple, and this does.

    A description under the toggle says you're giving Apple permission to privately match places in your photos with a global index maintained by Apple. As for how, there are details in an Apple machine-learning research blog about Enhanced Visual Search that Johnson links to:

    The process starts with an on-device ML model that analyzes a given photo to determine if there is a region of interest (ROI) that may contain a landmark. If the model detects an ROI in the landmark domain, a vector embedding is calculated for that region of the image.

    According to the blog, that vector embedding is then encrypted and sent to Apple to compare with its database. The company offers a very technical explanation of vector embeddings in a research paper, but IBM put it more simply, writing that embeddings transform a data point, such as a word, sentence or image, into an n-dimensional array of numbers representing that data point's characteristics.

    Like Johnson, I don't fully understand Apple's research blogs, and Apple didn't immediately respond to our request for comment about Johnson's concerns. It seems as though the company went to great lengths to keep the data private, in part by condensing image data into a format that's legible to an ML model. Even so, making the toggle opt-in, like those for sharing analytics data or recordings of Siri interactions, rather than something users have to discover, seems like it would have been a better option.
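IBM's description suggests a simple way to picture the matching step: represent each landmark and each photo region as a vector of numbers, then pick the index entry whose vector points in the most similar direction. The three-dimensional vectors and landmark names below are invented for illustration; Apple's real embeddings are far higher-dimensional and, per its blog, compared under encryption rather than in the clear.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors:
    1.0 means identical direction, near 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-dimensional "global index"; real models use hundreds
# of dimensions per embedding.
index = {
    "Cathedral of Saint Mary": [0.9, 0.1, 0.3],
    "Dubuque City Hall": [0.2, 0.8, 0.5],
}

# Embedding computed from a photo's region of interest (made up here).
query = [0.88, 0.15, 0.28]
best = max(index, key=lambda name: cosine_similarity(query, index[name]))
# best == "Cathedral of Saint Mary"
```

The misidentified city hall in the example above is what it looks like when a query vector lands closer to the wrong index entry than the right one.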
  • WWW.MARKTECHPOST.COM
    This AI Paper Introduces XMODE: An Explainable Multi-Modal Data Exploration System Powered by LLMs for Enhanced Accuracy and Efficiency
    Researchers are focusing increasingly on creating systems that can handle multi-modal data exploration, which combines structured and unstructured data. This involves analyzing text, images, videos, and databases to answer complex queries. These capabilities are crucial in healthcare, where medical professionals interact with patient records, medical imaging, and textual reports. Similarly, multi-modal exploration helps interpret databases with metadata, textual critiques, and artwork images in art curation or research. Seamlessly combining these data types offers significant potential for decision-making and insights.

    One of the main challenges in this field is enabling users to query multi-modal data using natural language. Traditional systems struggle to interpret complex queries that involve multiple data formats, such as asking for trends in structured tables while analyzing related image content. Moreover, the absence of tools that provide clear explanations for query outcomes makes it difficult for users to trust and validate the results. These limitations create a gap between advanced data processing capabilities and real-world usability.

    Current solutions attempt to address these challenges using two main approaches. The first integrates multiple modalities into unified query languages, such as NeuralSQL, which embeds vision-language functions directly into SQL commands. The second uses agentic workflows that coordinate various tools for analyzing specific modalities, exemplified by CAESURA. While these approaches have advanced the field, they fall short in optimizing task execution, ensuring explainability, and addressing complex queries efficiently. These shortcomings highlight the need for a system capable of dynamic adaptation and clear reasoning.

    Researchers at Zurich University of Applied Sciences have introduced XMODE, a novel system designed to address these issues.
    XMODE enables explainable multi-modal data exploration using a Large Language Model (LLM)-based agentic framework. The system interprets user queries and decomposes them into subtasks like SQL generation and image analysis. By creating workflows represented as Directed Acyclic Graphs (DAGs), XMODE optimizes the sequence and execution of tasks. This approach improves efficiency and accuracy compared to state-of-the-art systems like CAESURA and NeuralSQL. Moreover, XMODE supports task re-planning, enabling it to adapt when specific components fail.

    The architecture of XMODE includes five key components: planning and expert model allocation, execution and self-debugging, decision-making, expert tools, and a shared data repository. When a query is received, the system constructs a detailed workflow of tasks, assigning them to appropriate tools like SQL generation modules and image analysis models. These tasks are executed in parallel wherever possible, reducing latency and computational costs. Further, XMODE's self-debugging capabilities allow it to identify and rectify errors in task execution, ensuring reliability. This adaptability is critical for handling complex workflows that involve diverse data modalities.

    XMODE demonstrated superior performance during testing on two datasets. On an artwork dataset, XMODE achieved 63.33% accuracy overall, compared to CAESURA's 33.33%. It excelled in handling tasks requiring complex outputs, such as plots and combined data structures, achieving 100% accuracy in generating plot-plot and plot-data-structure outputs. Also, XMODE's ability to execute tasks in parallel reduced latency to 3,040 milliseconds, compared to CAESURA's 5,821 milliseconds. These results highlight its efficiency in processing natural language queries over multi-modal datasets.

    On the electronic health records (EHR) dataset, XMODE achieved 51% accuracy overall, outperforming NeuralSQL in multi-table queries, scoring 77.50% compared to NeuralSQL's 47.50%. The system demonstrated strong performance in handling binary queries, achieving 74% accuracy, significantly higher than NeuralSQL's 48% in the same category. XMODE's capability to adapt and re-plan tasks contributed to its robust performance, making it particularly effective in scenarios requiring detailed reasoning and cross-modal integration.

    XMODE effectively addresses the limitations of existing multi-modal data exploration systems by combining advanced planning, parallel task execution, and dynamic re-planning. Its innovative approach allows users to query complex datasets efficiently, ensuring transparency and explainability. With demonstrated improvements in accuracy, efficiency, and cost-effectiveness, XMODE represents a significant advancement in the field, offering practical applications in areas such as healthcare and art curation.

    Check out the Paper. All credit for this research goes to the researchers of this project.
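The workflow idea described above (decompose a query into subtasks, record dependencies as a DAG, and run independent subtasks in parallel) can be sketched with a small stage scheduler. The task names below are hypothetical stand-ins for illustration; XMODE's actual planner is LLM-driven and assigns each subtask to an expert tool.

```python
def schedule(dependencies):
    """Group DAG tasks into stages: every task in a stage depends only on
    tasks from earlier stages, so tasks within one stage can run in parallel."""
    remaining = {task: set(deps) for task, deps in dependencies.items()}
    stages = []
    while remaining:
        # Tasks whose dependencies are all satisfied are ready to run.
        ready = sorted(t for t, deps in remaining.items() if not deps)
        if not ready:
            raise ValueError("cycle detected in workflow")
        stages.append(ready)
        for t in ready:
            del remaining[t]
        for deps in remaining.values():
            deps.difference_update(ready)
    return stages

# Hypothetical decomposition of a multi-modal query such as
# "plot the patients whose X-rays show an abnormality":
workflow = {
    "generate_sql": [],
    "image_analysis": [],
    "join_results": ["generate_sql", "image_analysis"],
    "plot": ["join_results"],
}
stages = schedule(workflow)
# [['generate_sql', 'image_analysis'], ['join_results'], ['plot']]
```

Running `generate_sql` and `image_analysis` in the same stage, rather than one after the other, is the kind of parallelism behind the latency gap the paper reports against CAESURA.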
  • 9TO5MAC.COM
    Apple is redesigning the Magic Mouse: Heres what we know so far
    Mark Gurman from Bloomberg recently reported that Apple will be redesigning the Magic Mouse for the first time since its introduction in 2009. In the mouse's fifteen-year history, the design has remained mostly the same, besides the switch from AA batteries to Lightning in 2015, and from Lightning to USB-C in 2024. However, that'll soon be changing.

    Charge port placement

    One of the biggest critiques of the Magic Mouse has been the charging port. You have to plug it in from the bottom to charge it, which also means you can't use your computer while your mouse needs to recharge. Some Magic Mouse defenders will say that this is better for your battery health, while others will say that you're realistically only recharging it every six weeks, so it's a nonissue. They may have a concept of a point, but that doesn't change the fact that it's annoying.

    Nonetheless, Gurman reports that this will be changing with the next-gen Magic Mouse: Apple is looking to create "something that's more relevant," while also fixing longstanding complaints, yes, including the charging port issue. So, sometime in the near future this will no longer be a concern.

    Ergonomics

    We don't have specifics on what the new Magic Mouse will actually look like, other than the fact that it'll be completely rethought, hopefully with better ergonomics. Apple aims to make a mouse that fits in better, perhaps something similar to an MX Master:

    The good news is, there's a new Magic Mouse in the works. I'm told that Apple's design team has been prototyping versions of the accessory in recent months, aiming to devise something that better fits the modern era. In a computing world now infused with touch screens, voice commands and hand gestures, the mouse isn't as crucial as it once was.

    Many Mac users enjoy Logitech's MX Master mouse because of its ergonomics and additional features, though you do lose out on some of the multi-touch gestures that the Magic Mouse offers. It'd be cool to see Apple offer the best of both worlds.

    Release date

    The new Magic Mouse is in early development and likely won't see a release until sometime in 2026; Gurman says he wouldn't expect anything in the next 12 to 18 months. Apple is also working on a redesigned MacBook Pro with OLED for 2026, so we'll potentially see the two debut together.

    What would you like to see in a redesigned Magic Mouse? Let us know in the comments.