• WWW.YANKODESIGN.COM
    Robot caddie concept helps you make smart decisions on the green
    There are probably two kinds of people in the world when it comes to robots: those who think it's more convenient to have them around, and those who are scared that they might become our overlords someday, just like in those sci-fi movies. If you're in the former camp, or at least somewhere in between, you won't be surprised or afraid to see robots take on more tasks that humans previously did.

    Designer: Chang Do Oh

    Human caddies have long been companions of golfers everywhere. Rarely do you see a golfer who does everything for themselves; a caddie usually accompanies them and assists them as they play on the green. This concept, called Goalf, replaces that human caddie with a robot, even as golf courses themselves get the smart treatment. Aside from the fact that it won't get tired, this robot is also optimized to help players achieve their golfing goals, hence the name Goalf. Beyond following the golfer around and handing them their golf clubs, this caddie can also recommend which club to use in certain situations. It looks like those information robots roaming malls and airports, but instead of just providing you with facts, you actually get to use your robo-caddie's smart features to improve your game. The renders also show a screen that will probably display statistics like your handicap, swing data, etc.

    Golfers develop a pretty close relationship with their regular human caddies, something Goalf may not be able to replicate unless it also uses AI to converse with the player. But it's an interesting concept to explore and will probably become more common in the future. Here's hoping they don't become our golfing overlords, though.
  • WWW.CREATIVEBLOQ.COM
    The challenge was to create a cool character design: concept artist Marco Teixeira reveals the techniques and inspiration behind his superb 3D character render
    Marco Teixeira explores a personal piece that resonates with Brazilian culture and influences to create an appealing portrait
  • WWW.WIRED.COM
    The 17 Best EVs Coming in 2025
    After Jaguar's chaotic relaunch and the pomp and theater of Tesla's Cybercab reveal party, it's hard to imagine the coming year will top 2024, until you see our picks for the best EVs of 2025.
  • WWW.WIRED.COM
    How to Start (and Keep) a Healthy Habit (2025)
    Whether you want to run a marathon or learn to play the guitar, here's how to set yourself up for success.
  • WWW.WIRED.COM
    Hey, Maybe It's Time to Delete Some Old Chat Histories
    Your messages going back years are likely still lurking online, potentially exposing sensitive information you forgot existed. But there's no time like the present to do some digital decluttering.
  • WWW.COMPUTERWORLD.COM
    44 awesome Android app discoveries from 2024
    It's the calm before the storm. Today, on New Year's Day, we have a brief moment to pause, prepare, and set ourselves up for success. From a tech perspective, that means taking the time to clean up and optimize your smartphone setup. That way, when the inevitable craziness hits, you'll be ready to tackle whatever comes your way with smart, sensible systems and all the best apps already in place and ready to serve you.

We've already thought through the top Android tips and Google Android app tricks from 2024 and even the most noteworthy Pixel-specific advice from the past year. Today, it's time to shift our focus and look at some of the most exceptional (and often off-the-beaten-path) third-party Android apps that can really expand your experience and grant you some exceptionally effective new productivity powers.

Take a peek through the following standout suggestions (44 awesome apps to explore, spread out over a dozen different articles!) and for even more Android Intelligence, make sure you're set to receive my free Android Intelligence newsletter, too. You'll get three new things to try in your inbox every Friday, and you'll get my game-changing Android Notification Power Pack as a special welcome bonus. Here we go!

2024's top Android app advice

Meet the floating Android note app of the future: This dazzlingly different Android note app floats your most important info in an incredibly interesting way.

The secret to summarizing notifications on Android: With about 60 seconds of simple setup, you can have Google's Gemini AI genie sum up your incoming notifications this instant, no matter what Android device you're using.

11 Android Quick Settings additions that'll supercharge your efficiency: These out-of-sight tiles will turn your favorite phone into an even more powerful productivity powerhouse.

3 secrets to a smarter Android status bar: A trio of quick 'n' easy enhancements to transform the top of your screen into a time-saving power-hub.

22 must-have Android widgets for busy professionals: The most exceptional widgets around for making your favorite device's home screen much more useful.

Android widgets gone wild: Why stop with the home screen? This wow-worthy widget wonder will make whatever Android device you're using infinitely more efficient, in a way that only Android could provide.

An awesome Android audio upgrade: Whether you're dealing with mumblings from meetings, noises from notifications, or music from commute-time streaming, you've never experienced sound on your phone like this.

The best Android app drawer enhancement you'll ever make: Free your phone's app drawer from its shackles and watch your efficiency soar.

An instant Android motion sickness upgrade: Ever wish you could look down at your phone or tablet in a car without getting queasy? Here's your answer.

60 seconds to a smarter Android screen timeout: This quick 'n' simple enhancement will make your day-to-day Android doings meaningfully more pleasant.

This awesome Android weather app reads the forecast out loud: A thoughtful interface, on-demand audio forecasts, and actual human meteorologists set this app apart.

Bonus: 8 AI-powered apps that'll actually save you time: Most AI apps are buzzword-chasing hype-mongers. These eight off-the-beaten-path supertools (while not entirely Android-specific) are rare exceptions.

A very happy New Year to you. Here's to many new geeky, Googley adventures ahead! Give yourself the gift of endless Android Intelligence in 2025 with my free weekly newsletter: three new things to try in your inbox every Friday and six powerful new notification enhancements the second you sign up!
  • WWW.TECHNOLOGYREVIEW.COM
    The Vera C. Rubin Observatory is ready to transform our understanding of the cosmos
    High atop Chile's 2,700-meter Cerro Pachón, the air is clear and dry, leaving few clouds to block the beautiful view of the stars. It's here that the Vera C. Rubin Observatory will soon use a car-size 3,200-megapixel digital camera (the largest ever built) to produce a new map of the entire night sky every three days.

Generating 20 terabytes of data per night, Rubin will capture fine details about the solar system, the Milky Way, and the large-scale structure of the cosmos, helping researchers to understand their history and current evolution. It will capture rapidly changing events, including stellar explosions called supernovas, the evisceration of stars by black holes, and the whiz of asteroids overhead. Findings from the observatory will help tease apart fundamental mysteries like the nature of dark matter and dark energy, two phenomena that have not been directly observed but affect how objects in the universe are bound together and pushed apart.

Rubin is the latest and most advanced entrant in the illustrious lineage of all-sky surveyors: instruments that capture, or survey, the entire sky, over and over again. Its first scientific images are expected later this year. In a single exposure, Rubin will capture 100,000 galaxies, the majority invisible to other instruments. A quarter-century in the making, the observatory is poised to expand our understanding of just about every corner of the universe. "I can't think of an astronomer who is not excited about [Rubin]," says Christian Aganze, a galactic archeologist at Stanford University in California.

The observatory was first proposed in 2001.
Then called the Large-Aperture Synoptic Survey Telescope (LSST), it grew out of an earlier concept for an instrument that would study dark matter, the enigmatic substance making up 85% of the matter in the universe. LSST was later reenvisioned to focus on a broader set of scientific questions, cataloguing the night sky over the course of a decade. Five years ago, it was renamed in honor of the late American astronomer Vera Rubin, who uncovered some of the best evidence in favor of dark matter's existence in the 1970s and '80s.

During operations, Rubin will point its sharp eyes at the heavens and take a 30-second exposure of an area larger than 40 full moons. It will then swivel to a new patch and snap another photo, rounding back to the same swath of sky after about three nights. In this way, it can provide a constantly updated view of the universe, "essentially creating this huge video of the southern sky for 10 years," explains Anaïs Möller, an astrophysicist at the Swinburne University of Technology in Melbourne, Australia.

[Image: A view of the back of the Rubin Observatory's massive LSST camera, which boasts six filters designed to capture light from different parts of the electromagnetic spectrum. Credit: Spencer Lowell]
[Diagram: 1) Secondary mirror (M2); 2) Lenses; 3) Primary mirror (M1); 4) Tertiary mirror (M3). Credit: Greg Stewart/SLAC National Accelerator Laboratory/NSF/DOE/Rubin Observatory/AURA]

To accomplish its work, Rubin relies on an innovative three-mirror design unlike that of any other telescope. Its primary mirror is actually made up of two separate surfaces with different curvatures. The outer section, 8.4 meters wide, captures light from the universe and reflects it onto a 3.4-meter-wide secondary mirror located above it. This bounces the light back onto the inner part of the primary, which stretches five meters across and is considered a tertiary mirror, before the light is finally reflected into a digital camera.
The compact configuration allows the enormous instrument to be powerful but nimble as it shifts around to take roughly 1,000 photos per night. "It has five seconds to go to the next position and be ready," says Sandrine Thomas, the deputy director for the observatory's construction and project scientist for the telescope. "Meaning that it doesn't move. It doesn't vibrate. It's just rock solid, ready to take the next image."

[Image: Technicians reinstall a cover on the secondary telescope mirror, to protect it before installation.]
[Image: The observatory's three mirrors and the housing of the LSST camera are mounted on a structure called the Telescope Mount Assembly, carefully engineered for stability and precision so the observatory can track celestial objects and carry out its large-scale survey of the sky.]
[Image: The primary and tertiary telescope mirrors are positioned below a chamber at the Rubin Observatory that is used to apply reflective coatings.]
[Image: A view of the Telescope Mount Assembly from above, through the observatory's protective dome shutter.]

Rubin's 3,000-kilogram camera is the most sensitive ever created for an astronomical project. By stacking together images of a piece of sky taken over multiple nights, the telescope will be able to spot fainter and fainter objects, peering deeper into the cosmos the longer it operates.

Each exposure creates a flood of data, which has to be piped via fiber-optic cables to processing centers around the world. These use machine learning to filter the information and generate alerts for interested groups, says Möller, who helps run what are known as "community brokers," groups that design software to ingest the nightly terabytes of data and search for interesting phenomena. A small change in the sky (of which Rubin is expected to see around 10 million per night) could point to a supernova explosion, a pair of merging stars, or a massive object passing in front of another.
Different teams will want to know which is which so they can aim other telescopes at particular regions for follow-up studies.

The focal plane of the LSST camera has a surface area large enough to capture a portion of the sky about the size of 40 full moons. Its resolution is so high that you could spot a golf ball from 24 km (15 miles) away.

[Image: Matter in the universe can warp and magnify the light from more distant objects. The Rubin Observatory will use this phenomenon, called gravitational lensing, to study dark matter, an as-yet-unidentified substance that makes up most of the universe's matter. Credit: ESA, NASA, K. Sharon/Tel Aviv University and E. Ofek/Caltech]

With its capacity to detect faint objects, Rubin is expected to increase the number of known asteroids and comets by a factor of 10 to 100. Many of them will be objects more than 140 meters in diameter with orbits passing near Earth's, meaning they could threaten our world. And it will catalogue 40,000 new small icy bodies in the Kuiper Belt, a largely unexplored region beyond Neptune where many comets are born, helping scientists better understand the structure and history of our solar system.

Beyond our solar system, Rubin will see telltale flickers that signal exoplanets passing in front of their parent stars, causing them to briefly dim. It should also find thousands of new brown dwarfs, faint objects between planets and stars in size, whose positions in the Milky Way can provide insight into how the environments in which stars are born affect the size and type of objects that can form there.
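The golf-ball resolution figure quoted above is easy to sanity-check with small-angle arithmetic. A quick sketch (the 4.27 cm regulation ball diameter is my assumption; only the 24 km distance comes from the article):

```python
import math

ARCSEC_PER_RAD = 180 / math.pi * 3600  # ~206,265 arcseconds per radian

ball_diameter_m = 0.0427  # regulation golf ball diameter (assumed)
distance_m = 24_000       # 24 km, as quoted in the article

# Small-angle approximation: angular size = physical size / distance
angle_arcsec = (ball_diameter_m / distance_m) * ARCSEC_PER_RAD
print(round(angle_arcsec, 2))  # 0.37
```

That works out to roughly 0.37 arcseconds, which sits above the LSST camera's nominal 0.2-arcsecond pixel scale, so the claim is at least in the right ballpark.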
It will discover never-before-seen dim dwarf galaxies orbiting our own and look closely at stellar streams, remnant trails of stars left behind when the Milky Way tore other, similar galaxies apart.

The facility will also look far outside the Milky Way, cataloguing around 20 billion previously unknown galaxies and mapping their placement in long filamentary structures known as the cosmic web. The gravitational pull of dark matter directly affects the overall shape of this web, and by examining its structure, cosmologists will glean evidence for different theories of what dark matter is. Rubin is expected to observe millions of supernovas and determine their distance from us, a way of measuring how fast the universe is expanding. Some researchers suspect that dark energy (which is causing the cosmos to expand at an accelerated rate) may have been stronger in the past. Data from more distant, and therefore older, supernovas could help bolster or disprove such ideas, and potentially narrow down the identity of dark energy too.

[Image: An overhead view of the observatory. Credit: Spencer Lowell]

In just about every way, Rubin will be a monumental project, which explains the near-universal eagerness of those in the field to see it finally begin operations. "We have never had such a big telescope imaging so wide and so deep," says Möller. "That's an incredible opportunity to really pinpoint things that are changing in the sky and understand their physics."

Adam Mann is a freelance space and physics journalist who lives in Oakland, California.
  • WWW.MARKTECHPOST.COM
    This AI Paper from NVIDIA and SUTD Singapore Introduces TANGOFLUX and CRPO: Efficient and High-Quality Text-to-Audio Generation with Flow Matching
    Text-to-audio generation has transformed how audio content is created, automating processes that traditionally required significant expertise and time. This technology enables the conversion of textual prompts into diverse and expressive audio, streamlining workflows in audio production and creative industries. Bridging textual input with realistic audio outputs has opened possibilities in applications like multimedia storytelling, music, and sound design.

One of the significant challenges in text-to-audio systems is ensuring that generated audio aligns faithfully with textual prompts. Current models often fail to fully capture intricate details, leading to inconsistencies. Some outputs omit essential elements or introduce unintended audio artifacts. The lack of standardized methods for optimizing these systems further exacerbates the problem. Unlike language models, text-to-audio systems do not benefit from robust alignment strategies, such as reinforcement learning with human feedback, leaving much room for improvement.

Previous approaches to text-to-audio generation relied heavily on diffusion-based models, such as AudioLDM and Stable Audio Open. While these models deliver decent quality, they come with limitations. Their reliance on extensive denoising steps makes them computationally expensive and time-intensive. Furthermore, many models are trained on proprietary datasets, which limits their accessibility and reproducibility. These constraints hinder their scalability and ability to handle diverse and complex prompts effectively.

To address these challenges, researchers from the Singapore University of Technology and Design (SUTD) and NVIDIA introduced TANGOFLUX, an advanced text-to-audio generation model. This model is designed for efficiency and high-quality output, achieving significant improvements over previous methods.
TANGOFLUX utilizes the CLAP-Ranked Preference Optimization (CRPO) framework to iteratively refine audio generation and ensure alignment with textual descriptions. Its compact architecture and innovative training strategies allow it to perform exceptionally well while requiring fewer parameters.

TANGOFLUX integrates advanced methodologies to achieve state-of-the-art results. It employs a hybrid architecture combining Diffusion Transformer (DiT) and Multimodal Diffusion Transformer (MMDiT) blocks, enabling it to handle variable-duration audio generation. Unlike traditional diffusion-based models, which depend on multiple denoising steps, TANGOFLUX uses a flow-matching framework to create a direct, rectified path from noise to output. This rectified-flow approach reduces the number of computational steps required for high-quality audio generation. During training, the system incorporates textual and duration conditioning to capture both the nuances of the input prompt and the desired length of the audio output. The CLAP model evaluates the alignment between audio and textual prompts by generating preference pairs and optimizing over them iteratively, a process inspired by alignment techniques used in language models.

In terms of performance, TANGOFLUX outshines its predecessors across multiple metrics. It generates 30 seconds of audio in just 3.7 seconds on a single A40 GPU, demonstrating exceptional efficiency. The model achieves a CLAP score of 0.48 and an FD score of 75.1, both indicative of high-quality, text-aligned audio output. Compared to Stable Audio Open, which achieves a CLAP score of 0.29, TANGOFLUX significantly improves alignment accuracy. In multi-event scenarios, where prompts include multiple distinct events, TANGOFLUX excels, showcasing its ability to capture intricate details and temporal relationships effectively.
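As a rough illustration of the flow-matching idea described above (a minimal sketch, not TANGOFLUX's actual code; the array shapes and the toy "model" are invented for the example), training regresses a velocity field along a straight line from noise to data, and sampling integrates that field with a handful of Euler steps:

```python
import numpy as np

rng = np.random.default_rng(0)

def interpolate(x0, x1, t):
    """Straight-line (rectified) path from noise x0 to data x1 at time t."""
    return (1.0 - t) * x0 + t * x1

def fm_training_pair(x1, rng):
    """Sample a (time, state, target-velocity) triple for flow matching."""
    x0 = rng.standard_normal(x1.shape)  # Gaussian-noise endpoint
    t = rng.uniform()                   # random time in [0, 1]
    xt = interpolate(x0, x1, t)
    v_target = x1 - x0                  # constant velocity along the straight path
    return t, xt, v_target

def euler_sample(velocity_fn, x0, n_steps=8):
    """Integrate dx/dt = v(x, t) from t=0 to t=1 with simple Euler steps."""
    x, dt = x0.copy(), 1.0 / n_steps
    for i in range(n_steps):
        x = x + dt * velocity_fn(x, i * dt)
    return x

# Toy check: if the "model" returns the true constant velocity toward a
# fixed target, a single Euler step lands exactly on that target -- the
# reason a rectified (straight) path needs so few sampling steps.
target = np.array([1.0, -2.0, 0.5])
noise = rng.standard_normal(3)
out = euler_sample(lambda x, t: target - noise, noise, n_steps=1)
print(np.allclose(out, target))  # True
```

The straightness of the learned path is what lets the real model keep quality while cutting sampling steps, which is the robustness-to-fewer-steps behavior the article describes.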
The system's robustness is further highlighted by its ability to maintain performance even with reduced sampling steps, a feature that enhances its practicality in real-time applications.

Human evaluations corroborate these results, with TANGOFLUX scoring highest in subjective metrics such as overall quality and prompt relevance. Annotators consistently rated its outputs as clearer and better aligned than those of other models like AudioLDM and Tango 2. The researchers also emphasized the importance of the CRPO framework, which enabled the creation of a preference dataset that outperformed alternatives such as BATON and Audio-Alpaca. By generating new synthetic data during each training iteration, the model avoided the performance degradation typically associated with offline datasets.

The research successfully addresses critical limitations in text-to-audio systems by introducing TANGOFLUX, which combines efficiency with superior performance. Its innovative use of rectified flow and preference optimization sets a benchmark for future advancements in the field. This development enhances the quality and alignment of generated audio while demonstrating scalability, making it a practical candidate for widespread adoption. The work of SUTD and NVIDIA represents a significant leap forward in text-to-audio technology, pushing the boundaries of what is achievable in this rapidly evolving domain.

Check out the Paper, Code Repo, and Pre-Trained Model.