• The Copperleaf Home: A Modern Masterpiece of Wood, Stone, and Light
    www.home-designing.com
    The Copperleaf Home, our point of focus today, is a project by Taras Kaminskiy, Aleksandra Shokarova, and Alina Zherepa. Expect modern design, but warmer, softer, and more inviting. Here, wood, stone, and light do all the talking, creating a space that feels bold yet comforting. Every corner has a story, from the glowing staircase to the textured walls and statement furniture. It's a home that proves modern can still feel human.

    The first thing that catches your eye in the living room is the fireplace: the layered marble framed in warm metallic tones makes the whole room glow, almost like a piece of art in itself. Around it, the palette plays with contrast: deep greens, warm rusts, creamy neutrals, and sharp black accents. The seating arrangement keeps things comfortable but never boring. A plush sofa with earthy-toned pillows is paired with sculptural chairs, giving the room both elegance and personality. The rug ties it all together with its free-flowing lines, almost like a hand-drawn sketch grounding the space. And then, of course, the shelving: a geometric showpiece with its mix of textures and finishes.

    This space blends calm minimalism with just the right dose of personality. The creamy tufted sofa sets a soft, sculptural base, accented with playful round cushions that break the straight lines. Across from it, two patterned armchairs add a bold punch, keeping the palette from feeling too quiet. The glass-topped tables with solid stone bases feel sleek but grounded. What really elevates the room is the backdrop: tall shelving that feels architectural yet airy, paired with subtle lighting that washes the walls in warmth.

    The dining space is sleek, moody, and refined. The dark stone table with sculptural legs makes a bold centerpiece, while slim black chairs keep the look minimal and sharp. Backlit shelving adds warmth and depth, balancing out the dramatic palette. It's the kind of setting that feels just right for a quiet dinner.

    The kitchen leans into warmth and texture while keeping a modern edge. Rich wood cabinetry pairs beautifully with the stone backsplash and open shelving. The dark island anchors the room, while patterned bar stools bring in a playful touch.

    The staircase at Copperleaf Home turns a transition space into a design moment. Warm wood panels frame the steps, while soft underlighting creates a subtle glow with each rise. Natural light floods in through tall windows, keeping the space bright and airy. A bold piece of art adds personality, making the climb feel less like movement between floors and more like part of the home's story.

    The lounge upstairs feels moody, modern, and effortlessly cool. Textured stone walls and metal panels set a dramatic backdrop, while the sleek fireplace adds warmth. The furniture keeps the vibe relaxed: black leather chairs with a sculptural edge, paired with a deep rust sofa that grounds the room in earthy comfort.

    The bathroom is dramatic, sculptural, and anything but ordinary. The glossy deep-red tub instantly commands attention, sitting like a centerpiece on a marble base. Warm tones flow through the walls and translucent panels, giving the space a moody glow, while sleek black fixtures add sharp contrast. The freestanding white marble sink is pure elegance!
  • Scanning the Moon
    blog.polyhaven.com
    Around two years ago, we were contacted by Antoine Richard, a researcher from the University of Luxembourg working on a paper relating to ML vision-based navigation of lunar rovers. As there is no GPS on the moon, rovers require other methods of spatial positioning if they are to be automatically maneuvered. Antoine was interested in a machine-vision approach, which required a set of synthetic training data created from reasonably accurate lunar regolith (moon dirt) textures. He contacted us to propose a collaboration, partially sponsored by the University and Spaceport Rostock, where we would fly to their lab in Rostock, Germany, to scan the regolith simulant. You can find the latest on Antoine's work on GitHub: Omniverse Lunar Robotics Simulator. Once we were able to cover our own travel costs thanks to support from our Patreon supporters, we had free rein for a week to scan as much as we could in their facility.

    The Simulation
    Antoine himself accompanied us to assist in the layout of the material to simulate lunar conditions as closely as possible, along with Frank Koch and Lasse Hansen from Spaceport Rostock. This involved careful scattering of the dust with various techniques, using mostly gardening and kitchen tools, to attempt to recreate the nature of the moon's surface. Although the simulated regolith is made to copy the substance on the moon as closely as possible (by grinding specific types of rock into extremely fine powder), the moon has no atmosphere, and so the dust behaves rather differently there than it does here on Earth. Because there is nothing to slow the dust down once it becomes airborne (for example, after a meteor impact), the surface is riddled with tiny craters.

    The Danger
    Likewise, due to the lack of atmosphere, there is little to abrade the sharp edges of the dust particles away. This makes the fine powder extremely rough: it easily scratches any sensitive camera equipment it comes into contact with, even in the air. It also makes it carcinogenic, so breathing the dust is dangerous to your health. Dealing with this threat was one of the primary challenges of the trip, as we had to wait for the dust to settle (literally) after each arrangement, which took 1-2 hours, and then be careful not to disturb the surface while we scanned it. We were also required to wear N95 face masks while working near the material, and employed several air quality monitors which notified us when we were being too vigorous with the arrangement of the dust.

    The Process
    For normal terrestrial scans, it is typical to walk back and forth across the surface you are scanning to capture a dense grid of images. These images are then fed into photogrammetry software for mesh reconstruction. But as we could not physically walk across the regolith without disturbing dangerous clouds of dust, we constructed a single-axis automated gantry that could suspend our camera and flash and automatically capture a row of images.
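    Some back-of-the-envelope coverage math shows why such scans need so many photos. This is a sketch under our own assumptions (the ~0.5 m camera footprint and ~70% overlap between neighboring shots are invented for illustration; they are not Poly Haven's published figures):

        # Rough estimate of photos needed for one gantry scan.
        # Footprint and overlap values are assumptions, not Poly Haven's numbers.
        def shots_per_row(length_m, footprint_m, overlap=0.7):
            """Photos along one axis, given how much each shot must overlap the last."""
            step = footprint_m * (1.0 - overlap)  # ground distance advanced per shot
            return int(length_m / step) + 1

        gantry_axis = shots_per_row(4.0, 0.5)  # the automated axis
        manual_axis = shots_per_row(7.0, 0.5)  # the axis shifted by hand
        print(gantry_axis * manual_axis)       # -> 1269 shots for the full grid

    With those made-up values the estimate lands near the photo counts reported below, which is the point: the dense overlap photogrammetry needs multiplies quickly across two axes.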
    From the sides, we could then manually move the rig along the other axis. Each 7×4 meter scan took approximately 2-3 hours to capture this way. On the side, we also captured smaller 40 cm macro scans of the regolith, intended to be used for smaller-scale renders or in combination with the larger scans to add detail.

    The Data
    Each scan consisted of approximately 1,500 photos, totaling 1 million megapixels (or a terapixel, if you like), and resulted in reconstructed geometry made of 1.7 billion polygons.

    [Image: a screenshot of one of the surface scans.]

    By the end of the week, we had successfully scanned 20 textures. As part of our promise to Antoine, and to the greater scientific community as a whole, we are also releasing all the raw data under the public domain (CC0) for any other researchers who wish to take advantage of it, or any photogrammetry artists who want to build their own geometry from the scans. The total dataset is ~800 GB, but you can download each scan individually:

    moon_01 + moon_02 (79 GB)
    moon_03 + moon_04 (87 GB)
    moon_dusted_01 (34 GB)
    moon_dusted_02 (48 GB)
    moon_dusted_03 (41 GB)
    moon_dusted_04 (38 GB)
    moon_dusted_05 (34 GB)
    moon_flat_macro_01 (35 GB)
    moon_flat_macro_02 (8.1 GB)
    moon_footprints_01 + moon_footprints_02 (49 GB)
    moon_macro_01 (38 GB)
    moon_meteor_01 + moon_meteor_02 (91 GB)
    moon_rock_01 (5.1 GB)
    moon_rock_02 (4.4 GB)
    moon_rock_03 (4.9 GB)
    moon_rock_04 (4.8 GB)
    moon_rock_05 (4.6 GB)
    moon_rock_06 (4.6 GB)
    moon_rock_07 (4.5 GB)
    moon_tracks_01 + moon_tracks_02 (40 GB)
    moon_tracks_03 + moon_tracks_04 (37 GB)

    Please do let us know if you do anything interesting with this data; we and the Spaceport Rostock team would love to hear about it. If you have any trouble downloading the files, we also have them on a Nextcloud server that we can share with you. Just get in touch and we'll give you access freely. Additionally, we scanned 7 regular, boring terrestrial rocks coated in the regolith to make them appear more like lunar rocks, in order to help visual reconstruction of a simulated lunar environment. For fun, we captured an HDRI in the middle of the regolith pit (a precarious adventure) and lit it using their monstrous halogen bulb, meant to simulate the high-contrast, low-angle lighting often found on the moon. With the light reflected off the white ceiling you don't quite get the same effect, but it was interesting to capture nonetheless.

    The Assets
    Here is the full collection of assets we created in this project. [Asset gallery]

    The Demo
    As with all of Poly Haven's other asset collections, we like to showcase what artists can do with our assets by creating a render. James took all of the scans that the team has put together over the last two years and built a beautiful simulation of a lunar environment in Blender. Two interesting challenges came with creating this demo.

    1. The moon has no atmosphere, meaning you can practically see forever. With nothing to obscure the far reaches of the surface, keeping a consistent level of detail across the terrain meant plenty of rocks would be needed. This is where the asset LODs come in handy; though not typically used in Blender, they helped a lot to make the scene more manageable (see the sketch below).
    2. The lighting is unlike anything else we've ever lit. With such intense contrast between what received light and what was hidden in shadow, the displacement details brought most of the surface information to life.

    It was great fun putting the scene together, especially all the little post-processing and camera details to match real footage from the lunar surface.
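    Blender has no built-in LOD system, so the swap logic is worth sketching. The following is a minimal, hypothetical distance-based selector, not James's actual scene setup; the thresholds are invented:

        import math

        # Hypothetical distances (meters) at which we step down one detail level.
        LOD_DISTANCES = [10.0, 50.0, 200.0]

        def lod_level(camera_xyz, object_xyz, thresholds=LOD_DISTANCES):
            """Return 0 for the full-detail mesh, len(thresholds) for the coarsest."""
            d = math.dist(camera_xyz, object_xyz)
            for level, limit in enumerate(thresholds):
                if d < limit:
                    return level
            return len(thresholds)

        print(lod_level((0, 0, 2), (30, 4, 0)))  # -> 1: a mid-detail rock

    A scene script would run something like this per rock and link in the matching LOD mesh, which is how a horizon full of rocks stays renderable.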
The scene file for this render will be made available soon.
  • The 10 best Amazon deals to shop this week
    www.cnn.com
  • Hypershell Pro X Series Review: An Exoskeleton You Can Actually Buy
    www.wired.com
    This wearable power-up gives your legs a boost up hills, and unlike the competition, you can actually buy it, but we're not totally sure you should.
  • Lina Khan Revamped Antitrust. Now She's Pushing the Democratic Party.
    www.nytimes.com
    The youngest chair in the history of the Federal Trade Commission is campaigning for Zohran Mamdani and defending her brand of populism.
  • This refurbished MacBook Air is cheaper than most tablets right now
    www.macworld.com
    Macworld

    If you're looking for a dependable, portable, and budget-conscious laptop for day-to-day tasks, look no further. This pre-loved MacBook Air hits the sweet spot, and it's only $189.97 (reg. $999) with free shipping for a little while longer.

    With a 1.8GHz Intel Core i5 processor, this machine is fast enough for tasks such as email, Google Docs, light streaming, and Zoom calls. It's not here to replace your gaming rig or your creative workstation, but for the average user who just needs something that works without the drama, this MacBook is a total win.

    At just under 3 pounds, it's a dream for travel. Toss it in your tote or backpack and you're good to go, whether you're hopping on a plane or just working from the library. And thanks to the 12-hour battery, you can leave the charger at home more often than not.

    The 128GB SSD keeps boot-up times snappy and gives you plenty of space for documents, photos, and those videos you swore you'd organize one day. Plus, the 13.3-inch widescreen display delivers a crisp viewing experience, even if you're just catching up on YouTube.

    Sure, it's not brand new: this is a Grade A or B refurbished device, meaning you might see a minor scuff here or there. But under the hood, it's fully functional, professionally inspected, and even comes with a 90-day aftermarket parts and labor warranty, just in case.

    At just $189.97 with free shipping, this deal is hard to beat. Grab your own 13.3-inch MacBook Air now before units sell out for good.

    Apple MacBook Air 13.3" (2017), 1.8GHz i5, 8GB RAM, 128GB SSD, Silver (Refurbished): See Deal

    StackSocial prices subject to change.
  • Microsoft's Patch Tuesday updates: Keeping up with the latest fixes
    www.computerworld.com
    Long before Taco Tuesday became part of the pop-culture vernacular, Tuesdays were synonymous with security, and for anyone in the tech world, they still are. Patch Tuesday, as you most likely know, refers to the day each month when Microsoft releases security updates and patches for its software products: everything from Windows to Office to SQL Server, from developer tools to browsers. The practice, which happens on the second Tuesday of the month, was initiated to streamline the patch distribution process and make it easier for users and IT system administrators to manage updates. Like tacos, Patch Tuesday is here to stay.

    In a blog post celebrating the 20th anniversary of Patch Tuesday, the Microsoft Security Response Center wrote: "The concept of Patch Tuesday was conceived and implemented in 2003. Before this unified approach, our security updates were sporadic, posing significant challenges for IT professionals and organizations in deploying critical patches in a timely manner."

    "Patch Tuesday will continue to be an important part of our strategy to keep users secure," Microsoft said, adding that it's now an important part of the cybersecurity industry. As a case in point, Adobe, among others, follows a similar patch cadence.

    Patch Tuesday coverage has also long been a staple of Computerworld's commitment to provide critical information to the IT industry. That's why we've gathered this collection of recent patches, a rolling list we'll keep updated each month. In case you missed a recent Patch Tuesday announcement, here are the latest six months of updates.

    For September, Patch Tuesday means fixes for Windows, Office, and SQL Server
    Microsoft released 86 patches this week, with updates for Office, Windows, and SQL Server. But there were no zero-days, so there's no "patch now" recommendation from the Readiness team this month. This is an incredible sign of success for the Microsoft update group. To reinforce this fact, we have patches for Microsoft's browser platform that have (perhaps for the first time) been rated at a much lower "moderate" security rating (as opposed to "critical" or "important"). More info on Microsoft Security updates for September 2025.

    For August, a complex Patch Tuesday with 111 updates
    Microsoft's August Patch Tuesday release offers a rather complex set of updates, with 111 fixes affecting Windows, Office, SQL Server, and Exchange Server, and several "Patch Now" recommendations. Publicly disclosed vulnerabilities in Windows Kerberos (CVE-2025-53779) and Microsoft SQL Server (CVE-2025-49719) require immediate attention. In addition, a CISA directive about a severe Microsoft Exchange vulnerability (CVE-2025-53786) also requires immediate attention for government systems. And Office is on the "Patch Now" update calendar due to a preview pane vulnerability (CVE-2025-53740). More info on Microsoft Security updates for August 2025.

    For July, a big, broad Patch Tuesday release
    With 133 patches in its Patch Tuesday update this month, Microsoft delivered a big, broad, and important release that requires a "Patch Now" plan for Windows, Microsoft Office, and SQL Server. A zero-day (CVE-2025-49719) in SQL Server requires urgent action, as do Git extensions to Microsoft Visual Studio. More info on Microsoft Security updates for July 2025.

    June Patch Tuesday: 68 fixes and two zero-day flaws
    Microsoft offered up a fairly light Patch Tuesday release for June, with 68 patches to Microsoft Windows and Microsoft Office. There were no updates for Exchange or SQL Server and just two minor patches for Microsoft Edge.
    But two zero-day vulnerabilities (CVE-2025-33073 and CVE-2025-33053) mean IT admins need to get busy with quick patching plans. More info on Microsoft Security updates for June 2025.

    May's Patch Tuesday serves up 78 updates, including 5 zero-day fixes
    The May Patch Tuesday release is very much a back-to-basics update, with just 78 patches for Microsoft Windows, Office, Visual Studio, and .NET. Notably, Microsoft has not released any patches for Microsoft Exchange Server or Microsoft SQL Server. However, five zero-day exploits for Windows mean this month's Windows updates should be patched now. More info on Microsoft Security updates for May 2025.

    For April, a large, dynamic Patch Tuesday release
    IT admins will be busy this month: the latest patch update from Microsoft includes 126 fixes, including one for an exploited Windows flaw and five critical patches for Office. The April Patch Tuesday release is large (126 patches), broad, and unfortunately very dynamic, with several re-releases, missing files, and broken patches affecting both the Windows and Office platforms. More info on Microsoft Security updates for April 2025.
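    Every release above follows the cadence described at the top: the second Tuesday of the month. For reference, here is a small sketch using only Python's standard library to compute that date for any month:

        import datetime

        def patch_tuesday(year: int, month: int) -> datetime.date:
            """Date of the second Tuesday of the given month."""
            first = datetime.date(year, month, 1)
            days_to_tuesday = (1 - first.weekday()) % 7  # Monday=0, Tuesday=1
            return first + datetime.timedelta(days=days_to_tuesday + 7)

        print(patch_tuesday(2025, 9))  # -> 2025-09-09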
  • GambleAware shares concern as gambling harm figures double
    readwrite.com
    GambleAware, the United Kingdom's leading charity on prevention and treatment of gambling harm, has shared concern that key statistics have doubled across five years. The statistics come from the charity's Annual Treatment and Support Survey, which has also brought notes of worry from the established gambling care organization.

    GambleAware concerned about rising gambling harm figures
    The organization works with the National Health Service and ministers, with no input from the gambling industry, save donations made to keep the charity running. It is governed by the UK Charity Commission, the overseer of all registered charities in Britain.

    "This morning, we published our annual Treatment and Support Survey by YouGov, which examines the use of and demand for advice, support, and treatment among people who gamble and those affected by someone else's gambling in Great Britain. To learn more: https://t.co/HIVlWY3LHL" GambleAware (@gambleawaregb), September 11, 2025

    The Annual Treatment and Support Survey for 2025 is part of a routine reporting process that examines key metrics in the gambling sector regarding its impact on consumers and their wider social circle. YouGov, the international polling and analytics group, has helped GambleAware publish this key survey for the past five years. Kate Gosschalk, YouGov Associate Director, said: "We are pleased to share the findings from the latest annual Treatment and Support Survey, a substantial online survey of around 18,000 people in addition to interviews with those who gamble."

    Key topics and takeaways from the GambleAware survey
    The key topics covered include an annual appraisal of gambling advertising, different forms of gambling, those who have recently started gambling, and the impact this recreational activity has on their finances, emotions, and well-being. The UK National Lottery is also scrutinized, as is the perception of prize draws and charity lotteries, in addition to online and traditional physical gambling stores. Treatment is also a key focus of the yearly survey, with GambleAware taking special interest in the usage of and demand for treatment and support advice among those who gamble or are impacted by a gambler close to them.

    The survey showed that 1 in 3 (30%) adults who gamble and are experiencing any risk of problems are seeking professional treatment, support, or advice, compared to 1 in 5 (17%) in 2020. "While it is encouraging that more people have sought help, this rise may also point to a growing public health crisis. We are increasingly alarmed by how gambling is being normalised and how frequently people, especially young people, are exposed to gambling across Great Britain," said Zoë Osmond OBE, CEO of GambleAware.

    As we reported, GambleAware has launched a new app called the GambleAware Support Tool to help gamblers, primarily young gamblers, find ways to cut down or stop.

    "We shared our new GambleAware Support Tool with the Lived Experience Council to learn how it could've helped them. Looking to reduce or quit gambling? Try it today: https://t.co/NsfxyR8rVK" GambleAware (@gambleawaregb), June 18, 2025

    Alexia Clifford, GambleAware's Chief Communications Officer, said, "Whether individuals want to reduce, manage or stay gamble-free, the GambleAware Support Tool is here every step of your journey."

    Personal impact shown by the report
    There has, says the survey, been a 2% jump in the figures for those who are in the blast radius of someone with a gambling issue.
    The percentage equates to 4.3 million UK residents, and the report showed that 2 million children could be in a home or personal situation with someone with a gambling issue. According to the survey, children are also exposed to more mixed media that promotes gambling, with respondents urging more restrictions on gambling advertising in formats popular with children: 91% of those canvassed supported a ban on gambling advertising on TV and in video games, and 90% agreed with a ban on social media. The golden arches of McDonald's were also mentioned by the report, with the popular McDonald's Monopoly being seen as an influence on those who could be at risk of harm from gambling. The figures suggest that more than a quarter (27%) of gamblers are estimated to be at risk of developing gambling problems from the influence of prize draws, and around 1 in 9 (11%) are experiencing a form of problem gambling.

    Osmond added that urgent preventative action is required to reverse the upward trend. The CEO said, "This must include tougher regulation of gambling advertising to stop gambling being portrayed as harmless fun. There should also be mandatory health warnings on all gambling ads, stricter controls on digital and social media marketing, and a full ban on gambling promotion in stadiums and sports venues to protect children and young people from harm."

    Featured image: Canva

    The post "GambleAware shares concern as gambling harm figures double" appeared first on ReadWrite.
  • How do AI models generate videos?
    www.technologyreview.com
    MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what's coming next. You can read more from the series here.

    It's been a big year for video generation. In the last nine months OpenAI made Sora public, Google DeepMind launched Veo 3, and the video startup Runway launched Gen-4. All can produce video clips that are (almost) impossible to distinguish from actual filmed footage or CGI animation. This year also saw Netflix debut an AI visual effect in its show The Eternaut, the first time video generation has been used to make mass-market TV.

    Sure, the clips you see in demo reels are cherry-picked to showcase a company's models at the top of their game. But with the technology in the hands of more users than ever before (Sora and Veo 3 are available in the ChatGPT and Gemini apps for paying subscribers), even the most casual filmmaker can now knock out something remarkable.

    The downside is that creators are competing with AI slop, and social media feeds are filling up with faked news footage. Video generation also uses up a huge amount of energy, many times more than text or image generation. With AI-generated videos everywhere, let's take a moment to talk about the tech that makes them work.

    How do you generate a video?
    Let's assume you're a casual user. There are now a range of high-end tools that allow pro video makers to insert video generation models into their workflows. But most people will use this technology in an app or via a website. You know the drill: "Hey, Gemini, make me a video of a unicorn eating spaghetti. Now make its horn take off like a rocket." What you get back will be hit or miss, and you'll typically need to ask the model to take another pass or 10 before you get more or less what you wanted.

    So what's going on under the hood? Why is it hit or miss, and why does it take so much energy? The latest wave of video generation models are what's known as latent diffusion transformers. Yes, that's quite a mouthful. Let's unpack each part in turn, starting with diffusion.

    What's a diffusion model?
    Imagine taking an image and adding a random spattering of pixels to it. Take that pixel-spattered image and spatter it again, and then again. Do that enough times and you will have turned the initial image into a random mess of pixels, like static on an old TV set.

    A diffusion model is a neural network trained to reverse that process, turning random static into images. During training, it gets shown millions of images in various stages of pixelation. It learns how those images change each time new pixels are thrown at them and, thus, how to undo those changes. The upshot is that when you ask a diffusion model to generate an image, it will start off with a random mess of pixels and step by step turn that mess into an image that is more or less similar to images in its training set.

    But you don't want any image: you want the image you specified, typically with a text prompt. And so the diffusion model is paired with a second model, such as a large language model (LLM) trained to match images with text descriptions, that guides each step of the cleanup process, pushing the diffusion model toward images that the large language model considers a good match to the prompt.
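    None of the production models are public, but the training idea described above fits in a few lines. Here is a toy PyTorch sketch (the tiny "model" is a stand-in for a real denoising network, and a production system would also feed in a text embedding for guidance):

        import torch

        T = 1000                                  # number of noising steps
        betas = torch.linspace(1e-4, 0.02, T)     # noise added at each step
        alpha_bar = torch.cumprod(1 - betas, 0)   # fraction of the image surviving to step t

        def noisy_sample(x0, t):
            """Forward process: spatter a clean image x0 with t steps' worth of noise."""
            eps = torch.randn_like(x0)
            return alpha_bar[t].sqrt() * x0 + (1 - alpha_bar[t]).sqrt() * eps, eps

        # Training step: the network sees the noisy image and must predict the noise.
        model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 64 * 64))
        x0 = torch.rand(8, 1, 64, 64)             # a toy batch of "clean images"
        t = int(torch.randint(0, T, ()))          # random stage of pixelation
        xt, eps = noisy_sample(x0, t)
        loss = torch.nn.functional.mse_loss(model(xt).view_as(eps), eps)
        loss.backward()                           # learn to undo the spattering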
    An aside: this LLM isn't pulling the links between text and images out of thin air. Most text-to-image and text-to-video models today are trained on large data sets that contain billions of pairings of text and images, or text and video, scraped from the internet (a practice many creators are very unhappy about). This means that what you get from such models is a distillation of the world as it's represented online, distorted by prejudice (and pornography).

    It's easiest to imagine diffusion models working with images. But the technique can be used with many kinds of data, including audio and video. To generate movie clips, a diffusion model must clean up sequences of images (the consecutive frames of a video) instead of just one image.

    What's a latent diffusion model?
    All this takes a huge amount of compute (read: energy). That's why most diffusion models used for video generation use a technique called latent diffusion. Instead of processing raw data (the millions of pixels in each video frame), the model works in what's known as a latent space, in which the video frames (and text prompt) are compressed into a mathematical code that captures just the essential features of the data and throws out the rest.

    A similar thing happens whenever you stream a video over the internet: a video is sent from a server to your screen in a compressed format to make it get to you faster, and when it arrives, your computer or TV will convert it back into a watchable video. And so the final step is to decompress what the latent diffusion process has come up with. Once the compressed frames of random static have been turned into the compressed frames of a video that the LLM guide considers a good match for the user's prompt, the compressed video gets converted into something you can watch.

    With latent diffusion, the diffusion process works more or less the way it would for an image. The difference is that the pixelated video frames are now mathematical encodings of those frames rather than the frames themselves. This makes latent diffusion far more efficient than a typical diffusion model. (Even so, video generation still uses more energy than image or text generation. There's just an eye-popping amount of computation involved.)

    What's a latent diffusion transformer?
    Still with me? There's one more piece to the puzzle, and that's how to make sure the diffusion process produces a sequence of frames that are consistent, maintaining objects and lighting and so on from one frame to the next. OpenAI did this with Sora by combining its diffusion model with another kind of model called a transformer. This has now become standard in generative video.

    Transformers are great at processing long sequences of data, like words. That has made them the special sauce inside large language models such as OpenAI's GPT-5 and Google DeepMind's Gemini, which can generate long sequences of words that make sense, maintaining consistency across many dozens of sentences. But videos are not made of words. Instead, videos get cut into chunks that can be treated as if they were. The approach that OpenAI came up with was to dice videos up across both space and time. "It's like if you were to have a stack of all the video frames and you cut little cubes from it," says Tim Brooks, a lead researcher on Sora.

    [Video: a selection of clips generated with Veo 3 and Midjourney, enhanced in postproduction with Topaz, an AI video-editing tool. Credit: VaigueMan]
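    Sora's actual tokenizer is not public, but "cutting little cubes" from the stack of frames is easy to show. A sketch of spacetime patchification on a raw video tensor (toy sizes; real models do this on the compressed latent, not on pixels):

        import torch

        T, H, W, C = 16, 64, 64, 3            # frames, height, width, channels
        video = torch.rand(T, H, W, C)

        t, p = 4, 16                          # each cube: 4 frames of 16x16 pixels
        cubes = (
            video.reshape(T // t, t, H // p, p, W // p, p, C)
                 .permute(0, 2, 4, 1, 3, 5, 6)   # gather each cube's dims together
                 .reshape(-1, t * p * p * C)     # one flat "token" per cube
        )
        print(cubes.shape)                    # -> torch.Size([64, 3072])

    Each row is one cube, and the transformer processes the sequence of cubes much the way a language model processes a sequence of words.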
    Using transformers alongside diffusion models brings several advantages. Because they are designed to process sequences of data, transformers also help the diffusion model maintain consistency across frames as it generates them. This makes it possible to produce videos in which objects don't pop in and out of existence, for example.

    And because the videos are diced up, their size and orientation do not matter. This means that the latest wave of video generation models can be trained on a wide range of example videos, from short vertical clips shot with a phone to wide-screen cinematic films. The greater variety of training data has made video generation far better than it was just two years ago. It also means that video generation models can now be asked to produce videos in a variety of formats.

    What about the audio?
    A big advance with Veo 3 is that it generates video with audio, from lip-synched dialogue to sound effects to background noise. That's a first for video generation models. As Google DeepMind CEO Demis Hassabis put it at this year's Google I/O: "We're emerging from the silent era of video generation."

    The challenge was to find a way to line up video and audio data so that the diffusion process would work on both at the same time. Google DeepMind's breakthrough was a new way to compress audio and video into a single piece of data inside the diffusion model. When Veo 3 generates a video, its diffusion model produces audio and video together in a lockstep process, ensuring that the sound and images are synched.

    You said that diffusion models can generate different kinds of data. Is this how LLMs work too?
    No, or at least not yet. Diffusion models are most often used to generate images, video, and audio. Large language models, which generate text (including computer code), are built using transformers. But the lines are blurring. We've seen how transformers are now being combined with diffusion models to generate videos. And this summer Google DeepMind revealed that it was building an experimental large language model that used a diffusion model instead of a transformer to generate text.

    Here's where things start to get confusing: though video generation (which uses diffusion models) consumes a lot of energy, diffusion models themselves are in fact more efficient than transformers. Thus, by using a diffusion model instead of a transformer to generate text, Google DeepMind's new LLM could be a lot more efficient than existing LLMs. Expect to see more from diffusion models in the near future!
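    Putting the pieces together, the loop the article walks through (start from static in latent space, denoise step by step, then decompress) looks roughly like this. Everything here is a placeholder: the "denoiser" and "decoder" are untrained stand-ins, there is no text guidance, and real samplers are more careful. A sketch only:

        import torch

        steps = 50
        betas = torch.linspace(1e-4, 0.02, steps)
        alphas = 1 - betas
        alpha_bar = torch.cumprod(alphas, 0)

        denoiser = torch.nn.Linear(256, 256)    # stand-in for the diffusion transformer
        decoder = torch.nn.Linear(256, 64 * 64) # stand-in "decompressor" back to pixels

        z = torch.randn(1, 256)                 # random static, already in latent space
        with torch.no_grad():
            for t in reversed(range(steps)):
                eps = denoiser(z)               # predicted noise (guidance would act here)
                z = (z - betas[t] / (1 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
                if t > 0:                       # keep a little randomness until the end
                    z = z + betas[t].sqrt() * torch.randn_like(z)
            frame = decoder(z).view(64, 64)     # decompress the finished latent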