WWW.YOUTUBE.COM
Is this the BEST Monitor for Programmers? #RD320U #BenQ #Programmingmonitor
-
WWW.YOUTUBE.COM
H20.5 Foundations | Welcome | Introduction
-
WWW.YOUTUBE.COM
H20.5 Foundations | Welcome 5 | Texture the Ground
-
WWW.TECHNOLOGYREVIEW.COM
These AI Minecraft characters did weirdly human stuff all on their own

Left to their own devices, an army of AI characters didn't just survive; they thrived. They developed in-game jobs, shared memes, voted on tax reforms, and even spread a religion. The experiment played out on the open-world gaming platform Minecraft, where up to 1,000 software agents at a time used large language models (LLMs) to interact with one another. Given just a nudge through text prompting, they developed a remarkable range of personality traits, preferences, and specialist roles, with no further inputs from their human creators.

The work, from AI startup Altera, is part of a broader field that wants to use simulated agents to model how human groups would react to new economic policies or other interventions. But for Altera's founder, Robert Yang, who quit his position as an assistant professor in computational neuroscience at MIT to start the company, this demo is just the beginning. He sees it as an early step towards large-scale "AI civilizations" that can coexist and work alongside us in digital spaces. "The true power of AI will be unlocked when we have actually truly autonomous agents that can collaborate at scale," says Yang.

Yang was inspired by Stanford University researcher Joon Sung Park, who in 2023 found that surprisingly humanlike behaviors arose when a group of 25 autonomous AI agents was let loose to interact in a basic digital world. "Once his paper was out, we started to work on it the next week," says Yang. "I quit MIT six months after that." Yang wanted to take the idea to its extreme. "We wanted to push the limit of what agents can do in groups autonomously."

Altera quickly raised more than $11 million in funding from investors including A16Z and former Google CEO Eric Schmidt's emerging-tech VC firm. Earlier this year Altera released its first demo: an AI-controlled character in Minecraft that plays alongside you.
Altera's new experiment, Project Sid, uses simulated AI agents equipped with "brains" made up of multiple modules. Some modules are powered by LLMs and designed to specialize in certain tasks, such as reacting to other agents, speaking, or planning the agent's next move.

The team started small, testing groups of around 50 agents in Minecraft to observe their interactions. Over 12 in-game days (four real-world hours) the agents began to exhibit some interesting emergent behavior. For example, some became very sociable and made many connections with other characters, while others appeared more introverted. The "likability" rating of each agent (measured by the agents themselves) changed over time as the interactions continued. The agents were able to track these social cues and react to them: in one case an AI chef tasked with distributing food to the hungry gave more to those he felt valued him most.

More humanlike behaviors emerged in a series of 30-agent simulations. Despite all the agents starting with the same personality and the same overall goal (to create an efficient village and protect the community against attacks from other in-game creatures), they spontaneously developed specialized roles within the community, without any prompting. They diversified into roles such as builder, defender, trader, and explorer. Once an agent had started to specialize, its in-game actions began to reflect its new role. For example, an artist spent more time picking flowers, farmers gathered seeds, and guards built more fences. "We were surprised to see that if you put [in] the right kind of brain, they can have really emergent behavior," says Yang. "That's what we expect humans to have, but don't expect machines to have."

Yang's team also tested whether agents could follow community-wide rules. They introduced a world with basic tax laws and allowed agents to vote for changes to the in-game taxation system.
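The modular agent "brain" described above (LLM-backed modules for reacting to other agents and planning the next move, plus self-reported likability tracking) might be sketched very loosely as follows. The module names and the stubbed `llm()` call are illustrative assumptions, not Altera's actual architecture:

```python
# Hypothetical sketch of a modular agent "brain": separate LLM-backed modules
# for reacting and planning, plus a social module tracking likability.
# All names here are assumptions for illustration, not Altera's code.
from dataclasses import dataclass, field

def llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns canned text."""
    return "gather seeds" if "plan" in prompt else "hello"

@dataclass
class Agent:
    name: str
    role: str = "villager"
    likability: dict = field(default_factory=dict)  # other agent -> score

    def react(self, speaker: str, message: str) -> str:
        # Social module: remember who interacts with us.
        self.likability[speaker] = self.likability.get(speaker, 0) + 1
        return llm(f"reply to {speaker}: {message}")

    def plan(self) -> str:
        # Planning module: pick the next in-game action for this role.
        return llm(f"plan the next action for a {self.role}")

alice = Agent("Alice", role="farmer")
reply = alice.react("Bob", "hi there")
action = alice.plan()
print(action, alice.likability)  # gather seeds {'Bob': 1}
```

In a real system each module would issue its own LLM calls with different prompts and share a common memory, which is what lets role specialization emerge from interactions rather than from explicit instructions.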
Agents prompted to be pro- or anti-tax were able to influence the behavior of other agents around them, enough that they would then vote to reduce or raise tax depending on whom they had interacted with.

The team scaled up, pushing the number of agents in each simulation to the maximum the Minecraft server could handle without glitching: up to 1,000 at once in some cases. In one of Altera's 500-agent simulations, they watched how the agents spontaneously came up with and then spread cultural memes (such as a fondness for pranking, or an interest in eco-related issues) among their fellow agents. The team also seeded a small group of agents to try to spread the (parody) religion Pastafarianism around the different towns and rural areas that made up the in-game world, and watched as these Pastafarian priests converted many of the agents they interacted with. The converts went on to spread Pastafarianism (the word of the Church of the Flying Spaghetti Monster) to nearby towns in the game world.

The way the agents acted might seem eerily lifelike, but really all they are doing is regurgitating patterns the LLMs have learned from being trained on human-created data on the internet. "The takeaway is that LLMs have a sophisticated enough model of human social dynamics [to] mirror these human behaviors," says Altera co-founder Andrew Ahn. In other words, the data makes them excellent mimics of human behavior, but they are in no way alive.

But Yang has grander plans. Altera plans to expand into Roblox next, but Yang hopes to eventually move beyond game worlds altogether. Ultimately, his goal is a world in which humans don't just play alongside AI characters, but also interact with them in their day-to-day lives. His dream is to create a vast number of "digital humans" who actually care for us and will work with us to help us solve problems, as well as keep us entertained. "We want to build agents that can really love humans (like dogs love humans, for example)," he says.
This viewpoint (that AI could love us) is pretty controversial in the field, with many experts arguing it's not possible to recreate emotions in machines using current techniques. AI veteran Julian Togelius, for example, who runs games-testing company Modl.ai, says he likes Altera's work, particularly because it lets us study human behavior in simulation. But could these simulated agents ever learn to care for us, love us, or become self-aware? Togelius doesn't think so. "There is no reason to believe a neural network running on a GPU somewhere experiences anything at all," he says.

But maybe AI doesn't have to love us for real to be useful. "If the question is whether one of these simulated beings could appear to care, and do it so expertly that it would have the same value to someone as being cared for by a human, that is perhaps not impossible," Togelius adds. "You could create a good-enough simulation of care to be useful. The question is whether the person being cared for would care that the carer has no experiences." In other words, so long as our AI characters appear to care for us in a convincing way, that might be all we really care about.
-
WWW.TECHNOLOGYREVIEW.COM
The way we measure progress in AI is terrible

Every time a new AI model is released, it's typically touted as acing its performance against a series of benchmarks. OpenAI's GPT-4o, for example, was launched in May with a compilation of results that showed its performance topping every other AI company's latest model in several tests. The problem is that these benchmarks are poorly designed, the results hard to replicate, and the metrics they use are frequently arbitrary, according to new research. That matters because AI models' scores against these benchmarks will determine the level of scrutiny and regulation they receive. "It seems to be like the Wild West because we don't really have good evaluation standards," says Anka Reuel, an author of the paper, who is a PhD student in computer science at Stanford University and a member of its Center for AI Safety.

A benchmark is essentially a test that an AI takes. It can be in a multiple-choice format like the most popular one, the Massive Multitask Language Understanding benchmark, known as MMLU, or it could be an evaluation of an AI's ability to do a specific task, or of the quality of its text responses to a set series of questions. AI companies frequently cite benchmarks as testament to a new model's success. "The developers of these models tend to optimize for the specific benchmarks," says Anna Ivanova, professor of psychology at the Georgia Institute of Technology and head of its Language, Intelligence, and Thought (LIT) lab, who was not involved in the Stanford research.

These benchmarks already form part of some governments' plans for regulating AI. For example, the EU AI Act, which goes into force in August 2025, references benchmarks as a tool to determine whether or not a model demonstrates "systemic risk"; if it does, it will be subject to higher levels of scrutiny and regulation.
The UK AI Safety Institute references benchmarks in Inspect, its framework for evaluating the safety of large language models. But right now, they might not be good enough to use that way. "There's this potential false sense of safety we're creating with benchmarks if they aren't well designed, especially for high-stakes use cases," says Reuel. "It may look as if the model is safe, but it is not."

Given the increasing importance of benchmarks, Reuel and her colleagues wanted to look at the most popular examples to figure out what makes a good one, and whether the ones we use are robust enough. The researchers first set out to verify the benchmark results that developers put out, but often they couldn't reproduce them. To test a benchmark, you typically need some instructions or code to run it on a model. Many benchmark creators didn't make the code to run their benchmark publicly available; in other cases, the code was outdated. Benchmark creators often don't make the questions and answers in their data set publicly available either. If they did, companies could just train their model on the benchmark; it would be like letting a student see the questions and answers on a test before taking it. But withholding them makes the benchmarks hard to evaluate.

Another issue is that benchmarks are frequently "saturated," which means all the problems have pretty much been solved. For example, let's say there's a test with simple math problems on it. The first generation of an AI model gets 20% on the test, failing. The second generation gets 90%, and the third generation gets 93%. An outsider may look at these results and determine that AI progress has slowed down, but another interpretation could just be that the benchmark got solved and is no longer that great a measure of progress. It fails to capture the difference in ability between the second and third generations of a model. One of the goals of the research was to define a list of criteria that make a good benchmark.
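A multiple-choice benchmark of the kind discussed above can be scored in a few lines: each item has a question, lettered options, and a gold answer, and the reported number is simply the fraction answered correctly. Saturation shows up when successive models all push this fraction near 1.0. A rough sketch, using toy items and a stub model (both assumptions, not real benchmark data):

```python
# Minimal sketch of multiple-choice benchmark scoring (MMLU-style).
# The two items and the stub model are illustrative, not real data.
items = [
    {"q": "2 + 2 = ?", "options": {"A": "3", "B": "4", "C": "5"}, "answer": "B"},
    {"q": "Capital of France?", "options": {"A": "Paris", "B": "Rome"}, "answer": "A"},
]

def stub_model(question: str, options: dict) -> str:
    """Stand-in for an LLM: always guesses option 'A'."""
    return "A"

def accuracy(model, dataset) -> float:
    # Count items where the model's chosen letter matches the gold answer.
    hits = sum(model(it["q"], it["options"]) == it["answer"] for it in dataset)
    return hits / len(dataset)

print(accuracy(stub_model, items))  # 0.5
```

This also illustrates why a non-public answer key makes results hard to reproduce: without the `answer` field, outsiders cannot recompute the reported accuracy at all.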
"It's definitely an important problem to discuss the quality of the benchmarks, what we want from them, what we need from them," says Ivanova. "The issue is that there isn't one good standard to define benchmarks. This paper is an attempt to provide a set of evaluation criteria. That's very useful."

The paper was accompanied by the launch of a website, BetterBench, that ranks the most popular AI benchmarks. Rating factors include whether or not experts were consulted on the design, whether the tested capability is well defined, and other basics: for example, is there a feedback channel for the benchmark, or has it been peer-reviewed? The MMLU benchmark had the lowest ratings. "I disagree with these rankings. In fact, I'm an author of some of the papers ranked highly, and would say that the lower-ranked benchmarks are better than them," says Dan Hendrycks, director of CAIS, the Center for AI Safety, and one of the creators of the MMLU benchmark.

Some think the criteria may be missing the bigger picture. "The paper adds something valuable. Implementation criteria and documentation criteria: all of this is important. It makes the benchmarks better," says Marius Hobbhahn, CEO of Apollo Research, a research organization specializing in AI evaluations. "But for me, the most important question is, do you measure the right thing? You could check all of these boxes, but you could still have a terrible benchmark because it just doesn't measure the right thing." Essentially, even if a benchmark is perfectly designed, one that tests a model's ability to provide compelling analysis of Shakespeare sonnets may be useless if someone is really concerned about AI's hacking capabilities.

"You'll see a benchmark that's supposed to measure moral reasoning. But what that means isn't necessarily defined very well. Are people who are experts in that domain being incorporated in the process? Often that isn't the case," says Amelia Hardy, another author of the paper and an AI researcher at Stanford University.
There are organizations actively trying to improve the situation. For example, a new benchmark from Epoch AI, a research organization, was designed with input from 60 mathematicians and verified as challenging by two winners of the Fields Medal, the most prestigious award in mathematics. The participation of these experts fulfills one of the criteria in the BetterBench assessment. The current most advanced models are able to answer less than 2% of the questions on the benchmark, which means there's a significant way to go before it is saturated. "We really tried to represent the full breadth and depth of modern math research," says Tamay Besiroglu, associate director at Epoch AI. Despite the difficulty of the test, Besiroglu speculates it will take only around four years for AI models to saturate the benchmark, scoring higher than 80%.

And Hendrycks' organization, CAIS, is collaborating with Scale AI to create a new benchmark that he claims will test AI models against the frontier of human knowledge, dubbed Humanity's Last Exam (HLE). "HLE was developed by a global team of academics and subject-matter experts," says Hendrycks. "HLE contains unambiguous, non-searchable questions that require a PhD-level understanding to solve."

Although there is a lot of disagreement over what exactly should be measured, many researchers agree that more robust benchmarks are needed, especially since they set a direction for companies and are a critical tool for governments. "Benchmarks need to be really good," Hardy says. "We need to have an understanding of what 'really good' means, which we don't right now."
-
WWW.BDONLINE.CO.UK
Ole Scheeren reveals designs for another twin-tower scheme in China

[Image: Ole Scheeren's designs for the Urban Glen scheme in Hangzhou]

Ole Scheeren has unveiled plans for a twin-tower mixed-use scheme next to a Unesco World Heritage site in Hangzhou, China. The Beijing-based architect's designs for the Urban Glen scheme were drawn up for Hong Kong developer New World Development. Currently under construction, it consists of two main blocks, one containing a luxury Rosewood Hotel and the other housing 500,000 sq ft of office space.

The development is positioned between Hangzhou's Unesco-listed West Lake, a natural freshwater lake surrounded by mountains and temples, and the Qiantang River. It is a key element of the Wangjiang New Town project, an urban initiative aiming to establish an art and cultural destination within the historic east China city. Ole Scheeren said his designs for the project's greenery-covered terraces were inspired by Hangzhou's hilly landscapes. "Instead of creating a hermetic singular volume, Urban Glen opens a highly interactive space in the middle of the city block, a space that unites living and working with nature, culture, and leisure," the German-born architect said.

Scheeren's other recent projects in China include a twin-tower office scheme in Shenzhen for JD.com, one of the country's largest online retailers. The architect is also behind a four-tower headquarters in Shenzhen for Tencent, China's biggest company. Scheeren beat a host of star names last year to win the job, including his former practice OMA, Foster & Partners, Heatherwick Studio, Zaha Hadid Architects and Herzog & de Meuron.
-
WWW.BDONLINE.CO.UK
Rayner puts Stiff & Trevillion's plans for 43-storey City tower on ice (subscriber-only story)
-
WWW.ILM.COM
The Day the Aliens Invaded

Industrial Light & Magic brings creative commotion to the creatures and New York cityscape of A Quiet Place: Day One. By Clayton Sandell

[Image: Alex Wolff as Reuben in A Quiet Place: Day One (Credit: Paramount Pictures)]

Surviving the extraterrestrial terror of A Quiet Place: Day One (2024) depends on the critical ability to stay absolutely silent. Setting the third installment of the acclaimed film series in noisy New York City, however, brought an entirely new level of fear to the post-apocalyptic horror world first introduced to audiences in John Krasinski's A Quiet Place (2018), while simultaneously presenting a welcome challenge for the visual effects team at Industrial Light & Magic.

ILM visual effects supervisor Malcolm Humphreys says early discussions with director Michael Sarnoski focused on how to bring unique and unexpected aspects to the frightening alien invaders that use a preternatural sense of hearing to stalk their human prey. Among the thousands of New Yorkers running for their lives is Sam, a terminally ill cancer patient played by Academy Award winner Lupita Nyong'o. Trying to escape the city as the monsters close in, Sam and her cat Frodo eventually encounter Eric, an English law student portrayed by Joseph Quinn. Sam is determined to get a slice of her favorite pizza before she dies.

"He wanted to make a narrative about how two different people deal with this situation in a big city," Humphreys says of Sarnoski. "So this was an interesting take about trying to make something about two strangers that meet while all this chaos is happening."

[Concept art by Szabolcs Menyhei (Credit: ILM & Paramount)]

A visual effects veteran of films including Ant-Man and the Wasp: Quantumania (2023), The Batman (2022), and Star Wars: The Rise of Skywalker (2019), Humphreys and his team helped guide Sarnoski and cinematographer Pat Scola through the complex process of making a film that required a large number of visual effects sequences.
Sarnoski and Scola previously collaborated on the award-winning film Pig (2021), starring Nicolas Cage. "Part of the job at ILM is just understanding the story and where we want to go, and just trying to build a bespoke solution depending on the different types of shots we're doing," Humphreys tells ILM.com.

One challenge, Humphreys says, was determining how the creatures with hypersensitive hearing might move and behave in a city environment like New York. "In the previous films, they're either just stealthing on a single character or they're sort of doing a snatch-and-grab," explains Humphreys. "So Michael was very keen on expanding that a little bit more. For example, how do they act with each other?"

During a nighttime sequence set at a construction site, the creatures behave almost like a family gathering for dinner, ripping apart and devouring a fungus-encrusted pod for food. Behind the scenes, the ILM team came to refer to the monsters by the name "Happy." "They're not very happy creatures, so calling them Happy is kind of fun," Humphreys says. "There's a really big mom that's all caked in white, and then you've got the little baby happies. The little ones have slightly bigger heads. They're smoother."

[Image credit: Paramount]

When Eric accidentally makes a noise, a nearby creature is alerted and exposes its slimy, pulsating inner ear to listen more closely. It's a tense, relatively long shot that Humphreys says is also one of the film's most complex. "There's an immense amount of detail that the modelers, the texture artists, and the effects artists have done," he says. "There's the eardrum that's fluctuating. You're actually hearing Eric's heartbeat, and we're pulsing the eardrum and the heartbeat together. You want to get an emotional reaction from the audience, so we want to sit on this shot for quite a while," Humphreys continues.
"I really, really love this shot."

Humphreys credits animation supervisor Michael Lum with helping develop the right movement for the creatures as they do things audiences have never seen before, like scrambling up and over Manhattan buildings. "All of the creatures are hand-animated," Humphreys reveals. "There's no crowd system or anything like that. They're all handcrafted, which is amazing."

Building out New York City was another major aspect of ILM's work on A Quiet Place: Day One that may not be apparent to many audiences, and that's exactly the goal. The areas of New York that appear in the film, including the Lower East Side, Chinatown, Midtown, and Harlem, were realized as a massive partial backlot set built at Warner Bros. Studios Leavesden near London. Production designer Simon Bowles and his team built two intersecting streets that could be modified and dressed into new locations as Sam, Eric, and Frodo make their way through the city. Most of the backlot structures, however, were only built two stories tall, requiring ILM artists to digitally extend the height of buildings, lengthen streets, and fill in backgrounds.

[Image: Lupita Nyong'o as Samira in A Quiet Place: Day One (Credit: Paramount)]

"We did an immense amount of data capture," Humphreys explains, a process that required 14 days in New York so the team could scan and photograph more than a hundred real buildings in high resolution. "We go through a whole process of building out those facades so that they can be used on many, many shots. For certain bits, we've changed quite significantly what you see in the backlot set," Humphreys reports. "There's a huge amount of augmentation and replacement."

While Frodo the cat is entirely practical (played by two different feline stars, Schnitzel and Nico), a scene requiring the animal to weave through a frantic crowd running from the aliens required extensive digital artistry from ILM. "Michael was adamant that he wanted to use the real cats," Humphreys recalls.
"There was a little bit of, how are we going to do a shot like this? We can't have a whole lot of people trampling over a cat." The solution was to photograph just the cat's performance separately at first, then add people and additional elements later. "That shot is actually an amalgamation of hundreds of layers of different crowd people, and really timing and trying to build that shot up so that as an audience member, you get the sense of the chaos, but you also see Frodo enough for him to register," Humphreys adds.

The film's finale has Sam, Eric, and Frodo desperately trying to reach a boat on the East River filled with survivors making their escape from New York. The sequence is built from several different locations, including part of an airfield dressed as a deserted FDR Drive, a pier along the Thames river, a moored boat, and a water tank at Pinewood Studios.

[Image credit: Paramount]

"It was a lot of fun, but a lot of moving pieces," Humphreys laughs. "We're sort of shooting component pieces and hoping that they all go together."

Humphreys says his favorite visual effect is the very last scene in the film. As Sam walks down a Harlem street listening to music, the camera sweeps 360 degrees around her in a single shot lasting nearly 40 seconds. Originally shot on the backlot, the sequence required complex rotoscoping and compositing, Humphreys notes, with artists ultimately replacing as much as 70 percent of the original background with images created using the data ILM gathered in New York. "We actually captured three or four blocks of Lexington Avenue, so there's a huge amount of data capture for that one shot," Humphreys says. "I'm really proud of that one."

Humphreys joined ILM in 2016 and is based at the company's London studio. But he says the work on A Quiet Place: Day One was a truly global effort. "I got to work with a lovely team in Vancouver, in London, Mumbai, and San Francisco," he says.
"I think we're just good creative partners." The one thing you get out of ILM, Humphreys believes, is that it still operates very much like a smaller company "in terms of communication and collaboration, which is really refreshing."

[Concept art by Daniel McGarry (Credit: ILM & Paramount)]

Clayton Sandell is a television news correspondent, a Star Wars author, and a longtime fan of the creative people who keep Industrial Light & Magic and Skywalker Sound on the leading edge of visual effects and sound design.
-
WWW.CNET.COM
Best Black Friday Deals Live Right Now: 70-Plus Deals on Laptops, TVs, Home Goods and Much More

Written by Russell Holly. CNET's expert, award-winning staff selects the products it covers and rigorously researches and tests its top picks; if you buy through our links, we may get a commission.

Every company offers Black Friday deals nowadays, and they all say theirs is the best; this makes sorting through the noise more challenging every year. But if you know where to look, you can find amazing deals on TVs, laptops, gaming accessories and just about every pair of headphones in existence. If you're still having a tough time separating the good deals from the okay ones, don't worry: CNET's shopping experts work nonstop from the moment Black Friday deals start to the moment Cyber Monday deals end to make sure this page has all of the best offers we can find. Check back regularly: there's always a new Black Friday discount here.

Best Black Friday deals

- Anker Prime 67W USB-C charger: $36 at Amazon. At 40% off, this 67-watt charger with USB-C and USB-A ports is a steal. It can help boost your phone, laptop and more at home and when you're on the go.
- Apple iPad (10th gen): $250 at Amazon. Apple's latest entry-level iPad is our overall favorite tablet of 2024, and now you can grab it at an all-time low price. It has a 10.9-inch display, a USB-C connector and Wi-Fi 6 support. Be sure to clip the on-page coupon for the full discount.
- Samsung Galaxy Watch 7: $203 at Amazon. Get the best Android smartwatch experience at its lowest price yet. Even if last year's Galaxy Watch 6 is down to $140, the improved health sensor array, the smoother yet more efficient processor and the new gesture controls on the Galaxy Watch 7 can make all the difference in everyday use, especially for those with extra-small or irregularly shaped wrists. Samsung's Galaxy Health suite remains entirely free (unlike Fitbit on the Pixel Watch 3), and its integration with both Samsung Galaxy phones and non-Samsung Android phones is top tier. Samsung's customizable watch faces, like the new Ultra Info Board or the updated (and GIF-supporting) Photos face, let your watch feel as futuristic, retro or personal as you desire.
- Ultimate Ears Miniroll: $50 at Amazon. UE's smallest, newest Bluetooth speaker gets its first big discount. While the Wonderboom usually wins Black Friday, the new Miniroll has stolen our hearts with its portable build and easy-to-mount backstrap. It can party all day (or night) with its 12-hour battery life, and UE finally used USB-C to charge the Miniroll, so no more digging out a micro-USB cable for this micro speaker.
- Chefman indoor pizza oven: $200 at Amazon. Add this indoor pizza oven to your kitchen arsenal and make personal 12-inch pizzas in just minutes. It has five presets, including one for pan pizza, so you can customize your pie exactly how you like it. This model also includes a pizza peel.
- Amazon Echo Show 8 (3rd gen) bundle: $93 at Amazon. Amazon's Echo Show 8 is an impressive smart display with an 8-inch HD touchscreen, a 13MP camera and a built-in microphone to let you check the weather, set timers, update your calendar and more, hands-free. This bundle also includes a free Energetic smart bulb.
- Vitamix Explorian blender: $200 at Amazon. The Vitamix Explorian is the best blender in the game, whether for making smoothies or blending soups. With 1,500 watts of power and a 48-ounce canister, this blender is a bargain at more than $100 off.
- Amazon Fire TV Stick 4K Max: $33 (save $27). Give the gift of 4K streaming with the Fire TV Stick. It's the easiest way to add smart features to a non-smart TV, especially if you're on a budget.
- Aqara Smart Lock U100 smart door lock: $130 (save $100). This smart lock includes support for Apple's Home Key technology, Amazon's Alexa and more.
- Tineco Pure One S11 cordless vacuum cleaner: $180 (save $120). This lightweight Tineco was chosen as our pick for the overall best cordless vacuum of the year thanks to its strong suction, HEPA filtration and more. Clip the on-page coupon for the full discount.
- Blink Video Doorbell: $30 (save $30). One of the best video doorbells of 2024, this Blink model is ultra-affordable and can help you monitor things at home with its HD video, infrared night vision, two-way audio and motion-detection notifications.

Best Black Friday TV deals

Shop all the best Black Friday TV deals before they are gone. Now's the time to replace your old and busted TV and upgrade to something with better resolution and features that will take your binging sessions to the next level.

- Hisense 85-inch Google TV: $1,298 at Amazon. "Go big or go home" could be this smart TV's motto. It's 85 inches, making it almost too big for most spaces, but just right for those who love watching movies large and loud or like to catch all the details of any sport you can imagine. Its smarts come from Google TV, and it has a subwoofer built in to make the sound experience better in almost every case.
- TCL 55-inch QLED 4K Google TV: $350 at Amazon. This TCL 55-inch TV uses QLED technology to offer crisp 4K images. Its smarts come from Google TV, so it has the apps you know and love while also having casting built in. It even has a 120Hz mode, perfect for high-performance gaming.