• filmandmusic: React Engineer (Front-End)
    weworkremotely.com
    Role Description
    We're hiring a junior to mid-level React engineer to work on our web applications. These web applications are the way that thousands of creative people around the world find inspiring content for use in their projects and campaigns.

    As part of the Engineering team, your time will be spent collaborating with other front-end engineers, back-end engineers, product managers and designers to help create the best experience for our customers. The ideal candidate loves tinkering with cutting-edge technology and has at least 2 years' experience building React applications. You will be able to code out features in React, create quality pull requests and help our front-end team stay current with industry best practices.

    Here are some example projects that we have worked on recently:
    - Build a custom audio player that tracks progress, expands to show additional details and allows users to easily explore our artist catalog.
    - Enhance the purchase flow with additional payment options and a seamless checkout.
    - Help maintain a beautiful custom React component library, including reusable components with tailored functionality and consistent design attributes.

    The engineering team has a remote-first culture. As such, you will be comfortable working remotely, possess excellent verbal and written communication skills and be able to manage your own time.

    We're after individuals who are curious about the possibilities of technology, are eager to learn, and are diligent and kind. You should be able to work as a team member, take ownership of your work, and contribute to team discussions. Our teams work well because we place trust in them to succeed. We believe in healthy debate and that great ideas can come from anybody.
    You'll have plenty of opportunities to add your own input to our software.

    A great candidate will have:
    - At least 2 years' experience developing front-end applications with React.
    - A love of TypeScript and unit-tested React components.
    - Strong skills in HTML and CSS (including responsive design and cross-browser compatibility).
    - Excellent communication skills (written and verbal).
    - Attention to detail and to overall application design.
    - Enjoyment of taking a technical spec and building the functional pieces to make it work.
    - Experience with Git.
    - Experience with Next.js, Vite, CSS-in-JS, or React Native would be a bonus.
  • Three things you NEED to start doing as a dev
    www.youtube.com
  • This manga publisher is using Anthropic's AI to translate Japanese comics into English
    www.technologyreview.com
    A Japanese publishing startup is using Anthropic's flagship large language model Claude to help translate manga into English, allowing the company to churn out a new title for a Western audience in just a few days rather than the two to three months it would take a team of humans. Orange was founded by Shoko Ugaki, a manga superfan who (according to VP of product Rei Kuroda) has some 10,000 titles in his house. The company now wants more people outside Japan to have access to them. "I hope we can do a great job for our readers," says Kuroda.

    Orange's Japanese-to-English translation of Neko Oji: Salaryman reincarnated as a kitten! (Images courtesy Orange / Yajima)

    But not everyone is happy. The firm has angered a number of manga fans who see the use of AI to translate a celebrated and traditional art form as one more front in the ongoing battle between tech companies and artists. "However well-intentioned this company might be, I find the idea of using AI to translate manga distasteful and insulting," says Casey Brienza, a sociologist and author of the book Manga in America: Transnational Book Publishing and the Domestication of Japanese Comics.

    Manga is a form of Japanese comic that has been around for more than a century. Hit titles are often translated into other languages and find a large global readership, especially in the US. Some, like Battle Angel Alita or One Piece, are turned into anime (animated versions of the comics) or live-action shows and become blockbuster movies and top Netflix picks. The US manga market was worth around $880 million in 2023 but is expected to reach $3.71 billion by 2030, according to some estimates. "It's a huge growth market right now," says Kuroda.

    Orange wants a part of that international market. Only around 2% of titles published in Japan make it to the US, says Kuroda. As Orange sees it, the problem is that manga takes human translators too long to translate.
    By building AI tools to automate most of the tasks involved in translation, including extracting Japanese text from a comic's panels, translating it into English, generating a new font, pasting the English back into the comic, and checking for mistranslations and typos, it can publish a translated manga title in around one-tenth the time it takes human translators and illustrators working by hand, the company says. Humans still keep a close eye on the process, says Kuroda: "Honestly, AI makes mistakes. It sometimes misunderstands Japanese. It makes mistakes with artwork. We think humans plus AI is what's important."

    Superheroes, aliens, cats

    Manga is a complex art form. Stories are told via a mix of pictures and words, which can be descriptions or characters' voices or sound effects, sometimes in speech bubbles and sometimes scrawled across the page. Single sentences can be split across multiple panels. There are also diverse themes and narratives, says Kuroda: "There's the student romance, mangas about gangs and murders, superheroes, aliens, cats." Translations must capture the cultural nuance in each story. "This complexity makes localization work highly challenging," he says.

    Orange often starts with nothing more than the scanned image of a page. Its system first identifies which parts of the page show Japanese text, copies it, and erases the text from each panel. These snippets of text are then combined into whole sentences and passed to the translation module, which not only translates the text into English but keeps track of where on the page each individual snippet comes from. Because Japanese and English have a very different word order, the snippets need to be reordered, and the new English text must be placed on the page in different places from where the Japanese equivalent had come from, all without messing up the sequence of images.
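The snippet-tracking step described above (extract fragments, stitch them into sentences, translate, and re-place by panel) can be sketched in a few lines of Python. This is a toy illustration only, not Orange's actual code: the `Snippet` type, the function names, and the stand-in dictionary "model" are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Snippet:
    text: str   # Japanese fragment lifted from one region of the page
    panel: int  # panel index on the page, in reading order
    order: int  # fragment order within the panel

def group_into_sentences(snippets: list[Snippet]) -> dict[int, str]:
    """Stitch per-bubble fragments back into whole sentences,
    keyed by the panel they came from."""
    by_panel: dict[int, list[str]] = {}
    for s in sorted(snippets, key=lambda s: (s.panel, s.order)):
        by_panel.setdefault(s.panel, []).append(s.text)
    return {panel: "".join(parts) for panel, parts in by_panel.items()}

def translate_page(snippets: list[Snippet], translate) -> dict[int, str]:
    """Translate each reconstructed sentence while preserving the
    panel mapping, so the English text can be placed back on the page
    without disturbing the sequence of images."""
    return {panel: translate(text)
            for panel, text in group_into_sentences(snippets).items()}

# Toy "model": a lookup table standing in for the LLM call.
toy_model = {"猫だ": "It's a cat", "まさか…": "No way..."}

page = [
    Snippet("猫", panel=1, order=0),
    Snippet("だ", panel=1, order=1),
    Snippet("まさか…", panel=2, order=0),
]
english = translate_page(page, lambda t: toy_model.get(t, t))
# english == {1: "It's a cat", 2: "No way..."}
```

The real system also erases the source text from the artwork and re-typesets the English, but a panel-indexed mapping like this is the piece that keeps reordered translations from scrambling the image sequence.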
    "Generally, the images are the most important part of the story," says Frederik Schodt, an award-winning manga translator who published his first translation in 1977. "Any language cannot contradict the images, so you can't take many of the liberties that you might in translating a novel. You can't rearrange paragraphs or change things around much."

    Orange tried several large language models, including its own, developed in house, before picking Claude 3.5. "We're always evaluating new models," says Kuroda. "Right now Claude gives us the most natural tone." Claude also has an agent framework that lets several sub-models work together on an overall task. Orange uses this framework to juggle the multiple steps in the translation process.

    Orange distributes its translations via an app called Emaqi (a pun on emaki, the ancient Japanese illustrated scrolls that are considered a precursor to manga). It also wants to be a translator-for-hire for US publishers.

    But Orange has not been welcomed by all US fans. When it showed up at Anime NYC, a US anime convention, this summer, the Japanese-to-English translator Jan Mitsuko Cash tweeted: "A company like Orange has no place at the convention hosting the Manga Awards, which celebrates manga and manga professionals in the industry. If you agree, please encourage @animenyc to ban AI companies from exhibiting or hosting panels."

    Brienza takes the same view. "Work in the culture industries, including translation, which ultimately is about translating human intention, not mere words on a page, can be poorly paid and precarious," she says. "If this is the way the wind is blowing, I can only grieve for those who will go from making little money to none." Some have also called Orange out for cutting corners. "The manga uses stylized text to represent the inner thoughts that the [protagonist] can't quite voice," another fan tweeted.
    "But Orange didn't pay a redrawer or letterer to replicate it properly. They also just skip over some text entirely."

    Orange's Emaqi app is available only in the US and Canada for now. (Image courtesy Emaqi)

    Everyone at Orange understands that manga translation is a sensitive issue, says Kuroda: "We believe that human creativity is absolutely irreplaceable, which is why all AI-assisted work is rigorously reviewed, refined, and finalized by a team of people."

    Orange also claims that the authors it has translated are on board with its approach. "I'm genuinely happy with how the English version turned out," says Kenji Yajima, one of the authors Orange has worked with, referring to the company's translation of his title Neko Oji: Salaryman reincarnated as a kitten! (see images). "As a manga artist, seeing my work shared in other languages is always exciting. It's a chance to connect with readers I never imagined reaching before."

    Schodt sees the upside too. He notes that the US is flooded with poor-quality, unofficial fan-made translations. "The number of pirated translations is huge," he says. "It's like a parallel universe." He thinks using AI to streamline translation is inevitable. "It's the dream of many companies right now," he says. "But it will take a huge investment." He believes that really good translation will require large language models trained specifically on manga: "It's not something that one small company is going to be able to pull off."

    "Whether this will prove economically feasible right now is anyone's guess," says Schodt. "There is a lot of advertising hype going on, but the readers will have the final judgment."
  • What the departing White House chief tech advisor has to say on AI
    www.technologyreview.com
    President Biden's administration will end within two months, and likely to depart with him is Arati Prabhakar, the top mind for science and technology in his cabinet. She has served as director of the White House Office of Science and Technology Policy since 2022 and was the first to demonstrate ChatGPT to the president in the Oval Office. Prabhakar was instrumental in passing the president's executive order on AI in 2023, which sets guidelines for tech companies to make AI safer and more transparent (though it relies on voluntary participation).

    The incoming Trump administration has not presented a clear thesis of how it will handle AI, but plenty of people in it will want to see that executive order nullified. Trump said as much in July, endorsing the 2024 Republican Party Platform, which says the executive order hinders AI innovation and imposes "Radical Leftwing ideas" on the development of this technology. Venture capitalist Marc Andreessen has said he would support such a move. However, complicating that narrative will be Elon Musk, who for years has expressed fears about doomsday AI scenarios and has been supportive of some regulations aiming to promote AI safety.

    As she prepares for the end of the administration, I sat down with Prabhakar and asked her to reflect on President Biden's AI accomplishments, and how AI risks, immigration policies, the CHIPS Act and more could change under Trump. This conversation has been edited for length and clarity.

    Every time a new AI model comes out, there are concerns about how it could be misused. As you think back to what were hypothetical safety concerns just two years ago, which ones have come true?

    We identified a whole host of risks when large language models burst on the scene, and the one that has fully manifested in horrific ways is deepfakes and image-based sexual abuse.
    We've worked with our colleagues at the Gender Policy Council to urge industry to step up and take some immediate actions, which some of them are doing. There are a whole host of things that can be done: payment processors could actually make sure people are adhering to their Terms of Use. They don't want to be supporting [image-based sexual abuse] and they can actually take more steps to make sure that they're not. There's legislation pending, but that's still going to take some time.

    Have there been risks that didn't pan out to be as concerning as you predicted?

    At first there was a lot of concern expressed by the AI developers about biological weapons. When people did the serious benchmarking about how much riskier that was compared with someone just doing Google searches, it turns out there's a marginally worse risk, but it is marginal. If you haven't been thinking about how bad actors can do bad things, then the chatbots look incredibly alarming. But you really have to say: compared to what?

    For many people, there's a knee-jerk skepticism about the Department of Defense or police agencies going all in on AI. I'm curious what steps you think those agencies need to take to build trust.

    If consumers don't have confidence that the AI tools they're interacting with are respecting their privacy, are not embedding bias and discrimination, that they're not causing safety problems, then all the marvelous possibilities really aren't going to materialize. Nowhere is that more true than national security and law enforcement.

    I'll give you a great example. Facial recognition technology is an area where there have been horrific, inappropriate uses: take a grainy video from a convenience store and identify a black man who has never even been in that state, who's then arrested for a crime he didn't commit. (Editor's note: Prabhakar is referring to this story.) Wrongful arrests based on a really poor use of facial recognition technology, that has got to stop.
    In stark contrast to that, when I go through security at the airport now, it takes your picture and compares it to your ID to make sure that you are the person you say you are. That's a very narrow, specific application that's matching my image to my ID, and the sign tells me (and I know from our DHS colleagues that this is really the case) that they're going to delete the image. That's an efficient, responsible use of that kind of automated technology. Appropriate, respectful, responsible: that's where we've got to go.

    Were you surprised at the AI safety bill getting vetoed in California?

    I wasn't. I followed the debate, and I knew that there were strong views on both sides. I think what was expressed, that I think was accurate, by the opponents of that bill, is that it was simply impractical, because it was an expression of desire about how to assess safety, but we actually just don't know how to do those things. No one knows. It's not a secret, it's a mystery.

    To me, it really reminds us that while all we want is to know how safe, effective and trustworthy a model is, we actually have very limited capacity to answer those questions. Those are actually very deep research questions, and a great example of the kind of public R&D that now needs to be done at a much deeper level.

    Let's talk about talent. Much of the recent National Security Memorandum on AI was about how to help the right talent come from abroad to the US to work on AI. Do you think we're handling that in the right way?

    It's a hugely important issue. This is the ultimate American story, that people have come here throughout the centuries to build this country, and it's as true now in science and technology fields as it's ever been. We're living in a different world. I came here as a small child because my parents came here in the early 1960s from India, and in that period, there were very limited opportunities [to emigrate to] many other parts of the world.
    One of the good pieces of news is that there is much more opportunity now. The other piece of news is that we do have a very critical strategic competition with the People's Republic of China, and that makes it more complicated to figure out how to continue to have an open door for people who come seeking America's advantages, while making sure that we continue to protect critical assets like our intellectual property.

    Do you think the divisive debates around immigration, especially around the time of the election, may hurt the US ability to bring the right talent into the country?

    Because we've been stalled as a country on immigration for so long, what is caught up in that is our ability to deal with immigration for the STEM fields. It's collateral damage.

    Has the CHIPS Act been successful?

    I'm a semiconductor person, starting back with my graduate work. I was astonished and delighted when, after four decades, we actually decided to do something about the fact that semiconductor manufacturing capability got very dangerously concentrated in just one part of the world [Taiwan]. So it was critically important that, with the President's leadership, we finally took action. And the work that the Commerce Department has done to get those manufacturing incentives out, I think they've done a terrific job.

    One of the main beneficiaries so far of the CHIPS Act has been Intel. There are varying degrees of confidence in whether it is going to deliver on building a domestic chip supply chain in the way that the CHIPS Act intended. Is it risky to put a lot of eggs in one basket for one chip maker?

    I think the most important thing I see in terms of the industry with the CHIPS Act is that today we've got not just Intel but TSMC, Samsung, SK Hynix and Micron. These are the five companies whose products and processes are at the most advanced nodes in semiconductor technology. They are all now building in the US.
    There's no other part of the world that's going to have all five of those. An industry is bigger than a company. I think when you look at the aggregate, that's a signal to me that we're on a very different track.

    You are the President's chief advisor for science and technology. I want to ask about the cultural authority that science has, or doesn't have, today. RFK Jr. is the pick for health secretary, and in some ways he captures a lot of frustration that Americans have about our healthcare system. In other ways, he has many views that can only be described as anti-science. How do you reflect on the authority that science has now?

    I think it's important to recognize that we live in a time when trust in institutions has declined across the board, though trust in science remains relatively high compared with what's happened in other areas. But it's very much part of this broader phenomenon, and I think that the scientific community has some roles [to play] here.

    The fact of the matter is that despite America having the best biomedical research that the world has ever seen, we don't have robust health outcomes. Three dozen countries have longer life expectancies than America. That's not okay, and that disconnect between advancing science and changing people's lives is just not sustainable. The pact that science and technology and R&D makes with the American people is that if we make these public investments, it's going to improve people's lives, and when that's not happening, it does erode trust.

    Is it fair to say that that gap, between the expertise we have in the US and our poor health outcomes, explains some of the rise in conspiratorial thinking, in the disbelief of science?

    It leaves room for that. Then there's a quite problematic rejection of facts. It's troubling if you're a researcher, because you just know that what's being said is not true.
    The thing that really bothers me is [that the rejection of facts] changes people's lives, and it's extremely dangerous and harmful. Think about what would happen if we lost herd immunity for some of the diseases for which we right now have fairly high levels of vaccination. It was an ugly world before we tamed infectious disease with the vaccines that we have.
  • Bird-like LIJ Airport by MAD nears completion in Lishui, China
    worldarchitecture.org
    MAD has revealed photos of the bird-like LIJ Airport nearing completion in the foothill valleys of Lishui, China.

    Named Lishui Airport, the 12,000-square-metre airport is set to open by the end of 2024. By fusing architecture with the surrounding natural landscape, the airport's design reflects the city's identity as a "forest city."

    Image: Yongwei Liu

    Drawn in a bird-like shape, the airport is part of the 2,267-hectare project. The airport needed to level almost 100 meters of elevation in order to accommodate the steep terrain in the area. This resulted in a terraced layout that incorporates the terminal, parking, and office areas into descending platforms. This method guarantees a practical and effective design while honoring the land's natural contours. The terminal's design conveys harmony with its environment.

    Image: Hello Lishui

    "Lishui is a garden city, and her airport should also be in a garden," said Ma Yansong.

    "As a feeder airport, Lishui Airport shows another attitude as a public transportation facility in the city: not greedy for big, but pursuing convenience and humanity, and pursuing a dialogue with the natural environment," he added.

    Image: JK Wang

    The aluminum panels that make up the terminal's silver-white roof, which is held up by 14 umbrella-shaped columns, reflect lightness like feathers. Its flowing shape keeps the building open while serving as an anchor. A central skylight illuminates the hospitable concourse created by the roof's 30-meter cantilever.

    Image: MAD Architects

    The interior is human-scaled and features wood-toned finishes to welcome passengers.
    To balance intimacy and openness, the concourse height changes from 4.5 meters at its lowest point to 13 meters at its highest. Arrival and departure areas are combined into a compact, efficient space by the "one-and-a-half-story" layout, which also features a double-height lobby to ensure seamless passenger flow.

    Image: MAD Architects

    From the parking lot into the terminal, a beautifully designed walkway connects passengers to their surroundings and improves accessibility. Initially built as a domestic regional airport, Lishui can handle one million passengers a year and has three boarding bridges and five remote stands.

    Image: MAD Architects

    With space for 1.8 million passengers by 2030 and up to 5 million by 2050, the design accounts for future expansion. To ensure that the airport can expand with the area, plans have been made for an international terminal.

    Images and drawings: MAD Architects (ventilation diagram, circulation diagram, masterplan)

    MAD unveiled a preliminary design for the Lishui Airport in 2024. MAD also revealed a mixed-use tower that features a cracked-open canyon in the heart of Denver, Colorado, United States.
    In addition, the Fenix Museum by MAD Architects topped out in Rotterdam with its iconic "Tornado," a dynamic structure that rises from the ground floor and flows out of the rooftop onto a platform.

    Project facts
    Project name: Lishui Airport
    Location: Lishui, China
    Date: 2018 - 2024
    Site Area: 2,267 hectares
    Building Area: 12,100 sqm
    Building Height: 23.95 m
    Principal Partners in Charge: Ma Yansong, Dang Qun, Yosuke Hayano
    Associate Partners in Charge: Liu Huiying, Kin Li
    Design Team: Sun Shouquan, Zhang Xiaomei, Peng Kaiyu, Yin Jianfeng, Zhu Yuhao, Yang Xuebing, Lei Lei, Sun Mingze, Luo Yiyun, Alan Rodríguez Carrillo, Punnin Sukkasem
    Client: Lishui Airport Construction Headquarters
    Architectural Design: MAD Architects
    Executive Architects: CAAC NEW ERA AIRPORT DESIGN INSTITUTE COMPANY LIMITED
    Interior Design: MAD Architects, Shanghai Xian Dai Architectural Decoration & Landscape Design Research Institute Co., Ltd
    Façade Consultant: RFR Shanghai
    Landscape Consultant: Zscape Landscape Planning and Design, Huadong Engineering Corporation Limited
    Architecture and Landscape Lighting Consultant: Ning Field Lighting Design Corp., Ltd.
    Interior Lighting Consultant: Shanghai Xian Dai Architectural Decoration & Landscape Design Research Institute Co., Ltd
    Construction: Beijing Construction Engineering Group
    Video: JK Wang
    All images: JK Wang, Liu Yongwei, Hello Lishui, MAD Architects. Top image: JK Wang. All drawings: MAD Architects.

    > via MAD Architects
  • Fosters sees income break through £400m figure
    www.bdonline.co.uk
    Source: Foster & Partners. Foster & Partners' proposals for the Ellison Institute of Technology in Oxford.

    The country's biggest architect saw turnover break the £400m barrier for the first time last year and staff numbers edge up to the 2,000 mark.

    Latest accounts filed by Foster & Partners at Companies House show that income was up 29% to £422m, with staff numbers up 11% to 1,900. The firm, whose UK schemes include the Ellison Institute of Technology's research and development facility in Oxford and a masterplan job as part of the redevelopment of Manchester United's Old Trafford stadium, said pre-tax profit doubled to £2.3m in the year to April.

    Fosters' earnings before amortisation, depreciation and a partnership payment of almost £20m to its 100-plus partners were £44m, down from £55m last time.

    > Also read: Fosters serves up completed Spanish winery project

    The £19.8m partnership payment is shared between all partners and is in addition to an annual bonus, the firm said. The architect's biggest business is the Middle East, where it posted an income of £177m, up by half on last time and around 42% of its entire group workload. Income from its UK business stayed flat at £38m.

    > Also read: Fosters' £750m London datacentre gets green light

    Fosters, whose largest shareholder is the Canadian private equity firm Hennick & Company, which over the summer bought a stake in Gardiner & Theobald, said it would pay a £24m dividend, up from £14.6m last time. The accounts also reveal it spent £900,000 on restructuring costs.

    Fosters' winery building in Spain for Rioja producer Bodegas Faustino, completed this year.
  • VFX Supervisor David Lee discusses our work on the Venom trilogy
    www.dneg.com
    From 2D Supervisor on 2018's Venom to VFX Supervisor on The Last Dance, learn more about David Lee's creative journey working on Sony's hit anti-hero trilogy!

    Just over a month ago, the final instalment of the Venom trilogy hit theatres and was, appropriately, the top movie worldwide for three weeks following its release! As lead VFX partner on the previous two films in the trilogy, DNEG was honoured to return once again for the final instalment. Our work was led by VFX Supervisor David Lee, who previously worked as 2D Supervisor on the first film! From creative challenges to new tools, we sat down with David to hear more about his journey with the franchise. Read it here:

    Hi Dave, and thank you for joining us! How did it feel having your role evolve as it did throughout the films?

    There is always a sense of trepidation when moving roles, a healthy respect of the unknown which I think is something to be embraced. There's quite a leap between watching people make decisions in roles above you, and then being the one ultimately responsible for similar decisions when you get that opportunity. Generally, however, there is a real excitement to keep facing new challenges and learn new skills. And helpfully, I've always been lucky to have the support of a great group of people around me at every level throughout my career!

    Can you share your journey in the visual effects industry, and how you transitioned from a 2D Supervisor to a VFX Supervisor?

    I began in visual effects back in 2005 in New Zealand. Having graduated from film school, I started filming my own short films before learning After Effects online and moving into animated shorts. This ultimately led to me getting a place at Weta as a Compositor, helped by good timing and good fortune. From there, I freelanced around New Zealand for a few years, then moved over to the UK and worked on commercials before the urge to move into longer forms led me to Cinesite, where John Carter was in full swing.
    After a few years there, I moved over to DNEG in a Lead Comp role on Man of Steel, and eventually into 2D Supervisor on the original Venom, then DFX Supervisor on Tenet, and VFX Supervisor on Meg 2: The Trench and Venom: The Last Dance.

    What lessons did you learn from working on the first Venom that you applied to the final instalment?

    I think generally it was a more iterative process than on the first film. Via the wonders of hindsight, we could make adjustments to the setups that had challenged us last time! For example, during look development for Venom, we created a variant of him for shots where he is small in screen space, which we gave a different specular response, as it was previously harder to achieve the classic Venom look in these cases. And really, it's many smaller-scale adjustments that holistically make a difference. We would know which attributes might be more prone to client notes, so we could ensure they were particularly robust while we had the time early on to do so.

    Have you seen any significant changes in visual effects technology or processes between the first and final films of the trilogy?

    The first Venom was released in 2018, so we're looking at a time span of about six years. In that time, the two things that jump out are machine learning and automation. Automation certainly is not a new process, but development of this within DNEG has come along dramatically in its scope and ease. Classically, this has always been done in a more manual sense. For example, in comp, a lead may create a template that can then be copied and passed around to artists to give a solid base for shot-specific work. In the last few years, however, this type of workflow has been integrated much more deeply into the pipeline, with more advanced tools that allow individual artists to work on many more shots with a much higher level of consistency than has been achieved in the past. These artists can also maintain ownership of these shots for a much longer duration!
    It's a much more efficient and consistent workflow. It is also a boon for late-in-the-day sequence changes, as in an ideal situation we only have to make that change once, and it will flow through a multitude of shots without derailing the schedule.

    Machine learning is coming along at a tremendous pace. This has moved from the realms of reading about it to actively starting to use it. We are still at the very beginning of this technology, but even the use of the CopyCat tool in Nuke to bash out rough holdout mattes is something that didn't exist during the original film. We do have some way to go with these new tools to get the level of quality and flexibility we need for final feature use, but the time between iterations is drastically reducing and there will only be more of this in the coming years!

    What are some of the creative or technical challenges around working on a franchise like Venom, and how did you and your team overcome them?

    I think the main challenge of franchise films like Venom is making sure you are ready for anything. There are a number of different creative inputs on shows like these, so preparation is key to ensure that any curve balls that come down the line can be incorporated into the show with as little stress as possible for everyone involved. So technology and tooling play a large part in this, and we placed additional emphasis on every department to ensure that the fundamental setups were solid and flexibility was built into every approach. Now obviously this isn't always possible, but it is a good rule to live by. Essentially, expect the unexpected.

    What advice would you give to aspiring VFX artists looking to work on big projects like this?

    I think anything you have passion for, you will put in time with, and naturally become more practised and skilled at. Keep referencing the world around you and use it for inspiration. It holds the answers to everything!

    A big thank you to David for his insights!
Watch this space and follow us on social media for more Venom news. Craving more Venom? Enjoy this reel of (just some of!) our favourite DNEG shots from each film!
  • CNET Shopping Experts Found 75+ Lingering Cyber Monday Deals to Shop While You Still Can
    www.cnet.com
Written by Adam Oram. Our expert, award-winning staff selects the products we cover and rigorously researches and tests our top picks. If you buy through our links, we may get a commission. CNET's expert staff reviews and rates dozens of new products and services each month, building on more than a quarter century of expertise.

It's hard to believe we're pretty much at the end of Cyber Monday, but it's not over until the last deal disappears. As things wrap up, we're still seeing plenty of top tech and other amazing deals up for grabs. This is the last major sales event of 2024, so you really don't want to miss out on the chance to snag what you need ahead of the holidays at a deep discount. Black Friday delivered for bargain hunters in spades, and so has Cyber Monday. We've seen new record-low prices on tons of laptops, TVs, tablets, mattresses and headphones throughout the event, and many are still around. We're keeping this page updated past the end of the day on Monday to bring you any lingering deals.

This page serves as your personal deal-hunting companion. We're updating it regularly with Cyber Monday's remaining top sales from CNET's shopping experts. Keep an eye on this space so you won't miss any amazing deals as things dwindle.

Best Cyber Monday deals

Amazon Echo Frames (3rd Gen) and Echo Pop bundle: $170. Amazon's Echo Pop is a solid entry-level smart speaker, and bundled with the Echo Frames, you won't see a better deal. With Alexa built into your glasses, you'll have access to all you need, including those epic playlists while on the go.
$170 at Amazon

Samsung Galaxy S24 Ultra (512GB): $1,070. If you've been wanting to upgrade your phone, the Samsung Galaxy S24 Ultra is 25% off right now. The S24 Ultra has a sharp 5x optical zoom, a fast processor and a beautiful display, so you'll have plenty of room to play with Samsung's new Galaxy AI features. This is also one of our favorite Samsung phones of the year. $1,070 at Amazon

Apple AirPods Max: $399. If you want the sound quality of the AirPods Pro with the comfort and security of over-ear headphones, the AirPods Max are for you. $399 at Amazon

Samsung Galaxy Watch 7: $203. Get the best Android smartwatch experience at its lowest price yet. Even with last year's Galaxy Watch 6 down to $140, the improved health sensor array, the smoother yet more efficient processor and the new gesture controls on the Galaxy Watch 7 can make all the difference in everyday use, especially for those with extra-small or irregularly shaped wrists. Samsung's Galaxy Health suite remains entirely free -- unlike Fitbit on the Pixel Watch 3 -- and its integration with both Samsung Galaxy phones and non-Samsung Android phones is top tier. Samsung's customizable watch faces, like the new Ultra Info Board or the updated (and GIF-supporting) Photos face, let your watch feel as futuristic, retro or personal as you desire. $203 at Amazon

Dyson V8 Cordless Vacuum Cleaner: $329. The Dyson V8 is a somewhat older model but still an extremely reliable, powerful Dyson vacuum for hundreds less than the brand's latest cleaning tech. $329 at Amazon

Brecious juicer: $60. Back down to its all-time low price, this juicer is perfect for someone starting out on their juicing journey, as it's easy to set up, use and clean. It has two speeds and a quiet mode. The small feed chute is even safe enough for children to use.
$60 at Amazon

LG 55-inch G4 OLED Evo TV: $1,797. A 55-inch display is large enough for most spaces, and this particular model is equipped with an OLED screen. It also has Dolby Atmos sound, Dolby Vision, an A11 AI processor with upscaling, and gaming features such as Nvidia G-Sync, AMD FreeSync Premium and up to a 144Hz refresh rate. Snag this TV while it's at an all-time low price. $1,797 at Amazon

Echo Show 8 and Blink Video Doorbell bundle: $90. One of Amazon's best smart home deals offers both the Echo Show 8 (3rd Gen) and the Blink Video Doorbell in black for 57% off. This bundle is a great way to get a great screen for streaming the likes of Netflix alongside a handy camera to keep an eye on things on your doorstep. $90 at Amazon

Kindle Paperwhite Signature Edition: $155. The latest and greatest ebook reader from Amazon is good for a marathon reading session with its 12 weeks of battery life. Better still, it has a refreshed display with even faster page turns and a perfect paper-like contrast ratio. It's the perfect reading companion. $155 at Amazon

Razer Viper V2 Pro HyperSpeed wireless gaming mouse: $80 (save $70). This mouse is a solid tool for competitive PC gaming thanks to its lightweight edge, and it's now at the lowest price we've seen in 2024.

Anker Prime charger block: $46 (save $39). This foldable and compact 100-watt USB-C charger has two USB-C ports and a USB-A port.

Monster 100-foot smart LED light strip: $18 (save $17). This LED light strip supports Razer Chroma, so gamers can sync the strip with their computer and PC peripherals. It can also sync to music and supports Alexa, Google Assistant and Siri.

Aqara U100 smart door lock: $130 (save $100). This smart lock supports Apple's Home Key technology, Amazon's Alexa and more.

Govee TV Backlight 3: $47 (save $23).
This kit adds lights to the back of your TV that sync up with whatever you are watching or playing, and kits for multiple TV sizes are all discounted today.

Roku Streambar SE: $69 (save $31). Roku's 2-in-1 sound and streaming bar is over 30% off. CNET's David Carnoy recommends the Streambar SE as a "solid upgrade over most built-in TV speakers."
  • Today's NYT Mini Crossword Answers for Tuesday, Dec. 3
    www.cnet.com
Looking for the most recent Mini Crossword answer? Click here for today's Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands and Connections puzzles.

The New York Times Crossword Puzzle is legendary. But if you don't have that much time, the Mini Crossword is an entertaining substitute. The Mini Crossword is much easier than the old-school NYT Crossword, and you probably can complete it in a couple of minutes. But if you're stuck, we've got the answers. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.

The Mini Crossword is just one of many games in the Times' games collection. If you're looking for today's Wordle, Connections and Strands answers, you can visit CNET's NYT puzzle hints page.

Read more: Tips and Tricks for Solving The New York Times Mini Crossword

Let's get at those Mini Crossword clues and answers. The completed NYT Mini Crossword puzzle for Dec. 3, 2024. NYT/Screenshot by CNET

Mini across clues and answers
1A clue: Physicians' degrees. Answer: MDS
4A clue: Good name for a theology professor. Answer: FAITH
6A clue: Space-related prefix. Answer: ASTRO
7A clue: Still in bed. Answer: NOTUP
8A clue: Garden figurine in a pointy hat. Answer: GNOME

Mini down clues and answers
1D clue: Worker with bricks and mortar. Answer: MASON
2D clue: "Me, too!" Answer: DITTO
3D clue: Play, as a mandolin. Answer: STRUM
4D clue: One of two in a king cobra's mouth. Answer: FANG
5D clue: Good name for an optimist. Answer: HOPE

How to play more Mini Crosswords
The New York Times Games section offers a large number of online games, but only some of them are free for all to play. You can play the current day's Mini Crossword for free, but you'll need a subscription to the Times Games section to play older puzzles from the archives.
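One nice property of crossword answers is that the across and down words must agree letter for letter wherever they cross, so the downs can be derived from the acrosses alone. The sketch below rebuilds the 5x5 grid from the across answers and reads the down answers back out of it; the layout (which squares are blocked and where each numbered answer starts) is inferred from the answers themselves, not stated in the article.

```python
# Rebuild the Dec. 3, 2024 Mini Crossword grid from the across answers,
# then read the down answers out of the same letters.
BLOCK = "."

# (row, starting column, answer); the layout is inferred: row 1 uses cols 2-4.
ACROSS = [
    (0, 1, "MDS"),    # 1A
    (1, 0, "FAITH"),  # 4A
    (2, 0, "ASTRO"),  # 6A
    (3, 0, "NOTUP"),  # 7A
    (4, 0, "GNOME"),  # 8A
]

grid = [[BLOCK] * 5 for _ in range(5)]
for row, col, word in ACROSS:
    for i, letter in enumerate(word):
        grid[row][col + i] = letter

def read_down(row, col):
    """Collect letters downward from (row, col) until a block or the edge."""
    letters = []
    while row < 5 and grid[row][col] != BLOCK:
        letters.append(grid[row][col])
        row += 1
    return "".join(letters)

# Starting squares for the down answers (also inferred from the layout).
down = {
    "1D": read_down(0, 1),
    "2D": read_down(0, 2),
    "3D": read_down(0, 3),
    "4D": read_down(1, 0),
    "5D": read_down(1, 4),
}
```

Running this yields MASON, DITTO, STRUM, FANG and HOPE for 1D through 5D, matching the listed down answers, which confirms the two answer lists are mutually consistent.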
  • What Caused This Seven-Mile Scar in Australia's Outback?
    www.scientificamerican.com
December 2, 2024. 4 min read

What Caused This Seven-Mile Scar in Australia's Outback Seen on Google Earth?

A man scouring Google Earth found a mysterious scar in the Australian outback. And now scientists know what caused it.

By Matej Lipar & The Conversation US

This Google Earth image shows a mysterious scar etched into Australia's barren landscape. Google Earth

The following essay is reprinted with permission from The Conversation, an online publication covering the latest research.

Earlier this year, a caver was poring over satellite images of the Nullarbor Plain when he came across something unexpected: an enormous, mysterious scar etched into the barren landscape. The find intrigued scientists, including my colleagues and me. Upon closer investigation, we realised the scar was created by a ferocious tornado that no one knew had occurred. We outline the findings in new research published today.

Tornadoes are a known threat in the United States and elsewhere. But they also happen in Australia. Without the power of technology, this remarkable example of nature's ferocity would have gone unnoticed. It's important to study the tornado's aftermath to help us predict and prepare for the next big twister.

Tornadoes are not just a U.S. phenomenon; they can occur in Australia, too. Grey Zone/Alamy Stock Photo

Australia's tornado history

Tornadoes are violent, spinning columns of air that drop from thunderstorms to the ground, bringing wind speeds often exceeding 200 kilometres an hour. They can cause massive destruction, uprooting trees, tearing apart buildings and throwing debris over large distances. Tornadoes have been reported on every continent except Antarctica.
They most commonly occur in the Great Plains region of the United States, and in the north-east India-Bangladesh region.

The earliest tornado observed by settlers in Australia occurred in 1795 in the suburbs of Sydney, but a tornado was not confirmed here by Western scientists until the late 1800s. In recent decades, documented instances in Australia include a 2013 tornado that crossed north-east Victoria and travelled up to the New South Wales border. It brought winds between 250 and 300 kilometres an hour and damaged Murray River townships. And in 2016, a severe storm produced at least seven tornadoes in central and eastern parts of South Australia.

It's important for scientists to accurately predict tornadoes, so we can issue warnings to communities. That's why the Nullarbor tornado scar was useful to study.

A whirlwind mystery

The Nullarbor Plain is a remote, dry, treeless stretch of land in southern Australia. The man who discovered the scar had been using Google Earth satellite imagery to search the Nullarbor for caves or other karst features. Karst is a landscape underlain by limestone, featuring distinctive landforms. The discovery of the scar came to the attention of my colleagues and me through the collaborative network of researchers and explorers who study the Nullarbor karst.

The scar stretches from Western Australia over the border into South Australia. It lies 20 kilometres north of the Trans-Australian Railway and 90 kilometres east-north-east of Forrest, a former railway settlement. We compared satellite imagery of the site over several years to determine that the tornado occurred between November 16 and 18, 2022.
Blue circular patterns appeared alongside the scar, indicating pools of water associated with heavy rain. My colleagues and I then travelled to the site in May this year to examine and photograph the scar and the neighbouring landscape. Our results have been published in the Journal of Southern Hemisphere Earth Systems Science.

What we found

The scar is 11 kilometres long and between 160 and 250 metres wide. It bears striking patterns called cycloidal marks, formed by tornado suction vortexes. This suggests the tornado was no ordinary storm but one in the strong F2 or F3 category, spinning with destructive winds of more than 200 kilometres an hour. The tornado probably lasted between seven and 13 minutes. Features of the scar suggest the whirling wind within the tornado was moving in a clockwise direction. We also think the tornado moved from west to east, which is consistent with the direction of a strong cold front in the region at the time.

"Cycloidal marks", caused by multiple vortexes, can be seen in the tornado scar in the Australian outback. Google Earth

Local weather observations also recorded intensive cloud cover and rainfall during that period in November 2022. Unlike tornadoes that hit populated areas, this one did not damage homes or towns. But it left its mark nonetheless, eroding soil and vegetation and reshaping the Earth's surface. Remarkably, the scar was still clearly visible 18 months after the event, both in satellite images and on the ground. This is probably because vegetation grows slowly in this dry landscape, so it hadn't yet covered the erosion.

Predict and prepare

This fascinating discovery on the Nullarbor Plain shows how powerful and unpredictable nature can be, sometimes without us knowing. Only three tornadoes have previously been documented on the Nullarbor Plain. This is likely because the area is remote with few eyewitnesses, and because the events do not damage properties and infrastructure.
Interestingly, those three tornadoes occurred in November, just like this one.Our research provides valuable insights into the tornadoes in this remote and little-studied region. It helps us understand when, and in what conditions, these types of tornadoes occur.It also emphasises the importance of satellite imagery in identifying and analysing weather phenomena in remote locations, and in helping us predict and prepare for the next big event.And finally, the results are a stark reminder that extreme weather can strike anywhere, anytime.This article was originally published on The Conversation. Read the original article.
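The figures in the article hang together under some quick arithmetic: an 11-kilometre scar is just under seven miles (hence the headline), and a seven-to-13-minute lifetime over that distance implies the tornado travelled at roughly 50 to 95 km/h. A back-of-envelope sketch, assuming a roughly straight path:

```python
# Sanity-check the article's numbers: scar length in miles, and the
# forward (translational) speed implied by the estimated duration.
KM_PER_MILE = 1.609344

scar_length_km = 11.0
scar_length_miles = scar_length_km / KM_PER_MILE  # just under 7 miles

# 11 km covered in 7-13 minutes, converted to km/h.
fast_kmh = scar_length_km / (7 / 60)   # if it lasted only 7 minutes
slow_kmh = scar_length_km / (13 / 60)  # if it lasted the full 13 minutes

print(f"{scar_length_miles:.1f} miles; {slow_kmh:.0f}-{fast_kmh:.0f} km/h forward speed")
```

Note the distinction: the 200+ km/h figure in the article is the rotational wind speed inside the funnel, while this estimate is the much slower speed at which the tornado itself moved across the ground.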