• 'We want to pay it forward': Funding Societies raises $25M to boost capital for SMEs in Southeast Asia
    techcrunch.com
Small and medium-sized enterprises (SMEs) account for nearly 50% of Southeast Asia's GDP, contributing to job creation, innovation, and overall economic expansion. Nevertheless, as in other parts of the world, SMEs in Southeast Asia face challenges when it comes to securing sufficient working capital. In a nutshell, SMEs are typically deemed too risky for traditional banks to lend to, so those banks charge high rates, if they approve them at all.

Kelvin Teo and Reynold Wijaya, two entrepreneurs from Southeast Asia who met while both were getting graduate degrees at Harvard Business School (HBS), were acutely aware of that gap back home. Inspired by HBS's stated mission to make a difference in the world, they set out to address it. "We had grown up as underdogs, felt privileged to be at HBS and wanted to pay it forward to Southeast Asia," Teo said in an interview with TechCrunch. "SMEs resonate with us and financing is their biggest pain point."

Their startup, Funding Societies, is a Singapore-based SME lending platform with licensed and registered offices in Indonesia, Malaysia, Thailand, and Vietnam. On the back of strong growth across the region to date (it has loaned more than $4 billion to over 100,000 businesses), the fintech startup has been on a funding tear, too, most recently raising $25 million in equity.

The investment comes from a single investor: Cool Japan Fund (CJF), Japan's sovereign wealth fund. Notably, this marks the fund's first investment in a fintech company in Southeast Asia. The recent funding brings the total raised by Funding Societies to approximately $250 million in equity. Investors have included strategic backers such as Khazanah Nasional Berhad and Maybank, which put in $40 million less than a year ago, as well as SoftBank Vision Fund 2, CGC Digital, SBVA (previously SoftBank Ventures Asia), Peak XV Partners (formerly known as Sequoia Capital India), and Alpha JWC Ventures, among others.

Funding Societies was founded in Singapore in 2015 on the back of the two founders' collective backgrounds. Teo previously worked at Accenture, McKinsey, and KKR Capstone, while Wijaya had experience in a family business in Indonesia. After deciding to build a business to work with SMEs, the duo spent around three years researching the most groundbreaking companies in the U.S., analyzing their journeys to the top.

The company says that it has loaned more than $4 billion in business financing to date to around 100,000 SMEs across its five Southeast Asian countries, up from $3 billion in April 2023. Additionally, it has generated annualized payments gross transaction value (GTV) of more than $1.4 billion since expanding into its payments business in 2022.

The startup plans to use the money to expand its primary focus: providing financing services faster to SMEs in Singapore, Indonesia, Malaysia, Thailand, and Vietnam. It is also investing in AI to digitize and automate the lending application process and to grow its payments business, launched in 2022. On top of that, through a partnership with CJF, it will offer financial services to back Japanese companies that are already operating businesses in Southeast Asia, looking to expand their presence there, or entering new markets in the region, Teo told TechCrunch.

The startup provides a wide range of financing options, including term loans, micro-loans, receivable/payable financing, revolving loans, and asset-backed business loans, ranging from $500 to $2 million, to meet the diverse needs of businesses at different stages.
Many companies use the funds for working capital or as bridge loans to scale up.

One of the things that sets the startup apart from competitors like Validus and Bluecell Intelligence is that it offers a one-stop-shop service, from short-term financing to supply chain financing, via online and offline channels and partnerships, as well as payment offerings, according to the company's CEO.

Revenue from digital financial services in Southeast Asia is expected to rise, with digital lending leading the way and making up about 65% of the total revenue, according to the e-Conomy SEA 2024 report.

More consolidation expected in credit fintech in SEA

Since a mammoth $144 million Series C+ funding round led by SoftBank Vision Fund 2 in February 2022, the Southeast Asian SME lending market has consolidated significantly, making the startup even stronger as a market leader, Teo claimed.

Ironically, one company's crisis could become Funding Societies' gain. Teo said the company expects more consolidation among fintechs focusing on credit in Southeast Asia. That is because many companies are reaching the end of their runways and are unable to raise more money in the still-sluggish SEA funding climate. Those that have focused on single countries are especially vulnerable, he added. "Since SoftBank Vision Fund's investment in February 2022, the macro market has changed considerably, with U.S. banks collapsing, impacting credit supply to non-bank lenders," Teo told TechCrunch. U.S. rate hikes have also raised the cost of funds. Up until September, the macro market faced rate hikes that took rates to a 23-year high, and geopolitics have hurt SMEs and raised non-performing loans, he added.

In this challenging period, in December 2022, the company made its first acquisition: Sequoia-backed payments fintech CardUp. This almost tripled its revenue while keeping its headcount almost flat. Teo also noted that the startup made investments in three companies during the period, including a fintech company and a startup specializing in POS software.

A social and economic impact report that the startup produced with the Asian Development Bank (ADB) in 2020 found that Funding Societies-backed MSMEs contributed $3.6 billion to GDP and created approximately 350,000 new jobs. In addition, it helped SMEs boost their revenue by 13% through quick disbursement and a simple application process, according to the company.
• Exclusive: Google's Gemini is forcing contractors to rate AI responses outside their expertise
    techcrunch.com
Generative AI may look like magic, but behind the development of these systems are armies of employees at companies like Google, OpenAI, and others, known as prompt engineers and analysts, who rate the accuracy of chatbots' outputs to improve the AI.

But a new internal guideline passed down from Google to contractors working on Gemini, seen by TechCrunch, has led to concerns that Gemini could be more prone to spouting inaccurate information on highly sensitive topics, like healthcare, to regular people.

To improve Gemini, contractors working with GlobalLogic, an outsourcing firm owned by Hitachi, are routinely asked to evaluate AI-generated responses according to factors like truthfulness. Until recently, these contractors were able to skip certain prompts, and thus opt out of evaluating various AI-written responses to those prompts, if a prompt was well outside their domain expertise. For example, a contractor could skip a prompt asking a niche question about cardiology because the contractor had no scientific background.

But last week, GlobalLogic announced a change from Google: contractors are no longer allowed to skip such prompts, regardless of their own expertise. Internal correspondence seen by TechCrunch shows that the guidelines previously read: "If you do not have critical expertise (e.g. coding, math) to rate this prompt, please skip this task." But now the guidelines read: "You should not skip prompts that require specialized domain knowledge." Instead, contractors are being told to rate the parts of the prompt they understand and include a note that they don't have domain knowledge.

This has led to direct concerns about Gemini's accuracy on certain topics, as contractors are sometimes tasked with evaluating highly technical AI responses about issues like rare diseases in which they have no background. "I thought the point of skipping was to increase accuracy by giving it to someone better?" one contractor noted in internal correspondence seen by TechCrunch.

Contractors can now only skip prompts in two cases: if they are completely missing information, like the full prompt or response, or if they contain harmful content that requires special consent forms to evaluate, the new guidelines show. Google did not respond to TechCrunch's requests for comment by press time.
  • Canoo furloughs workers and idles factory as it scrapes for cash
    techcrunch.com
Struggling EV startup Canoo says it has furloughed 82 employees and is idling its factory in Oklahoma while it grasps for the capital needed to survive. The company claims it is in advanced discussions with various capital sources to raise emergency funding.

The announcement comes just a few days after board member James Chen resigned, and roughly one month after the company saw its chief financial officer and head lawyer depart. Canoo is also facing multiple lawsuits from suppliers over alleged late payments.

The new furloughs cap what has been a rough year for the startup. The company has undergone multiple rounds of layoffs and furloughs, and closed the Los Angeles office that used to serve as its headquarters. Canoo's chief technology officer left in August, and all of the company's founders are now gone. In the meantime, it has been kept afloat by loans from the venture firm run by its CEO, Tony Aquila.

It's unclear what Canoo was making at its facility in Oklahoma before deciding to pause operations there. So far, the company has delivered electric vans to NASA, USPS, Walmart, and the Department of Defense for testing. But it has failed at its broader ambitions of ramping up manufacturing for other commercial customers.

In an unsigned statement, Canoo said: "We regret having to furlough our employees, especially during the holidays, but we have no choice at this point. We are hopeful that we will be able to bring them back to work soon." Aquila did not immediately respond to a request for comment.
  • Designing Sci-Fi Props for Film
    thegnomonworkshop.com
Photoshop, Fusion, Maya & Substance Painter Workflow with Kris Turvey

In this 3-hour workshop, concept artist Kris Turvey demonstrates his complete workflow for designing and modeling a sci-fi prop for production. Beginning with his mind-mapping process for generating ideas, he explains how to develop concepts quickly with rapid sketching in Photoshop. He then demonstrates how simple techniques using the Airbrush and Selection tools can take your ideas to a more advanced presentation stage.

Moving on from Photoshop, Kris takes a selected design into Fusion, showcasing the various parametric 3D modeling techniques he uses to create fabrication-ready 3D models. To complete the workflow, he then demonstrates how to take your final Fusion model into Substance Painter to apply materials and generate final presentation renders efficiently.

Throughout the workshop, Kris shares an in-depth look at his key design principles for designing objects that quickly and effectively communicate their functionality to an audience. By the end of the tutorial, artists will have developed a sound understanding of the professional techniques needed to underpin strong designs and will understand how to develop a solid pipeline for future concept design projects.

Kris's final Fusion file is included as a downloadable project file with this workshop.

WATCH NOW
  • Modeling For Film & TV: Hard-Surface Vehicles
    thegnomonworkshop.com
Workflow Tips & Tricks using Maya, Redshift & Photoshop with Josh Docherty

If you want to push your models to the industry gold standard, this workshop will show you how professional modelers working in film and TV create world-class, production-quality vehicles.

When modeling assets in the visual effects industry, artists are often supplied with scans or photogrammetry of the subject to retopologize and match to the references. However, there are frequently times when this isn't the case, and in those instances it's essential to know what to do and how to tackle these scenarios working from limited references only, which is expected of every modeler working on complex shows, often under very quick turnarounds.

In this 3-hour workshop, Josh Docherty takes you through the process of modeling a high-resolution vehicle without a scan, relying on camera lineups and using his tips, tricks, and industry knowledge to create the same high-quality, pipeline-ready asset in a limited amount of time.

This tutorial teaches how to create a clean, robust model for downstream use by other departments, including Texturing/Lookdev, Rigging, Animation, FX, and Lighting. Different departments often have many requirements that modelers are responsible for ensuring are considered and fulfilled to the best of their abilities; this helps the entire pipeline perform efficiently for all things model-related.

Josh demonstrates many of the best practices and actions artists can take to ensure assets function at their best, using native Maya tools to achieve the same results at home, all while adhering to the same rules as the largest pipelines in the industry. This workshop is best suited for junior-level modelers and above.

The references used for this workshop are from Bring a Trailer, and the HDRI featured is from PolyHaven. Josh uses PureRef, Maya, Arnold, Redshift (optional), Lightroom & Photoshop as his tools of choice, though the techniques demonstrated can be applied to other software.

WATCH NOW
  • Maya for Animators: Body Mechanics
    thegnomonworkshop.com
Fundamental Animation Principles with Erik A. Castillo

In this intermediate-level workshop, following on from Introduction to Maya for Animators, Erik Castillo explores the core principles that bring animated characters to life. Using Maya, Erik focuses on understanding and applying fundamental animation concepts like squash and stretch, anticipation, timing, balance, and follow-through to create realistic and expressive movements.

Designed for those familiar with Maya, this workshop aims to help animators elevate their understanding of how body mechanics affect a character's movement and overall performance. Through practical, hands-on exercises, artists will explore the process of creating believable actions such as lifting objects, standing, walking cycles, and more.

The workshop includes an analysis of famous animation sequences to see how professionals tackle complex body mechanics. By the end, students will clearly grasp how to make characters move naturally and with intent, using the principles discussed to refine and polish their animation sequences.

Whether you're looking to solidify your fundamentals or gain new insight into body mechanics, this workshop offers the essential building blocks to improve your animation skills and deepen your understanding of character movement in CG.

The demonstrations in the workshop utilize ProRigs.

WATCH NOW
  • Eve Chauvet and Bertrand Cabrol Join The Yard VFX
    www.awn.com
The Yard VFX has hired CG supervisors Eve Chauvet and Bertrand Cabrol. Both newcomers to the company boast expertise and extensive international experience at leading studios, most notably in Canada.

With over 20 years in the industry, Chauvet has built a career working in Amsterdam, London, Vancouver, and Montreal. She honed her expertise as a 2D and 3D artist and environment/CG supervisor at MPC, Framestore, and DNEG. Cabrol brings a decade of experience, primarily in Montreal, where he progressed from an FX and environment artist to supervisor at Framestore and DNEG.

"I am thrilled to join The Yard team in Montpellier," said Chauvet. "Returning to Montpellier, where I studied, and having the opportunity to work on both international and French projects is an incredible chance. I feel fortunate to reconnect with our French craftsmanship and, alongside Bertrand, bring a touch of our experience from some of the world's leading studios."

"Having joined The Yard a few months ago, I am delighted to see that my initial experiences with the organization have met my expectations and align perfectly with the ambitions and challenges I've set for my career," said Cabrol. "With the right people, the necessary support, and the vision driven by the management team, I see even the most ambitious projects becoming achievable."

Among other major films, both contributed to Disney's live-action remake of Dumbo. Cabrol's other credits include Blade Runner 2049, Deadpool 2, Watchmen, and The Mandalorian. Meanwhile, Chauvet has worked on Star Wars: Skeleton Crew, For All Mankind, Alien: Covenant, Ghost in the Shell, Last Night in Soho, and Fantastic Beasts.

"We are delighted to welcome Eve and Bertrand to The Yard," says Laurens Ehrmann, founder and Senior VFX Supervisor. "Their arrival marks a new chapter for The Yard, enabling us to continue developing our studio structure with senior talent in both Paris and Montpellier. I am sure their experience will benefit both our team and future clients."

Source: The Yard VFX

Journalist, antique shop owner, and aspiring gemologist, L'Wren brings a diverse perspective to animation, where every frame reflects her varied passions.
• It's Much More Than Horseplay for DNEG on Venom: The Last Dance VFX
    www.awn.com
Going from winning an Oscar for Tenet to working on Venom: The Last Dance, DNEG VFX Supervisor David Lee has certainly seen his share of varied visual effects, everything from invisible VFX to CGI spectacle. For his work on Venom: The Last Dance, Marvel and Sony Pictures' final installment in their Symbiote trilogy, starring Tom Hardy and directed by Kelly Marcel, Lee oversaw DNEG's delivery of 500 shots across 11 sequences.

The DNEG VFX team's work included creating new characters like the Venom Horse, the Xenophage, and the green Symbiote, as well as familiar ones such as Venom and Wraith Venom. They also handled a dance sequence with Mrs. Chen, a high-altitude battle on top of an airplane, skydiving, and a split-face conversation between the host and the parasite. "It's sometimes nice to be matching reality so closely because you have such good references and are trying to make the highest level of output that you can," states Lee. "But something like Venom has a subjective look for a lot of these characters, especially the ones we're doing in the latest film. I enjoy both types of work."

Central to the success of the Venom franchise has been convincing the audience that there is an alien inhabiting the body of a human (Tom Hardy) that now and then forcefully emerges in a physical form. "With a lot of these types of effects you're always trying to create so many points of interest that you're not giving the eye long enough to necessarily linger on any one component," remarks Lee. "For us, it was looking at how this would really come out of the body. At first you might have some idea of a seepage coming through the skin, so that provides a base layer. The skin itself starts to turn black, showing a bit of subsurface oil running underneath. Then you might get a layer coming on top of it that starts to build and build. It's about increasing the complexity of these layers, so it feels organic."

Another key to the franchise's success has been the Abbott and Costello-type interplay between Hardy and the Symbiote that adds humor to the proceedings. "It is a testament to Tom Hardy and his performances in terms of how he makes it work and voices Venom and has these dialogues, essentially, between himself. There are always multiple versions that we can try to use and make it work. It's an interesting challenge to get that eyeline working on some of these shots because sometimes you don't want it to be quite where Tom might be looking."

Adding additional complexity to the work was that not all scenes take place at night. "That brought with it a particular set of challenges in terms of how you get something that has only been seen in these dark, moody environments working in full daylight," states Lee. "You don't want it to be a black hole in the shot. So, it's a fine balance. We would end up taking some creative license with the lighting. Normally if you were out in the desert, you've got one source light, which is the sun. That's going to give you a defined reflection. But Venom doesn't work like that. Because he is only black, literally Venom is just reflection. We were having to cheat a lot of these little point lights all around him while still trying to allow the environment to motivate that lighting, so it doesn't look like it's popping out of the scene. But that then gives us these nice defining reflections that shape him. We would do subtle things like increasing the level of bounce coming off the ground. We would give compositing a whole lot of these lighting passes, and that would be where the balancing started. Once we got that look it would be fed back to lighting again so they could adjust it on their end, and then everything else begins coming through more out of the box. But there was still a lot of comp work to balance that."

Fluid simulations are an essential aspect of Venom Wraith. "Wraith is a head with essentially these oily liquid tendrils that seep all of the way over," Lee explains. "And even where it joins the head, you have this paint-dripping effect that comes down over the top of his face. We altered the tendrils a little bit from previous shows because the technology has changed. But once you get that setup, as long as you have a rigorous base to work on, they flowed through quite seamlessly. Apart from his facial performance, which is driven by the animation, what we do is attach a rudimentary pipe to the head that then drives where the effects tendrils would go. It's almost like a broad transformation tool that then drives the effects work on top of that. Venom is a show that relies heavily on effects. If you saw an early version of Wraith that had a pipe coming out attached to Tom's back, your first instinct is to say, 'Is that how it is? Because that looks terrible.' It's making sure that we're taking everyone along for the ride so that they understand that these early versions are not necessarily indicative of the final look, but are the place where we want to get direction in terms of: does Wraith's neck bend with a softer curve, or does it want to be tighter coming over the shoulder because that's where the tendril is going to be coming in."

Motion capture was utilized when Venom dances with Mrs. Chen. "We had a dancer and choreographer who designed that sequence with Kelly Marcel," remarks Lee. "We always looked at Venom as Arnold Schwarzenegger; he's a big man, powerful, a lot of muscle mass, and moves slightly differently. How does that work? Luckily YouTube is a fountain of different people dancing, so we referenced that heavily. That meant when the dancer gave us the motion capture data, we would then have to take that and see how we needed to move that into Venom's character. Did we need to slow some things down? There's a stylistic element in terms of trying to get a certain look and feel. Big dancers are still quite light on their feet. It's not necessarily a lumbering giant type of thing. That's where our expectations lay when we first started looking into it. But we eventually got lighter and lighter, and more fluid throughout the whole sequence. It worked quite nicely with what Kelly was going for. That's a comical part of the film."

Arm extensions aided in getting the proper interaction between the extremely tall and tiny dance partners. "These extensions allowed Mrs. Chen to place her hands on his forearm accurately," says Lee. "We could keep as much of her performance as we possibly could. For eyelines, which went up another foot or so, we had the same thing. Our dancer had an eyeline helmet on as well, so Mrs. Chen was always looking at and touching the right places."

For the first time in the trilogy, Venom takes over a creature rather than a human; in this case, it's a horse. "Initially, we dialed into Venom, which involved taking all the textured look and putting that onto the horse," Lee shares. "We thought no matter what body was being taken over by Venom, the process itself is still the same, as well as the movement. You would always finish with the head. But also, visually, it's a nice thing to finish with. The horse was shot further away with wider shots, but there was also a lot of motion because it's supposed to be running so fast. We actually started to dial down that tight, oily spec look that we had on Venom because it looked overly streaky and visually unappealing, almost being confused with noise. We started looking at Arabian horses, which are black and have this beautiful sheen that defines the muscularity and shape. In fact, they're so black that when they're backlit they are almost like silhouettes against the environment. We essentially gave Venom Horse a more horsey look."

In one sequence, Venom Horse gallops off a cliff. "That's a great homage to the first Venom, when Tom Hardy sends his tendrils down to grab the motorcycle. This is the reverse, with the horse shooting up the tendrils. That shot was heavily prevised before we started shooting it. It gave us a great base to work from. Actually, when you look at the previs compared to the finished film, that sequence is coherent compared to where we started. We did shoot elements of Tom to get a facial performance and projected that onto where it was required, such as the shot where he's coming towards the camera."

Not all the fights occur on land. In another sequence, Venom tangles with the Xenophage on top of a flying passenger airplane. "The Xenophage went through a lot of versions in terms of how it looked," explains Lee. "There is a lot of lizardy homage and an insect-like feel to it. How it moved was a fun challenge for the animation team, which spent a lot of time playing around with different movement styles. By the time we finished the film, there was a lot more of the staccato insect movement. It was difficult to rig because the way the Xenophage was designed meant that there had to be a range of motion that was unnatural for a creature of that construction. The tail splits into three tails. How does that work when it's not split? Eventually, we came upon the idea that it's essentially three triangles that can fit in, almost like a Trivial Pursuit board. It forms a whole from the parts. Once we got things like that working, it started to make a lot more sense. That helps with animation as well, because they understand the restrictions that you would have with these kinds of creatures."

The Xenophage emoted through its tail and head movement. According to Lee, "We didn't dive into these deep humanistic emotions. They were more centered within a broader animalistic style." One signature physical trait is the rotating teeth. "We would have our build department construct the external teeth, which were rigid. Our effects team would populate the interior of the mouth with these rotating teeth that pop out through these little gummy slits. It's almost like a conveyor belt, and then they come back down again."

Environments were crucial in driving the action. "On top of the plane, we definitely wanted to drive home the speed and drama," says Lee. "For Venom, we did additional cloth simulations to push the sense of him being liquid at his core. His entire body, head, and skin start to get this flutter, like a dog hanging out the window of a fast-moving car. We leaned into things like that; poses and animation were important to make sure that whenever the hand was coming up to take a new pose, it was having to fight against this horizontal wind that was coming through. The Xenophage was easier to a point because it's digging into the metal, so it's trying to claw its way up."

Generally, every single shot features a CG plane. "When the camera goes through the cabin, comes out the window, and sees Tom having his Tom Cruise moment, that was a real plane [in the studio]," Lee notes. "We ended up replacing all of the exterior." Skydiving was hard to achieve properly. "Tom was shot in the studio on wires," Lee continues. "We would keep his face and arms, and then replace the rest of him. The key with that is getting the speed and intensity of the fabric, which is why we ended up replacing the majority of Tom. We had a lot of fans going when we were shooting, but it just didn't have the terminal velocity that is so fast, with the high-frequency flicker to the cloth."

Kelly Marcel and Production Visual Effects Supervisor John Moffatt's mantra was to try to capture everything in camera. "John Moffatt and I are big fans of trying to shoot everything, at least for reference, so you know what it should look like at that time of day in that place," remarks Lee. "With the horse sequences, for example, they were all shot on drones following a motorcycle to try to get the speed and pace. But by and large we replaced the entire environment with a CG build to give us more control over the camera. The airplane and skydiving sequences are all CG environment replacements as well, along with when the horse starts running and the beginning of the river battle, where the water had to be replaced to give it a sense of rapids that weren't there on location. Venom is also running through the forest and causing this wake of broken and falling trees. That was all replaced as well. By the end, the only part that was left was the shoreline."

Noting how enjoyable the varied scope of the VFX work was for him and his team, Lee concludes, "It was fun for me to work on this kind of film because if you've got an idea about how something can be more exciting or might help the narrative, then everyone is open to those types of ideas, which is fantastic."

Trevor Hogg is a freelance video editor and writer best known for composing in-depth filmmaker and movie profiles for VFX Voice, Animation Magazine, and British Cinematographer.
  • The issue of training data: with Grant Farhall, Getty Images
    www.fxguide.com
As Chief Product Officer, Grant Farhall is responsible for Getty Images' overall product strategy and vision. We sat down with Grant to discuss the issue of training data, rights, and Getty Images' strong approach.

Training data

Artificial intelligence, specifically the subset of generative AI, has captured the imagination and attention of all aspects of media and entertainment. Recent rapid advances seem to humanize AI in a way that has caught the imagination of so many people. It has been born from the nexus of new machine learning approaches, foundational models, and, in particular, advanced GPU-accelerated computing, all combined with impressive advances in neural networks and data science.

One aspect of generative AI that is often too quickly passed over is the nature and quality of training data. It can sometimes be wrongly assumed that in every instance more data is good: any data, just more of it. Actually, there is real skill in curating training data.

Owning your own

Generative AI is not limited to large corporations or research labs. It is possible to build on a foundation model and customize it for your application without having your own massive AI factory or data centre.

It is also possible to create a generative AI model that works only on your own training data. Getty Images does exactly this with its iStock, Getty Creative Images, and API stock libraries. These models are trained only on the high-quality images approved for this use, using NVIDIA's Edify NIM built on Picasso. NVIDIA developed the underlying architecture.

"Getty's model is not a fine-tuned version of a foundational model. It is only trained on our content, so it is a foundational model in and of itself." Grant Farhall, Getty Images

Getty produces a mix of women when prompted with "Woman CEO, closeup."

Bias

People sometimes speak of biases in training data, and this is a real issue, but data scientists also know that carefully curating training data is an important skill. This is not a matter of manipulating data but rather of providing the right balance in the training data to produce the most accurate results. Part of the curation process is getting enough data of the types needed, often with metadata that helps the deep learning algorithms.

Curation means paying attention to the nature of the data that already exists in the world and the qualities of that data that can be used to make the most effective generative AI tool. At first glance, one might assume you just want the greatest possible amount of ground truth, or perfect examples, but that is not how things actually work in practice.

It is also key that the output responses to prompts provide a fair and equitable mix, especially when dealing with people. Stereotypes can be reinforced without attention to output bias.

Provenance

It is important to know whether the data used to build a generative AI model was licensed and approved for that use. Many early academic research efforts scraped the internet for data, since their work was non-commercial and experimental. We have since come a long way in understanding, respecting, and protecting the rights of artists and people in general, and we have to protect their work from being used without permission. As you can hear in this episode of the podcast, companies such as Getty Images pride themselves on having clean and ethically sourced generative AI models that are free from compromise and artist exploitation. In fact, they offer not only compensation for artists whose work is used as training data but also guarantees and, in some cases, indemnification against any possible future issues over artists' rights.

"The question that is often asked is, 'Can I use these images from your AI generator in a commercial way, in a commercial setting?' Most services will say yes," says Grant Farhall of Getty Images. "The better question is, can I use these images commercially, and what level of legal protection are you offering me if I do?" As Getty knows the provenance of every image used to train its model, its corporate customers enjoy fully uncapped legal indemnification.

Furthermore, misuse is impossible if the content is not in the training model. Farhall points out, "There are no pictures of Taylor Swift, Travis Kelce, athletes, musicians, logos, brands, or any similar stuff. None of that's included in the training set, so it can't be inappropriately generated."

Rights & Copyright

For centuries, artists have consciously or subconsciously drawn inspiration from one another to influence their work. However, with the rise of generative AI, it is crucial to respect the rights associated with the use of creative materials.

A common issue and concern is copyright. This is an important area, but it is one open to further clarification and interpretation as governments around the world respond to this new technology. As it stands, only a person can own copyright; it is not possible for a non-human to own copyright. It is unclear how open the law is, worldwide, to training on material without explicit permission, as generative AI models do not store a copy of the original.

However, it is illegal in most contexts to pass off material in a way that misrepresents it, such as implying or stating that the work was created by someone who did not create it. It is also illegal to use the likeness of someone to sell or promote something without their permission, regardless of how that image was created. The laws in each country or territory need to be clarified, but, as a rule of thumb, generative AI should be restricted by an extension of existing laws covering defamation, exploitation, and privacy rights. These laws can come into play if AI-generated content is harmful or infringes on someone's rights.

In addition, there are ongoing discussions about the need for new laws or regulations specifically addressing the unique issues raised by AI, such as the question of who can be held responsible for violations involving AI-generated content. It is important to note that just because a generative piece of art or music is stated as being approved for commercial use, that does not imply that the training data used to build the model was licensed and that all contributing artists were respected appropriately.

Generative AI

This fxpodcast is not sponsored, but is based on research done for the new Field Guide to Generative AI. fxguide's Mike Seymour was commissioned by NVIDIA to unpack the impact of generative AI on the media and entertainment industries, offering practical applications, ethical considerations, and a roadmap for the future. The Field Guide is free and can be downloaded here: Field Guide to Generative AI.

In M&E, generative AI has proven itself a powerful tool for boosting productivity and creative exploration. But it is not a magic button that does everything. It's a companion, not a replacement. AI lacks the empathy, cultural intuition, and nuanced understanding of a story's uniqueness that only humans bring to the table. But when generative AI is paired with VFX artists and TDs, it can accelerate pipelines and unlock new creative opportunities.
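To make the curation and output-bias points above concrete, here is a minimal, hypothetical Python sketch of metadata-driven rebalancing: resampling a curated image list so that no single value of an attribute (say, a perceived-gender tag on "CEO" imagery) dominates what a model sees during training. The record structure, tag names, and upsampling policy are illustrative assumptions for this article, not Getty's or NVIDIA's actual pipeline.

import random
from collections import defaultdict

def rebalance_by_attribute(records, attribute, seed=0):
    """Resample curated records so each value of `attribute` appears equally often.

    `records` is a list of dicts with curated metadata, e.g.
    {"path": "img_001.jpg", "tags": {"role": "ceo", "gender": "female"}}.
    Hypothetical structure, for illustration only.
    """
    # Group records by the attribute value recorded in their metadata.
    groups = defaultdict(list)
    for rec in records:
        groups[rec["tags"].get(attribute, "unknown")].append(rec)

    rng = random.Random(seed)
    target = max(len(group) for group in groups.values())

    balanced = []
    for group in groups.values():
        # Keep every original record, then upsample smaller groups
        # so no single attribute value dominates the training mix.
        balanced.extend(group)
        balanced.extend(rng.choices(group, k=target - len(group)))

    rng.shuffle(balanced)
    return balanced

A real curation team would balance across many attributes at once and would prefer sourcing new, licensed imagery over duplicating existing records, but even this toy version shows why "just more data" is not the goal: the mix matters as much as the volume.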
  • Cinema 4D Tutorial: VDB Volume Workflow Hack
    www.thepixellab.net
What's the difference between the RS Volume object and the Volume Loader object? Have you ever been confused as to why there are two ways to load a VDB? Which one should you use?

The VDB Volume Hack That Will Change Your Redshift Workflow Forever!

I found a workflow for VDBs that seems to be WAY better! Am I missing something? Let me know! We'll go over how to use VDB volumes in Cinema 4D and Redshift and how to optimize import, usage, scale, and more. I hope you find it useful!

If you want over 2,000 VDBs and VFX elements, check out our store now! Get your VDBs Here

Want More Quick Tips? If you want more of these, head to our YouTube channel, leave a comment, and subscribe! Leave a Comment and Subscribe Here