• TOWARDSAI.NET
    Unlock the Power of Your AWS Security: A Comprehensive Guide to Protecting Your Cloud Investments
0 likes. January 7, 2025. Last Updated on January 7, 2025 by Editorial Team. Author(s): Rudraksh. Originally published on Towards AI.

Photo by Alex Kulikov on Unsplash

As businesses continue to migrate their workloads to the cloud, the importance of cloud security cannot be overstated. With the rise of cyber threats and data breaches, it's more crucial than ever to ensure that your cloud infrastructure is secure and compliant with industry standards.

In this comprehensive guide, we'll explore the top tools and best practices for securing your AWS cloud investments. From encryption at rest and in transit to identity and access management, monitoring, and incident response, we'll cover it all.

Encryption is the process of protecting data by converting it into a coded format that can only be deciphered with the correct key. In the context of cloud security, encryption is essential for protecting sensitive data both in transit and at rest.

AWS provides a robust encryption solution through its Key Management Service (KMS). With KMS, you can create, distribute, and manage cryptographic keys to ensure that your data remains confidential.

When data is stored on an AWS resource, such as an S3 bucket or an RDS instance, it's encrypted by default. However, if you need to store sensitive data outside of the default encryption settings, you can use KMS to encrypt and decrypt … Read the full blog for free on Medium. Published via Towards AI.
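The key-wrapping pattern KMS implements is called envelope encryption: a fresh data key encrypts the payload, and the master key (which never leaves KMS) encrypts only the data key. The sketch below illustrates that flow offline with a toy XOR keystream cipher; it is NOT secure and is not the AWS SDK. In a real workload you would call KMS GenerateDataKey and encrypt the payload with AES-GCM, but the wrap/unwrap structure is the same.

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: XOR data with a SHA-256-derived keystream.
    # Stand-in for AES-GCM; for illustration only, never for real data.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def envelope_encrypt(master_key: bytes, plaintext: bytes):
    # 1. Generate a fresh data key (KMS: GenerateDataKey).
    data_key = os.urandom(32)
    # 2. Encrypt the payload locally with the data key.
    ciphertext = keystream_xor(data_key, plaintext)
    # 3. Wrap the data key under the master key (KMS holds the master key).
    wrapped_key = keystream_xor(master_key, data_key)
    # Store wrapped_key alongside ciphertext; discard the plaintext data key.
    return wrapped_key, ciphertext

def envelope_decrypt(master_key: bytes, wrapped_key: bytes, ciphertext: bytes) -> bytes:
    # Unwrap the data key (KMS: Decrypt), then decrypt the payload.
    data_key = keystream_xor(master_key, wrapped_key)
    return keystream_xor(data_key, ciphertext)

master = os.urandom(32)
wrapped, ct = envelope_encrypt(master, b"customer record")
assert envelope_decrypt(master, wrapped, ct) == b"customer record"
```

The design point this illustrates: only the small wrapped key ever needs the master key, so bulk data never transits KMS, and rotating the master key only requires re-wrapping data keys, not re-encrypting every object.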
  • TOWARDSAI.NET
    Transform Image Data into Insights with VisualInsights AI Automation
Author(s): Yotam Braun. Originally published on Towards AI.

Extracting insights from images can often feel challenging. Whether you're a researcher, an analyst, or simply curious, efficiently analyzing and understanding images is crucial but not always straightforward. This is where VisualInsight comes in.

GitHub: yotambraun/VisualInsight (github.com). Contribute to yotambraun/VisualInsight development by creating an account on GitHub.

Challenges with Traditional Image Analysis Methods

- Manual effort: finding the right tools, writing custom scripts, and working with large datasets often involves significant manual work.
- Complexity: navigating advanced algorithms, ML frameworks, or open-source projects can be overwhelming, especially for smaller teams.
- Storage and security: ensuring data is securely stored and easily retrievable adds another layer of complexity.
- Scaling: handling larger datasets requires scalable infrastructure, which often involves high overhead.

VisualInsight addresses these challenges with a seamless, automated solution for image analysis.

Figure 2: Example of the user interface where you can upload images

As you can see, the UI simplifies the process: you just drag and drop your image, no complicated scripts required.

Introducing VisualInsight

Core Idea

VisualInsight is a Streamlit-based web application that simplifies image analysis using Google Generative AI (Gemini).
It incorporates AWS S3 for secure storage of original images and results.

Figure 3: Analysis results displayed in the Streamlit application

By automating much of the heavy lifting, VisualInsight ensures you spend less time on configuration and more time on innovation.

Key Components

- Streamlit UI: a user-friendly interface for uploading, viewing, and analyzing images.
- LLM service (Google Gemini): advanced text-based insights derived from images.
- AWS S3 storage: secure storage for files and AI-generated analyses.
- Docker & Terraform: infrastructure for quick deployments and reproducibility.
- CI/CD via GitHub Actions: automated builds, tests, and deployments for reliability.

How VisualInsight Works

1. Upload an image. Drag and drop a JPG or PNG file onto the application.
2. AI analysis with Google Gemini. The uploaded image is passed to the LLMService class, which uses Google's Generative AI (Gemini) to generate descriptive insights about the image content.

Figure 4: Further analysis details being displayed to the user

3. Storage in AWS S3. Once analyzed, the application uploads both the original image and any analysis results to an S3 bucket for safekeeping.
4. Display results. Insights are displayed in the application interface for immediate feedback.

Figure 5: Another view of the analysis interface

Code Highlights

Below are some of the core services that power VisualInsight.

1. LLM Service (app/services/llm_service.py)

Handles the interaction with Google Gemini for image analysis.

```python
import google.generativeai as genai
import os
from datetime import datetime
from PIL import Image
from utils.logger import setup_logger

logger = setup_logger()

class LLMService:
    def __init__(self):
        genai.configure(api_key=os.getenv('GOOGLE_API_KEY'))
        self.model = genai.GenerativeModel('gemini-1.5-flash-002')
        self.prompt = """
        Analyze this image and provide:
        1. Image type
        2. Key information
        3. Important details
        4. Notable observations
        """

    def analyze_document(self, image: Image.Image) -> dict:
        try:
            logger.info("Sending request to LLM")
            # Generate content directly with the PIL image
            response = self.model.generate_content([self.prompt, image])
            return {
                "analysis": response.text,
                "timestamp": datetime.now().isoformat()
            }
        except Exception as e:
            logger.error(f"LLM analysis failed: {str(e)}")
            raise Exception(f"Failed to analyze document: {str(e)}")
```

What's happening here?

- The service configures Google Generative AI (Gemini) with an API key.
- A default prompt outlines the kind of analysis we want.
- The analyze_document method sends the image to Gemini and returns its text-based analysis.

2. S3 Service (app/services/s3_service.py)

Uploads files to AWS S3 with timestamped keys and generates presigned URLs for private access.

```python
import boto3
import os
from datetime import datetime
from utils.logger import setup_logger

logger = setup_logger()

class S3Service:
    def __init__(self):
        self.s3_client = boto3.client(
            's3',
            aws_access_key_id=os.getenv('AWS_ACCESS_KEY_ID'),
            aws_secret_access_key=os.getenv('AWS_SECRET_ACCESS_KEY'),
            region_name=os.getenv('AWS_REGION', 'us-east-1')
        )
        self.bucket_name = os.getenv('S3_BUCKET_NAME')

    def upload_file(self, file):
        """Upload a file to S3 and return a presigned URL."""
        try:
            # Generate a unique, timestamped key
            timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
            file_key = f"uploads/{timestamp}_{file.name}"

            # Upload to S3
            self.s3_client.upload_fileobj(file, self.bucket_name, file_key)

            # Generate a presigned URL that expires in 1 hour
            url = self.s3_client.generate_presigned_url(
                'get_object',
                Params={'Bucket': self.bucket_name, 'Key': file_key},
                ExpiresIn=3600
            )
            logger.info(f"File uploaded successfully: {url}")
            return url
        except Exception as e:
            logger.error(f"S3 upload failed: {str(e)}")
            raise Exception(f"Failed to upload file to S3: {str(e)}")
```

Figure 6: The AWS S3 bucket that stores uploaded images and analysis results

Core features:

- Uses boto3 to interact with AWS S3.
- Generates a time-stamped key for
each file.
- Creates a presigned URL for private file access without opening up the entire bucket.

3. The Streamlit Application (app/main.py)

Provides the user interface for file uploads, analysis initiation, and displaying results.

```python
import streamlit as st
import os
from dotenv import load_dotenv
from services.s3_service import S3Service
from services.llm_service import LLMService
from utils.logger import setup_logger
from PIL import Image

# Load environment variables
load_dotenv()

# Set up logging
logger = setup_logger()

# Initialize services
s3_service = S3Service()
llm_service = LLMService()

def main():
    st.title("Document Analyzer")
    uploaded_file = st.file_uploader("Upload a document", type=['png', 'jpg', 'jpeg'])

    if uploaded_file:
        # Display the uploaded image
        image = Image.open(uploaded_file)
        st.image(image, caption='Uploaded Document', use_column_width=True)

        if st.button('Analyze Document'):
            with st.spinner('Processing...'):
                try:
                    # Analyze with the LLM directly
                    logger.info("Starting document analysis")
                    analysis = llm_service.analyze_document(image)

                    # Upload to S3 for storage
                    logger.info(f"Uploading file: {uploaded_file.name}")
                    s3_url = s3_service.upload_file(uploaded_file)

                    # Display results
                    st.success("Analysis Complete!")
                    st.json(analysis)
                except Exception as e:
                    logger.error(f"Error processing document: {str(e)}")
                    st.error(f"Error: {str(e)}")

if __name__ == "__main__":
    main()
```

- Streamlit handles the UI: file upload, display, and button triggers.
- LLMService and S3Service are orchestrated together to handle the AI query and file upload.
- Real-time logs inform you of the status and highlight any issues.

Running VisualInsight Locally

1. Clone the repository:

```
git clone https://github.com/yotambraun/VisualInsight.git
cd VisualInsight
```

2. Environment setup. Create a .env file at the project root:

```
AWS_ACCESS_KEY_ID=YOUR_AWS_KEY
AWS_SECRET_ACCESS_KEY=YOUR_AWS_SECRET
AWS_REGION=us-east-1
S3_BUCKET_NAME=YOUR_BUCKET_NAME
GOOGLE_API_KEY=YOUR_GOOGLE_GENAI_KEY
```

3. Install dependencies:

```
pip install -r requirements.txt
```

4.
Run the app:

```
streamlit run app/main.py
```

Navigate to http://localhost:8501 in your browser to start using VisualInsight!

Containerization with Docker

Use Docker for consistent application behavior across environments.

Figure 7: AWS ECS used for container orchestration

Dockerfile (excerpt):

```dockerfile
FROM python:3.9-slim

WORKDIR /app

# Install dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy application code
COPY app/ .

EXPOSE 8501
ENTRYPOINT ["streamlit", "run", "main.py", "--server.port=8501", "--server.address=0.0.0.0"]
```

Build and run locally:

```
docker build -t visualinsight:latest .
docker run -p 8501:8501 visualinsight:latest
```

Visit http://localhost:8501 to use the app.

Infrastructure as Code with Terraform

Figure 8: AWS ECR, storing Docker images for the application

I use Terraform to create and manage the AWS resources used to deploy the application: S3, ECR, ECS, and more.

Why Terraform?

Terraform lets you define your cloud infrastructure as code. Rather than manually creating AWS resources via the console or CLI, you write configuration files. This keeps your infrastructure consistent, version-controlled, and easily replicable across multiple environments.

Key advantages of using Terraform:

- Reproducibility: the same configurations can be deployed multiple times without drift.
- Collaboration: teams can review Terraform files in Git, allowing for better code reviews and fewer mistakes.
- Scalability: quick spin-up of additional resources if your usage grows.

1. Example variables (infrastructure/terraform/variables.tf):

```hcl
variable "aws_region" {
  description = "AWS region"
  type        = string
  default     = "us-east-1"
}

variable "bucket_name" {
  description = "Name of the S3 bucket"
  type        = string
}
```

2.
Main configuration (infrastructure/terraform/main.tf):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
}

resource "aws_s3_bucket" "documents" {
  bucket = var.bucket_name
}

resource "aws_ecr_repository" "app" {
  name = "document-analyzer"
}

resource "aws_ecs_cluster" "main" {
  name = "document-analyzer-cluster"
}

# ... ECS service, security groups, task definition, etc.
```

Why ECR and ECS?

- Amazon ECR (Elastic Container Registry): a private registry for storing your Docker images. Instead of relying on Docker Hub or other third parties, ECR keeps your images secure within your AWS account.
- Amazon ECS (Elastic Container Service): an AWS-native container orchestration service that manages the scaling and deployment of your containerized application automatically. With Fargate (a serverless compute engine for containers), you don't have to worry about provisioning or managing EC2 instances; it abstracts away the heavy lifting.

In short: ECR stores your built Docker images, and ECS pulls those images from ECR and runs them as containers in a scalable manner.

3.
Deploying via Terraform:

```
cd infrastructure/terraform
terraform init
terraform plan -var="bucket_name=my-visualinsight-bucket"
terraform apply -var="bucket_name=my-visualinsight-bucket"
```

Terraform will:

- Create an S3 bucket.
- Create an ECR repository.
- Set up an ECS cluster, tasks, services, IAM roles, and more.

Automated CI/CD with GitHub Actions

Automate the build, test, and deployment process to ensure consistent updates. The .github/workflows/deploy.yml workflow takes care of:

- AWS login: authenticates with your AWS account using secrets.
- Docker build & push: builds the Docker image and pushes it to Amazon ECR.
- ECS update: forces a new deployment on ECS to pull the latest image.

Figure 9: GitHub Actions
Figure 10: GitHub Actions pipeline for CI/CD

Sample deploy workflow:

```yaml
name: Deploy to AWS

on:
  push:
    branches: [ main ]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1

      - name: Build and push Docker image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: document-analyzer
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG

      - name: Deploy to ECS
        run: |
          aws ecs update-service --cluster document-analyzer-cluster --service document-analyzer --force-new-deployment
```

Whenever you push to main, GitHub Actions builds and deploys your latest changes automatically.

Real-World Impact

- Time efficiency: with AI-driven analysis, there's no need for manual labeling or advanced ML pipeline setup.
- Scalability: AWS S3 + ECS means you can handle ever-growing image datasets and traffic without re-architecting.
- Reliability: Docker ensures consistent environments, Terraform standardizes infrastructure, and GitHub Actions automates testing and deployment.
- User-friendly: Streamlit's intuitive UI means non-developers can upload images and see insights in real time.

Conclusion

VisualInsight takes the guesswork out of image analysis. By combining Streamlit, Google Generative AI (Gemini), AWS S3, Terraform, and CI/CD, it delivers a robust, scalable solution that's easy to use and maintain. VisualInsight streamlines the entire workflow so you can focus on making discoveries, not wrestling with infrastructure.

Key Takeaways

- Automation reduces manual work and simplifies processes.
- Infrastructure as code promotes collaboration and reproducibility.
- Docker ensures consistency across development and production environments.
- CI/CD enables fast and reliable updates.

Feel free to clone the GitHub repository and customize it for your own project needs. If you enjoyed this, consider clapping on Medium, sharing with others, or following me for more deep dives into AI and cloud solutions!

Thanks for reading! If you enjoyed this post, please give it a clap.
Feel free to follow me on Medium.

References

- Google Gemini: Google's advanced AI model designed for multimodal data processing, including text, images, and audio.
- Streamlit: an open-source app framework for creating and sharing data applications using Python.
- AWS S3: Amazon Simple Storage Service, an object storage service offering scalability, data availability, security, and performance.
- Docker: a platform for developing, shipping, and running applications inside containers, ensuring consistency across development and release cycles.
- Terraform: an open-source infrastructure-as-code tool that enables you to safely and predictably create, change, and improve infrastructure.
- GitHub Actions: a CI/CD platform that allows you to automate your build, test, and deployment pipeline.
- AWS ECR (Elastic Container Registry): a fully managed container registry for storing, managing, and deploying Docker container images.
- AWS ECS (Elastic Container Service): a highly scalable, high-performance container orchestration service that supports Docker containers and lets you easily run and scale containerized applications on AWS.

Published via Towards AI.
  • WWW.IGN.COM
    The New AMD Ryzen 7 9800X3D Is the Best Gaming CPU, and It's Back in Stock on Amazon
Update: Back in stock for a limited time.

If you're in the process of building out a new gaming PC and you're looking for the best gaming processor, this is it. Right now, the recently released AMD Ryzen 7 9800X3D AM5 desktop processor is back in stock at Amazon at its retail price of $479 shipped. This is the official launch price with zero markup, and it's not bundled with anything you don't want or need. The AMD Ryzen 7 9800X3D is the best gaming processor currently on the market (across both AMD and Intel) and a better choice for gamers than the more expensive Intel Core Ultra 9 285K.

Highlight: AMD Ryzen 7 9800X3D Desktop Processor

AMD's X3D-series processors are optimized for gaming. In that respect, they bench better than even the most expensive of AMD's standard lineup of CPUs, thanks to AMD's 3D V-Cache technology. Although perfectly capable of handling multitasking, rendering, and creation, their limited number of cores means they aren't the ideal processors for those tasks. At its retail price of $479, the 9800X3D is $110 cheaper than the Intel Core Ultra 9 285K ($589) and $170 cheaper than the AMD Ryzen 9 9950X, even though it outperforms both of them in gaming. Unless you're a staunch Intel fan, or you're still on AM4 and don't want to upgrade all of your components, the 9800X3D is the obvious choice for your next gaming rig.

New release: AMD Ryzen 7 9800X3D AM5 Desktop Processor, $479.00 at Amazon.

In our AMD Ryzen 7 9800X3D review, Jackie Thomas wrote: "The AMD Ryzen 7 9800X3D is extremely powerful in games, which makes it easier to recommend than other recent processors like the Intel Core Ultra 9 285K or Ryzen 9 9900X.
Especially if you're building a rig with a powerful graphics card, the 9800X3D is going to be the best way to get the most performance out of whichever GPU you pair it with."

Why Should You Trust IGN's Deals Team?

IGN's deals team has a combined 30+ years of experience finding the best discounts in gaming, tech, and just about every other category. We don't try to trick our readers into buying things they don't need at prices that aren't worth paying. Our ultimate goal is to surface the best possible deals from brands we trust and that our editorial team has personal experience with. You can check out our deals standards here for more information on our process, or keep up with the latest deals we find on IGN's Deals account on Twitter.

Eric Song is the IGN commerce manager in charge of finding the best gaming and tech deals every day. When Eric isn't hunting for deals for other people at work, he's hunting for deals for himself during his free time.
  • WWW.IGN.COM
    Zack Snyder's Rebel Moon Gets Virtual Reality Game From Sandbox VR
Zack Snyder's Rebel Moon Netflix franchise is getting a virtual reality game adaptation at Sandbox VR locations around the world.

Rebel Moon: The Descent is not an adaptation of Snyder's films but an expansion of the universe, which so far amounts to two critically panned films released on Netflix. "The Rebel Moon experience allows you to enter the thrilling sci-fi franchise that's taken the world by storm," Sandbox VR said on its website.

Sandbox VR offers virtual reality experiences where multiple players can enter a space and play through games together with motion tracking, imitation guns, and the like.

"Explore the world of Daggus and descend through towering skyscrapers, gritty urban streets, and a subterranean mine as you battle against enemy soldiers, spacecraft, and more," reads the synopsis. "Choose which Rebel Fighter best represents your style, equip yourself with futuristic weaponry, and face off against the tyrannical Motherworld."

Rebel Moon: The Descent Screenshots

Rebel Moon gained a lot of momentum ahead of release as the next big project from Justice League director Snyder, even more so when he revealed it was originally pitched as a Star Wars film and looked to spawn its own universe.

In a similar vein, Snyder said at the time that a "ridiculous scale" Rebel Moon video game was in the works, though it's unclear if he meant this Sandbox VR game or something else. "This [role-playing game] that we're doing that is just literally insane, and so immersive, and so intense, and so huge," Snyder said.

Momentum ceased somewhat upon the release of Rebel Moon Part 1: A Child of Fire, however, when it earned poor reviews, including a 4/10 from IGN.
"Zack Snyder's space opera is let down by a derivative patchwork script, mediocre action sequences and a superficial story," we said.Its sequel received the same rating. "The second part of Zack Snyder's Rebel Moon space opera, The Scargiver, delivers a half-baked conclusion to a well-trodden story with flimsy character studies and lacklustre action."Ryan Dinsdale is an IGN freelance reporter. He'll talk about The Witcher all day.
  • THENEXTWEB.COM
    How can Dutch battery startups win big? Focus on supply chain pinch points, says CEO
Dutch battery startups must innovate at critical pinch points in the supply chain to compete globally, says Kevin Brundish, CEO of Eindhoven-based battery company LionVolt.

The comments come at a tough time for Europe's battery sector, which has been left reeling following the recent collapse of Northvolt. The Swedish startup's gigafactories were perhaps the continent's greatest hope for a homegrown battery success story.

Northvolt's failure serves as a cautionary tale of the immense challenges in scaling battery production, from securing supply chains to managing infrastructure costs and maintaining investor confidence. But building big and building fast isn't the only way to cash in on the battery boom.

While other European nations have focused on establishing gigafactories, with varying degrees of success, the Netherlands should leverage its historical strengths to support companies developing next-generation subcomponents, said Brundish.

ASML epitomises this strategy. The Netherlands-based firm is the sole producer of the advanced photolithography machines used by all the world's biggest chipmakers. Without ASML's machines, the entire chip supply chain would falter.

It's an approach that deep tech and climate tech startups would do well to emulate, according to Brundish. Focusing on pinch points enables startups to minimise infrastructure costs while targeting areas with high innovation potential, he said.

LionVolt spun out from TNO's Holst Centre in Eindhoven, the Netherlands, in 2020. The startup is working on a 3D lithium-metal anode that improves energy transfer in lithium-ion, sodium-ion and, in the future, solid-state batteries.

The anodes contain a film made up of billions of solid pillars, creating a patented 3D architecture with a large surface area.
Compared to conventional anodes, the ions only have to travel a short distance, which makes charging and discharging much faster.

LionVolt's anodes can be dropped into the manufacturing processes of existing gigafactories, reducing risk and lowering capital requirements. This may be key to survival for startups operating in a highly competitive global battery market.

LionVolt is one of several Dutch companies developing new ways to build better batteries, triggered by surging demand for EVs and other electronic devices.

"In 2024, the Dutch ecosystem has shown remarkable progress, particularly in the lithium-ion battery market," said Brundish.

LeydenJar, another startup from Eindhoven, is working on silicon anodes that could make lithium-ion batteries hold more charge. Meanwhile, CarbonX, a spinout from TU Delft, has developed an alternative to graphite in batteries. It's made from recycled materials and could help cut dependence on China, which has a chokehold on global supplies of graphite.

LionVolt's first pilot production line is on track to open in early 2025, with construction well underway and key equipment ordered. The company told TNW that it is now embarking on a Series A funding round as it looks for fresh capital to fuel its expansion plans.

While Brundish is optimistic about the trajectory of the Dutch deep tech ecosystem, he stressed the need for further government support and cross-border collaboration.

Given the Netherlands' relatively small size, establishing closer links with financial institutions, such as deep tech VCs, will enable the rapid focusing of government subsidies alongside VC funding, he said. Public funding must also be deployed more rapidly to nurture promising ecosystems before they lose momentum or migrate elsewhere, he added.

Story by Siôn Geschwindt. Siôn is a climate and energy reporter at TNW. From nuclear fusion to e-scooters, he covers the length and breadth of Europe's clean tech ecosystem. He's happiest sourcing a scoop, investigating the impact of emerging technologies, and even putting them to the test. He has five years of journalism experience and holds a dual degree in media and environmental science from the University of Cape Town, South Africa.
  • THENEXTWEB.COM
    Inside the AI startup refining Hollywood one f-bomb at a time
Hollywood is infamous for celebrity excess, but Tinseltown strictly controls one scandalous indulgence: swearing.

Director Scott Mann encountered these constraints after shooting the thriller Fall. Movie giant Lionsgate, best known for the John Wick, Saw, and Hunger Games franchises, wanted to release the film in the US. But the studio had big problems. Thirty-six of them, to be precise.

"They said it had too many f*cks," Mann tells TNW on a video call from LA.

All those f-bombs were pushing Fall towards an R rating, which would slash the potential audience. To secure the PG-13 needed to extend the reach, those profanities had to go.

Easier said than done. Reshoots would cost a bomb, and post-production magic couldn't scrub the dirty words. Thankfully, Mann had another trick up his sleeve. Quietly, the British filmmaker had been building a startup called Flawless that develops AI video editing tools. Fall provided a new field test: swapping f-bombs for gentler epithets.

Mann asked the cast to record cleaner verbiage. Once the audio was ready, the Flawless system went to work. The software first converted the actors' faces into 3D models. Neural networks then analysed and reconstructed the performances. Facial expressions and lip movements were synchronised with the new dialogue.

The experiment proved successful. All 36 f-bombs were replaced without a trace. Well, nearly all of them. "I did one f*ck in the end," Mann says. "I'm allowed one f*ck, apparently."

Satisfied by his restraint, the ratings board gave Fall the coveted PG-13. The film became a sleeper hit, grossing a reported $21 million against a budget of just $3 million. A sequel is now shooting in Thailand.

Buoyed by his success, Mann began commercialising the software.
The latest iteration is DeepEditor, an AI tool that refines dialogue and performances.

The studio system

DeepEditor can trim lines, insert pauses, or re-time delivery. It can even copy and paste performances from one shot to another. All the outputs offer Hollywood-grade 4K resolution, 16-bit colour depth, and ACES colour spaces.

Early access applications for the tool are now open. A full product release is slated for the first half of this year.

"It's already altering where people are shooting," says Mann. "And as it extends out, I think it's going to completely transform how we make movies."

It's also not the only tool with which Mann wants to transform movies. Around a decade ago, he began developing another AI system for filmmaking. Like DeepEditor, it began life on a Hollywood set.

The big break

After progressing through film school, British TV, and short films, Mann got his big Hollywood break in 2014. Lionsgate had offered him the director's chair for the crime thriller Heist. An all-star cast led by Robert De Niro was also on board.

Mann relished the experience. "It was a complete privilege. We were very close on the movie and really happy with the English-language version. But then I saw a foreign translation of the movie."

Mann was horrified by the dubbing. His script had been rewritten and the actors' gestures had mutated. The culprit, he discovered, pervaded the industry.

The problem stemmed from Hollywood's established translation process. When films are dubbed, the scripts are typically rewritten to fit the original mouth movements. If the new lines still don't match the old gestures, voice actors try to synchronise the two by twisting their delivery in unnatural directions. The results range from amusing to infuriating.

"It's really bad for the filmmakers and the actors, because it's not the authentic representation of their work," Mann says. "And as an experience, you're not immersed if it's not in synchronicity."

Mann began investigating novel dubbing techniques.
He explored head scans, but the rendering lacked realism. The dubbing merely moved from one uncanny valley to another.

Losing faith in established VFX, Mann started searching beyond the film industry. He soon stumbled upon a promising alternative: Deep Video Portraits.

Hollywood meets GenAI

Unveiled in 2018, Deep Video Portraits was a big breakthrough for the nascent generative AI sector. The technique enables photo-realistic reanimation of faces using just an input video. Each facial gesture and lip movement can then be synchronised with speech.

The life-like results stunned observers, including Mann. "It blew my mind," he says.

Mann reached out to the research team. They agreed to collaborate on a new technical test: making De Niro's character speak German.

The transformation, Mann says, was like magic. "It was really understanding how a certain actor might say a certain line… You retain the performance, but you can alter the synchronicity."

Expressions are digitally transferred from one person to another. Credit: Kim et al.

Mann believed the technique was ideal for Hollywood. To build the idea into a business, he sought advice from Nick Lynes, a tech industry veteran. Together, the duo co-founded Flawless in 2018.

The startup's first product was TrueSync, a dubbing tool that studios are applying to Hollywood movies. Among them is Venom: The Last Dance, a Marvel blockbuster released last year.

Flawless also showcased a sizzle reel of AI-translated trailers at this year's Cannes Film Festival. Still, not every client is ready to brag about the results.

Threatening acts

As the premiere of Fall approached, Lionsgate became anxious. GenAI was still a novel term back then, but unions were already concerned about the threats to performers. The studio feared the film's visual dubbing would spark a backlash.

"They were going to pull the release if this wasn't cleared up with the Screen Actors Guild, and there were mega nerves," Mann recalls.
"But luckily, we had planned for the consent workflows and [rights protections] early on."

Flawless built the plan on several pillars. All the data would be legitimately sourced, rather than scraped without permission as so many GenAI firms do. Every output would be fully rights-cleared. The acting would remain true to the original performances. Any significant changes would require additional consent.

The startup also restricted the system's operations. "We often call our models narrow models," says Mann. "They're large, but they're focused on a specific aspect and curated for purpose. They're very targeted and based on clean data that can be used for that purpose."

Flawless presented the plan to the Screen Actors Guild (SAG). "They gave it the thumbs up," says Mann. In August 2022, Fall was released theatrically in the US. The film and the dubbing were big successes.

Just a few months later, GenAI exploded into the mainstream. The trigger was the November launch of ChatGPT. A wave of image, text, and audio generators followed closely in its wake. Suddenly, AI's threats to actors, artists, and copyrights had become public concerns.

Another take

In July 2023, the SAG-AFTRA actors' union began the longest strike in its history. One of the guild's prime concerns was the threat posed by AI. After months of intense negotiations, the union reached a deal with Hollywood's top companies. Under the agreement, any digital alterations would require explicit consent unless they're "substantially as scripted, performed, and/or recorded."

Mann welcomed the terms. They wouldn't curb lip synchronisation for foreign-language dubbing, but would enforce strict consent requirements for any meaningful changes to scripts or performances.

The new rules presented business opportunities for Flawless. By supporting union regulations, the startup hopes to rapidly gain traction in Hollywood. Mann has made rights for actors a central tenet of the Flawless product line.
Credit: Flawless

A month after the SAG-AFTRA strike began, Flawless unveiled a new rights management platform. Named the Artistic Rights Treasury (A.R.T.), the system shares AI-generated edits with performers. If the actors approve the changes, they can consent within the app. If they don't like the new versions, they can submit their own takes.

A.R.T. has now been baked into DeepEditor. Mann believes the blend of AI editing and safeguards creates a unique product. "DeepEditor will be the first legitimate enterprise AI solution on the market," he says. "Everything else is laced with controversy and rights issues."

A better future for Hollywood?

Over time, Mann expects GenAI to unleash endless opportunities for filmmakers. He envisions shrinking costs, less drudgery, and lower barriers to entry. If all goes to plan, Hollywood will regain an appetite for originality.

"The key to this industry thriving is innovating and embracing innovation responsibly," Mann says.

Yet even he has lessons to learn about working responsibly. During production of the Fall sequel, Mann has run into a familiar problem.

"I accidentally have written far too many f*cks again," he sighs. "We had to have a conversation: we're allowed one f*k, so let's use it wisely."

Story by Thomas Macaulay, managing editor. Thomas is the managing editor of TNW. He leads our coverage of European tech and oversees our talented team of writers. Thomas also reports on developments across the ecosystem. Away from work, he enjoys playing chess (badly) and the guitar (even worse).
  • THENEXTWEB.COM
    Swave, the startup building true holographic smart glasses, bags €27M
    In the 1977 Star Wars film A New Hope, there's an iconic scene where the beloved droid R2-D2 casts a beam of light to create a hologram of Princess Leia pleading for the help of Obi-Wan Kenobi. Sadly, almost 50 years on, we're not much closer to the true holograms science fiction promised us, let alone the teleportation devices and flying cars.

Yes, we have AR and VR headsets like Microsoft's HoloLens or Apple's Vision Pro, but those simply use transparent screens to give the effect of a hologram. Even Tupac's famous "live" Coachella performance, 16 years after his death, was pulled off using a trick of light called Pepper's ghost. Nope, not a real hologram, folks.

Real holograms bend light to create 3D images that hover in the air and are visible from every angle, a bit like how Princess Leia was depicted all those years ago. Holography is a burgeoning field, and there are a few companies that have plans to commercialise the technology. One of them is Swave.

Swave spun out from Belgium's Imec, one of the world's foremost research facilities on nanoelectronics, in 2022. The company claims its Holographic eXtended Reality (HXR) display tech is the first to achieve true holography by sculpting lightwaves into life-like 3D images.

Swave recently secured €27mn in a funding round led by Belgian wealth fund SFPIM and imec.xpand, a deep tech-focused venture capital spinoff from Imec. The fresh capital follows a €10mn seed round in 2023, bringing the startup's total raised to €37mn.

"This round will accelerate Swave's product introductions as we continue to solve the challenges of today's AR experiences through true holography," said Mike Noonen, Swave's CEO.

Swave's first product is set to be a pair of lightweight smart glasses that could blow the current state-of-the-art out of the water.
The glasses have a special display that uses phase-change materials to steer light and sculpt 3D images that you can see from all angles. The company claims to have developed the world's smallest pixels (less than 300nm), which help produce clear, high-quality images without straining the eyes. The founders' ultimate goal is to create applications that can pass the "visual Turing test," where virtual reality is indistinguishable from real-world images.

To create full colour, the glasses use a spatial colour system. Instead of using multiple panels or fast switching, it arranges colour filters in a pattern on a single display panel. "This system reduces visual artefacts and improves battery life, making the glasses more efficient," said Swave.

The company believes the smart glasses, which are still in testing, will deliver a better depth of field and wider field of view than equivalent headsets while being much smaller and lighter.

Swave's glasses could also solve some common problems for AR and VR. Users could adjust holograms to their eyesight without the need for bulky gear. They could also dynamically switch their focus and change the distances of digital objects, which would reduce side effects such as nausea, eye fatigue, and headaches.

Fuelled by the fresh funding, Swave now has its sights set on a product launch. "With Series A funding secured and silicon running at our partner fabs, we are on track to introduce product development kits and soon thereafter production devices," said Dmitri Choutov, Swave's co-founder and COO.

Swave is also working on heads-up displays (HUDs) for vehicles, as well as a so-called spatial light modulator. This device would create holograms without the need for glasses at all. Now that's something that might come close to R2-D2's wizardry, or perhaps even better.

Story by Siôn Geschwindt.
Siôn is a climate and energy reporter at TNW. From nuclear fusion to e-scooters, he covers the length and breadth of Europe's clean tech ecosystem. He's happiest sourcing a scoop, investigating the impact of emerging technologies, and even putting them to the test. Siôn has five years' journalism experience and holds a dual degree in media and environmental science from the University of Cape Town, South Africa.
  • 9TO5MAC.COM
    Satechi unveils new essential OntheGo collection for 2025 [Hands-on]
    As someone who's been using Satechi products for years, I've always admired their sleek designs and focus on functionality. At this year's Pepcom event, I had a chance to go hands-on with their latest OntheGo collection, a lineup of premium travel chargers that make staying powered up on the move easier than ever. From power banks with built-in stands to versatile wireless chargers, Satechi continues to deliver practical, beautifully crafted tech accessories. Here's everything you need to know about the new OntheGo lineup:

OntheGo Power Bank

Satechi's OntheGo Power Banks (available in 10,000mAh and 5,000mAh capacities) are designed to keep your devices charged no matter where you are. Key features include:

Wireless and wired charging: Supports 15W fast wireless charging for iPhones, AirPods, and Android devices via a magnetic connection.
Simultaneous charging: Charge two devices at once using both the wireless pad and the USB-C port.
Pass-through charging: Keep devices powered up even while the power bank itself is charging, ideal for long video calls or binge sessions.
Built-in adjustable stand: A premium vegan-leather stand offers viewing angles of up to 120° for both portrait and landscape modes, perfect for StandBy mode or media playback.

OntheGo Qi2 Wireless Chargers

Satechi's OntheGo Wireless Chargers are designed for travelers who need to power multiple devices at once without compromising on style. Available in 2-in-1 and 3-in-1 configurations, these chargers offer:

Qi2 technology for faster, more reliable wireless charging.
15W charging for iPhone and 5W for AirPods.
Apple Watch fast charging (Series 7 and later) on the 3-in-1 model.
A single-cable solution to reduce clutter.

These chargers feature a lightweight circular design with premium vegan-leather accents and easily fit into a pocket or carry-on.
Adjustable functionality supports StandBy mode, Nightstand mode, and more.

Hands-on impressions

At Pepcom, I had a chance to see these chargers in action, and they feel every bit as premium as you'd expect from Satechi. The built-in stands are sturdy and convenient, especially when using features like StandBy mode on iOS. I was particularly impressed by how lightweight and portable the chargers are, making them perfect for frequent travelers. The vegan-leather details give the products a polished, modern look that matches Satechi's reputation for stylish accessories.

Pricing & availability

The OntheGo collection will be available in Q2 2025 on Satechi.net, with prices ranging from $69.99 to $99.99. Customers can sign up for email alerts to be notified once the products are available.

Satechi's OntheGo collection is shaping up to be a must-have for anyone who needs a reliable, travel-friendly charging solution. Whether you're working remotely or simply want to keep your devices powered up during a long day out, these chargers deliver performance, style, and convenience.

Add 9to5Mac to your Google News feed. FTC: We use income earning auto affiliate links. More. You're reading 9to5Mac, experts who break news about Apple and its surrounding ecosystem, day after day. Be sure to check out our homepage for all the latest news, and follow 9to5Mac on Twitter, Facebook, and LinkedIn to stay in the loop. Don't know where to start? Check out our exclusive stories, reviews, how-tos, and subscribe to our YouTube channel.
  • 9TO5MAC.COM
    Apple releases beta 2 for visionOS 2.3, tvOS 18.3, and more
    Apple has a wave of new beta software ready. Beta 2 is rolling out now for visionOS 2.3, tvOS 18.3, watchOS 11.3, and more.

Beta 2 arrives today for Apple's winter releases

Apple shipped its last wave of big software updates in mid-December and followed it up with a fresh array of betas. Now, following a brief hiatus over the holiday break, Apple has released developer beta 2 for many of its platforms. These include:

visionOS 2.3
tvOS 18.3
watchOS 11.3
HomePod 18.3
plus iOS 18.3, iPadOS 18.3, and macOS 15.3

If you have a developer account, you can find each beta 2 update inside your device's Software Update screen in Settings.

The first betas for these updates were particularly light on new features. Outside of Genmoji coming to the Mac, the main addition is robot vacuum support inside the Home app. This will ultimately, when compatible devices ship, make it possible to control your robot vacuum not only inside Apple's Home app but also via Siri across platforms like HomePod, Apple TV 4K, and more.

Here's hoping Apple has some new additions for today's beta 2 releases, whether that's in visionOS 2.3, tvOS 18.3, or the other updates. We'll be sure to keep you posted on anything and everything new we discover. Have you installed today's beta 2 updates? Found anything new? Let us know in the comments.

Best Apple TV and Home accessories
  • 9TO5MAC.COM
    iOS 18.3 beta 2 now available for developers
    After a three-week break for the holidays, Apple has kickstarted the iOS 18.3 beta train once more. iOS 18.3 beta 2 is now available to developer beta testers ahead of its expected release later this month.

iOS 18.3 beta 2 features the build number 22D5040d. Apple hasn't yet released a new public beta, but we're expecting that to change as soon as later today. There weren't many new features or changes in iOS 18.3 beta 1, and I wouldn't expect iOS 18.3 beta 2 to be any different. The update focuses on bug fixes and performance improvements and includes no new Apple Intelligence features. Changes so far include:

Robot vacuum support in the Home app
Tweaked icon for the Image Playground app
Bug fixes for the Writing Tools API and Genmoji
You can now log in to the Feedback app using Face ID or Touch ID
The icon for the Camera Control menu in the Accessibility page of Settings now supports dark mode

If you spot anything new in iOS 18.3 beta 2, let us know in the comments below, on Twitter @9to5Mac, and Threads @9to5Mac. Stay tuned for our full hands-on coverage of the new releases right here at 9to5Mac today and throughout the rest of the week.

My favorite iPhone accessories: