Implementing Text-to-Speech (TTS) with BARK Using Hugging Face's Transformers Library in a Google Colab Environment
Text-to-Speech (TTS) technology has evolved dramatically in recent years, from robotic-sounding voices to highly natural speech synthesis. BARK is an impressive open-source TTS model developed by Suno that can generate remarkably human-like speech in multiple languages, complete with non-verbal sounds like laughing, sighing, and crying.

In this tutorial, we'll implement BARK using Hugging Face's Transformers library in a Google Colab environment. By the end, you'll be able to:

- Set up and run BARK in Colab
- Generate speech from text input
- Experiment with different voices and speaking styles
- Create practical TTS applications

BARK is fascinating because it's a fully generative text-to-audio model that can produce natural-sounding speech, music, background noise, and simple sound effects. Unlike many other TTS systems that rely on extensive audio preprocessing and voice cloning, BARK can generate diverse voices without speaker-specific training.

Let's get started!

Implementation Steps

Step 1: Setting Up the Environment

First, we need to install the necessary libraries. BARK requires the Transformers library from Hugging Face, along with a few other dependencies:

```python
# Install the required libraries
!pip install transformers==4.31.0
!pip install accelerate
!pip install scipy
!pip install torch
!pip install torchaudio
```

Next, we'll import the libraries we'll be using:

```python
import torch
import numpy as np
import IPython.display as ipd
from transformers import BarkModel, BarkProcessor

# Check if a GPU is available
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using device: {device}")
```

Step 2: Loading the BARK Model

Now, let's load the BARK model and processor from Hugging Face:

```python
# Load the model and processor
model = BarkModel.from_pretrained("suno/bark")
processor = BarkProcessor.from_pretrained("suno/bark")

# Move the model to the GPU if one is available
model = model.to(device)
```

BARK is a relatively large model, so this step might take a minute or two while the model weights download.

Step 3: Generating Basic Speech

Let's start with a simple example that generates speech from text:

```python
# Define the text input
text = "Hello! My name is BARK. I'm an AI text to speech model. It's nice to meet you!"

# Preprocess the text
inputs = processor(text, return_tensors="pt").to(device)

# Generate speech
speech_output = model.generate(**inputs)

# Convert the output to an audio array
sampling_rate = model.generation_config.sample_rate
audio_array = speech_output.cpu().numpy().squeeze()

# Play the audio
ipd.display(ipd.Audio(audio_array, rate=sampling_rate))

# Save the audio file
from scipy.io.wavfile import write
write("basic_speech.wav", sampling_rate, audio_array)
print("Audio saved to basic_speech.wav")
```

Output: to listen to the audio, please refer to the notebook (the link is attached at the end of this post).
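Since BARK also renders non-verbal sounds, it's worth a quick experiment before moving on. The cue tags below ([sighs], [laughs]) are listed in Suno's Bark documentation; this is a minimal sketch that reuses the model, processor, device, and sampling_rate defined above, and because generation is sampled, the rendering will differ from run to run.

```python
# Experiment with BARK's non-verbal cue tags, reusing the objects defined above.
# [sighs] and [laughs] come from Suno's Bark documentation; exact output varies per run.
text_with_cues = "Well... [sighs] that was harder than expected. [laughs] Let's try again!"

inputs = processor(text_with_cues, return_tensors="pt").to(device)
speech_output = model.generate(**inputs)
audio_array = speech_output.cpu().numpy().squeeze()
ipd.display(ipd.Audio(audio_array, rate=sampling_rate))
```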
Step 4: Using Different Speaker Presets

BARK comes with several predefined speaker presets in different languages. Let's explore how to use them:

```python
# List the available English speaker presets
english_speakers = [
    "v2/en_speaker_0",
    "v2/en_speaker_1",
    "v2/en_speaker_2",
    "v2/en_speaker_3",
    "v2/en_speaker_4",
    "v2/en_speaker_5",
    "v2/en_speaker_6",
    "v2/en_speaker_7",
    "v2/en_speaker_8",
    "v2/en_speaker_9",
]

# Choose a speaker preset
speaker = english_speakers[3]  # the fourth English speaker preset

# Define the text input
text = "BARK can generate speech in different voices. This is an example of a different speaker preset."

# Add the speaker preset to the input
inputs = processor(text, return_tensors="pt", voice_preset=speaker).to(device)

# Generate speech
speech_output = model.generate(**inputs)

# Convert to audio and play it
audio_array = speech_output.cpu().numpy().squeeze()
ipd.display(ipd.Audio(audio_array, rate=sampling_rate))
```

Step 5: Generating Multilingual Speech

BARK supports several languages out of the box. Let's generate speech in different languages:

```python
# Define texts in different languages
texts = {
    "English": "Hello, how are you doing today?",
    "Spanish": "¡Hola! ¿Cómo estás hoy?",
    "French": "Bonjour! Comment allez-vous aujourd'hui?",
    "German": "Hallo! Wie geht es Ihnen heute?",
    "Chinese": "你好！你今天过得怎么样？",
    "Japanese": "こんにちは！今日はお元気ですか？",
}

# Generate speech for each language
for language, text in texts.items():
    print(f"\nGenerating speech in {language}...")

    # Choose the matching voice preset if one is available
    voice_preset = None
    if language == "English":
        voice_preset = "v2/en_speaker_1"
    elif language == "Spanish":
        voice_preset = "v2/es_speaker_1"
    elif language == "German":
        voice_preset = "v2/de_speaker_1"
    elif language == "French":
        voice_preset = "v2/fr_speaker_1"
    elif language == "Chinese":
        voice_preset = "v2/zh_speaker_1"
    elif language == "Japanese":
        voice_preset = "v2/ja_speaker_1"

    # Process the text with the language-specific voice preset if available
    if voice_preset:
        inputs = processor(text, return_tensors="pt", voice_preset=voice_preset).to(device)
    else:
        inputs = processor(text, return_tensors="pt").to(device)

    # Generate speech
    speech_output = model.generate(**inputs)

    # Convert to audio and play it
    audio_array = speech_output.cpu().numpy().squeeze()
    ipd.display(ipd.Audio(audio_array, rate=sampling_rate))

    # Save each language to its own file so earlier outputs aren't overwritten
    filename = f"speech_{language.lower()}.wav"
    write(filename, sampling_rate, audio_array)
    print(f"Audio saved to {filename}")
```

Step 6: Creating a Practical Application: An Audiobook Generator

Let's build a simple audiobook generator that converts paragraphs of text into speech:

```python
import re


def generate_audiobook(text, speaker_preset="v2/en_speaker_2", chunk_size=250):
    """
    Generate an audiobook from a long text by splitting it into chunks
    and processing each chunk separately.

    Args:
        text (str): The text to convert to speech
        speaker_preset (str): The speaker preset to use
        chunk_size (int): Maximum number of characters per chunk

    Returns:
        numpy.ndarray: The generated audio as a numpy array
    """
    # Split the text into sentences
    sentences = re.split(r'(?<=[.!?])\s+', text)

    chunks = []
    current_chunk = ""

    # Group sentences into chunks no longer than chunk_size characters
    for sentence in sentences:
        if len(current_chunk) + len(sentence) < chunk_size:
            current_chunk += sentence + " "
        else:
            chunks.append(current_chunk.strip())
            current_chunk = sentence + " "

    # Add the last chunk if it's not empty
    if current_chunk:
        chunks.append(current_chunk.strip())

    print(f"Split text into {len(chunks)} chunks")

    # Generate audio for each chunk
    audio_arrays = []
    for i, chunk in enumerate(chunks):
        print(f"Processing chunk {i + 1}/{len(chunks)}")

        # Process the text
        inputs = processor(chunk, return_tensors="pt", voice_preset=speaker_preset).to(device)

        # Generate speech
        speech_output = model.generate(**inputs)

        # Convert to audio
        audio_array = speech_output.cpu().numpy().squeeze()
        audio_arrays.append(audio_array)

    # Concatenate the per-chunk audio into one track
    full_audio = np.concatenate(audio_arrays)
    return full_audio
```
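One rough edge of the function above: np.concatenate joins the chunks back to back, which can make sentence boundaries sound clipped. A small optional refinement, not part of the original recipe, is a hypothetical helper that pads a short silence between chunks:

```python
# Hypothetical helper (an addition, not part of the tutorial's function):
# joins the per-chunk audio arrays with a short silence between them.
# The 0.25 s pause length is an arbitrary choice; tune it to taste.
def concatenate_with_pauses(audio_arrays, pause_seconds=0.25):
    pause = np.zeros(int(pause_seconds * sampling_rate), dtype=np.float32)
    pieces = []
    for i, chunk_audio in enumerate(audio_arrays):
        pieces.append(chunk_audio)
        if i < len(audio_arrays) - 1:  # no trailing pause after the final chunk
            pieces.append(pause)
    return np.concatenate(pieces)
```

To try it, replace the `full_audio = np.concatenate(audio_arrays)` line in `generate_audiobook` with `full_audio = concatenate_with_pauses(audio_arrays)`.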
Now let's try the generator on a short excerpt from Alice's Adventures in Wonderland:

```python
# Example usage with a short excerpt from a book
book_excerpt = """Alice was beginning to get very tired of sitting by her sister on the bank, and of having nothing to do. Once or twice she had peeped into the book her sister was reading, but it had no pictures or conversations in it, "and what is the use of a book," thought Alice, "without pictures or conversations?" So she was considering in her own mind (as well as she could, for the hot day made her feel very sleepy and stupid), whether the pleasure of making a daisy-chain would be worth the trouble of getting up and picking the daisies, when suddenly a White Rabbit with pink eyes ran close by her."""

# Generate the audiobook
audiobook = generate_audiobook(book_excerpt)

# Play the audio
ipd.display(ipd.Audio(audiobook, rate=sampling_rate))

# Save the audio file
write("alice_audiobook.wav", sampling_rate, audiobook)
print("Audiobook saved to alice_audiobook.wav")
```

In this tutorial, we've implemented the BARK text-to-speech model using Hugging Face's Transformers library in Google Colab and learned how to:

- Set up and load the BARK model in a Colab environment
- Generate basic speech from text input
- Use different speaker presets for variety
- Create multilingual speech
- Build a practical audiobook generator application

BARK represents an impressive advancement in text-to-speech technology, offering high-quality, expressive speech generation without the need for extensive training or fine-tuning.

Future experimentation that you can try

Some potential next steps to further explore and extend your work with BARK:

- Voice Cloning: Experiment with voice cloning techniques to generate speech that mimics specific speakers.
- Integration with Other Systems: Combine BARK with other AI models, such as language models, to build personalised voice assistants (for example, for restaurants or reception desks), content generation pipelines, or translation systems.
- Web Application: Build a web interface for your TTS system to make it more accessible.
- Custom Fine-tuning: Explore techniques for fine-tuning BARK on specific domains or speaking styles.
- Performance Optimization: Investigate methods to speed up inference for real-time use; a starting point is sketched just after this list. This matters for any production application, because these large, general-purpose models take significant time to process even a small chunk of text.
- Quality Evaluation: Implement objective and subjective evaluation metrics to assess the quality of generated speech.
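As a concrete starting point for the performance item above, here is a minimal sketch of two optimizations Hugging Face documents for BarkModel: half-precision weights and CPU offloading of idle sub-models. Treat it as a sketch under assumptions rather than part of the original tutorial: it assumes a CUDA runtime, and enable_cpu_offload may require a newer Transformers release than the 4.31.0 pinned in Step 1.

```python
# A minimal performance sketch (assumes a CUDA runtime; speedups vary by hardware).
import torch
from transformers import BarkModel

# 1) Load the weights in half precision to cut memory use and inference time
fast_model = BarkModel.from_pretrained("suno/bark", torch_dtype=torch.float16).to("cuda")

# 2) Offload idle sub-models to the CPU between Bark's generation stages
#    (uses the accelerate package installed in Step 1)
fast_model.enable_cpu_offload()

# There is also a distilled checkpoint, "suno/bark-small", that trades some
# quality for a smaller, faster model.
```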
The field of text-to-speech is rapidly evolving, and projects like BARK are pushing the boundaries of what's possible. As you continue to explore this technology, you'll discover even more exciting applications and improvements.

Here is the Colab Notebook.