speechClient

Interface with pretrained model or third-party speech service

Since R2022b

    Description

    Use a speechClient object to interface with a pretrained speech-to-text model, pretrained text-to-speech model, or third-party cloud-based speech services. Use the object with speech2text or text2speech.

    Note

    Using the Emformer and HiFi-GAN pretrained models requires Deep Learning Toolbox™ and Audio Toolbox™ Interface for SpeechBrain and Torchaudio Libraries. You can download this support package from the Add-On Explorer. For more information, see Get and Manage Add-Ons.

    To interface with third-party speech services, you must download the extended Audio Toolbox functionality from File Exchange. The File Exchange submission includes a tutorial to get started with the third-party services.

    Using wav2vec 2.0 requires Deep Learning Toolbox and installation of the pretrained model.

    Creation

    Description

    example

    clientObj = speechClient(name) returns a speechClient object that interfaces with the specified pretrained model or speech service.

    example

    clientObj = speechClient(___,Name=Value) sets Properties using one or more name-value arguments.
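
    For example, this minimal sketch creates a client for the wav2vec 2.0 model, first with default properties and then with a property set at creation (it assumes the pretrained model is installed):

    clientObj = speechClient("wav2vec2.0");
    clientObj = speechClient("wav2vec2.0",Segmentation="word");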

    Input Arguments

    name –– Name of the pretrained model or speech service, specified as "wav2vec2.0", "emformer", "hifigan", "Google", "IBM", "Microsoft", or "Amazon".

    • "wav2vec2.0" –– Use a pretrained wav2vec 2.0 model. You can only use wav2vec 2.0 to perform speech-to-text transcription, and therefore you cannot use it with text2speech.

    • "emformer" –– Use a pretrained Emformer model. You can only use Emformer to perform speech-to-text transcription, and therefore you cannot use it with text2speech.

    • "hifigan" –– Use a pretrained HiFi-GAN/Tacotron2 model. You can only use HiFi-GAN/Tacotron2 to perform text-to-speech synthesis, and therefore you cannot use it with speech2text.

    • "Google" –– Interface with the Google® Cloud Speech-to-Text and Text-to-Speech service.

    • "IBM" –– Interface with the IBM® Watson Speech to Text and Text to Speech service.

    • "Microsoft" –– Interface with the Microsoft® Azure® Speech service.

    • "Amazon" –– Interface with the Amazon® Transcribe and Amazon Polly services.

    Using the Emformer and HiFi-GAN pretrained models requires Deep Learning Toolbox and Audio Toolbox Interface for SpeechBrain and Torchaudio Libraries. If this support package is not installed, then the function provides a link to the Add-On Explorer, where you can download and install the support package.

    Using the wav2vec 2.0 pretrained model requires Deep Learning Toolbox and installing the pretrained wav2vec 2.0 model. If the model is not installed, calling speechClient with "wav2vec2.0" provides a link to download and install the model.

    To use any of the third-party speech services (Google, IBM, Microsoft, or Amazon), you must download the extended Audio Toolbox functionality from File Exchange. The File Exchange submission includes a tutorial to get started with the third-party services.

    Data Types: string | char
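
    For example, each name creates a client for the corresponding backend (a sketch; the pretrained models require their support packages, and the cloud services require the File Exchange functionality and valid account credentials):

    sttClient = speechClient("emformer");   % pretrained speech-to-text model
    ttsClient = speechClient("hifigan");    % pretrained text-to-speech model
    cloudClient = speechClient("Google");   % third-party cloud service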

    Output Arguments

    clientObj –– Client object to use with speech2text to transcribe speech in audio signals to text, or with text2speech to synthesize speech signals from text.

    Properties

    Segmentation –– Segmentation of the output transcript, specified as "word", "sentence", or "none".

    This property applies only to the wav2vec 2.0 pretrained model and the Amazon speech service.

    • "word"speech2text returns the transcript as a table where each word is in its own row. This is the default for the wav2vec 2.0 pretrained model.

    • "sentence"speech2text returns the transcript as a table where each sentence is in its own row. The wav2vec 2.0 pretrained model does not support this option.

    • "none"speech2text returns a string containing the entire transcript. This is the default for the Amazon speech service.

    Data Types: string | char
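
    For example, this sketch returns the whole transcript as one string by setting Segmentation to "none" (it assumes the wav2vec 2.0 model is installed; audioIn and fs stand in for your own audio data):

    clientObj = speechClient("wav2vec2.0",Segmentation="none");
    transcript = speech2text(audioIn,fs,Client=clientObj);  % returns a single string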

    ExecutionEnvironment –– Hardware resource for execution, specified as one of these values:

    • "cpu" — Use the CPU.

    • "gpu" — Use the GPU. This option requires Parallel Computing Toolbox™.

    This property applies only to the Emformer and HiFi-GAN pretrained models.

    Data Types: string | char
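
    For example (a minimal sketch that assumes a supported GPU and Parallel Computing Toolbox are available):

    clientObj = speechClient("emformer",ExecutionEnvironment="gpu");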

    Beam width for speech-to-text beam search decoding, specified as a nonnegative integer. A higher beam width means the decoder keeps track of more text hypotheses at each time step, which can make predictions more accurate at the cost of additional computation.

    This property applies only to the Emformer pretrained model.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
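
    As an illustration only, setting the beam width at creation might look like the following sketch; the property name BeamWidth is a hypothetical placeholder, since this description does not state the actual name:

    % BeamWidth is a hypothetical placeholder for the beam-width property name
    clientObj = speechClient("emformer",BeamWidth=20);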

    TimeStamps –– Include timestamps of transcribed speech in the transcript, specified as true or false. If you specify TimeStamps as true, speech2text includes an additional column in the transcript table that contains the timestamps. When using the wav2vec 2.0 pretrained model, the speech2text function determines the timestamps using the algorithm described in [2].

    This property applies only if you set the Segmentation property to "word" or "sentence".

    Data Types: logical
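
    For example, this sketch requests word-level timestamps from the wav2vec 2.0 model (assuming the model is installed; audioIn and fs stand in for your own audio data):

    clientObj = speechClient("wav2vec2.0",Segmentation="word",TimeStamps=true);
    transcript = speech2text(audioIn,fs,Client=clientObj);  % transcript gains a timestamp column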

    Connection timeout, specified as a nonnegative scalar in seconds. The timeout specifies the time to wait for the initial server connection to the third-party speech service.

    This property applies only to the third-party speech services.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
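
    As a sketch, assuming valid service credentials and that the property is named ConnectionTimeout (an assumption; this description does not state the name):

    clientObj = speechClient("Google",ConnectionTimeout=10);  % wait up to 10 seconds to connect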

    Object Functions

    reset –– Reset states for streaming-enabled speech clients
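
    For example, to reuse a streaming client on a new signal, reset its internal state first (a minimal sketch):

    clientObj = speechClient("emformer");
    reset(clientObj)  % clear the internal streaming state before transcribing a new signal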

    Note

    For the third-party speech services, you can configure server-specific options using the following functions. See the documentation for the specific service for option names and values.

    setOptions –– Set server options
    getOptions –– Get server options
    clearOptions –– Remove all server options
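
    As a sketch only (both the calling pattern and the option name here are assumptions; see your service's documentation for valid names and values):

    clientObj = speechClient("IBM");
    setOptions(clientObj,"model","en-US_BroadbandModel")  % hypothetical option name and value
    getOptions(clientObj)    % inspect the current server options
    clearOptions(clientObj)  % remove all server options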

    Examples

    Download and install the pretrained wav2vec 2.0 model for speech-to-text transcription.

    Type speechClient("wav2vec2.0") into the command line. If the pretrained model for wav2vec 2.0 is not installed, the function provides a download link. To install the model, click the link to download the file and unzip it to a location on the MATLAB path.

    Alternatively, execute the following commands to download the wav2vec 2.0 model, unzip it to your temporary directory, and then add it to your MATLAB path.

    % Download the pretrained wav2vec 2.0 model from MathWorks support files
    downloadFile = matlab.internal.examples.downloadSupportFile("audio","wav2vec2/wav2vec2-base-960.zip");
    % Unzip the model into a folder in the temporary directory
    wav2vecLocation = fullfile(tempdir,"wav2vec");
    unzip(downloadFile,wav2vecLocation)
    % Add the model folder to the MATLAB path so speechClient can find it
    addpath(wav2vecLocation)

    Verify the installation by typing speechClient("wav2vec2.0") at the command line. If the model is installed, the function returns a Wav2VecSpeechClient object.

    speechClient("wav2vec2.0")
    ans = 
      Wav2VecSpeechClient with properties:
    
        Segmentation: 'word'
          TimeStamps: 0
    
    

    Create a speechClient object that uses the Emformer pretrained model.

    emformerSpeechClient = speechClient("emformer");

    Create a dsp.AudioFileReader object to read in an audio file. In a streaming loop, read in frames of the audio file and transcribe the speech using speech2text with the Emformer speechClient. The Emformer speechClient object maintains an internal state to perform the streaming speech-to-text transcription.

    afr = dsp.AudioFileReader("Counting-16-44p1-mono-15secs.wav");
    txtTotal = "";
    while ~isDone(afr)
        x = afr();                                                       % read one frame of audio
        txt = speech2text(x,afr.SampleRate,Client=emformerSpeechClient); % transcribe the frame
        txtTotal = txtTotal + txt;                                       % append to the transcript
    end
    
    txtTotal
    txtTotal = 
    "one two three four five six seven eight nine"
    

    Create a speechClient object that uses the HiFi-GAN/Tacotron2 pretrained model. Set ExecutionEnvironment to "gpu" to use the GPU when running the model.

    hifiganSpeechClient = speechClient("hifigan",ExecutionEnvironment="gpu");

    Call text2speech on a string of text with the HiFi-GAN/Tacotron2 speechClient object to synthesize the speech signal.

    [x,fs] = text2speech("hello world",Client=hifiganSpeechClient);

    Listen to the synthesized speech.

    sound(x,fs)

    References

    [1] Baevski, Alexei, Henry Zhou, Abdelrahman Mohamed, and Michael Auli. “Wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations,” 2020. https://doi.org/10.48550/ARXIV.2006.11477.

    [2] Kürzinger, Ludwig, Dominik Winkelbauer, Lujun Li, Tobias Watzel, and Gerhard Rigoll. “CTC-Segmentation of Large Corpora for German End-to-End Speech Recognition.” In Speech and Computer, edited by Alexey Karpov and Rodmonga Potapova, 12335:267–78. Cham: Springer International Publishing, 2020. https://doi.org/10.1007/978-3-030-60276-5_27.

    Version History

    Introduced in R2022b