About the reference guide

Yandex SpeechKit is a multi-platform library for integrating speech functionality into your mobile apps with minimal effort. Its ultimate goal is to give users access to the entire range of Yandex speech technologies.

SpeechKit architecture

The SpeechKit library supports several mobile platforms using the same implementation of the core logic. The platforms differ only in the platform abstraction layer (audio recording, networking, etc.), API wrappers, and platform-specific components such as the GUI implementation. This approach simplifies development for multiple platforms and keeps functionality synchronized across them.

Mobile platforms differ in their culture and development practices. This affects such aspects as naming of classes and methods, object instantiation, error handling, and so on. We try to minimize these differences while also making sure that SpeechKit fits naturally into the ecosystem of each of the supported platforms.

Languages

SpeechKit lets you run speech recognition (on the server side), speech synthesis (on the server side), and voice activation (on the client side).

Supported languages: Russian, English, Turkish, Ukrainian.

Technologies and components

The library contains components for each of the technologies provided, as well as a GUI (for speech recognition) and service components (for initializing internal mechanisms).

Regardless of which component you choose, you must first configure SpeechKit using YSKSpeechKit.

Configuration and initialization

YSKSpeechKit — A class for configuring and managing SpeechKit.

Before using any of the SpeechKit functionality, you must configure it using configureWithAPIKey: or configureWithAPIKey:andLocationProvider:.

By default, SpeechKit uses geolocation to get the current user coordinates. This can improve speech recognition quality in some cases, such as when using the YSKRecognitionModelMaps language model. To disable it, pass nil (instead of an instance of YSKLocationProvider) as the second argument of the configureWithAPIKey:andLocationProvider: method.
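As an illustration, configuration at application startup might look like the following sketch. The class-method form of the configure... selectors, the header import path, and the placeholder API key are all assumptions, not confirmed details.

    // A minimal sketch of configuring SpeechKit before using any other component.
    // The header path, the class-method form, and the API key are assumptions.
    #import <SpeechKit/SpeechKit.h>

    - (void)configureSpeechKit {
        // Default configuration (geolocation enabled):
        [YSKSpeechKit configureWithAPIKey:@"your-api-key"];

        // Or, to disable geolocation, pass nil instead of a YSKLocationProvider:
        // [YSKSpeechKit configureWithAPIKey:@"your-api-key" andLocationProvider:nil];
    }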

YSKInitializer — An interface for controlling the initialization process.

Initialization is the process SpeechKit uses to prepare its internal mechanisms. It may involve lengthy reads from permanent storage or network access, and generally takes a significant amount of time. This is why YSKInitializer was introduced: it lets you perform initialization at a time that is convenient for the user.

In the current implementation, YSKInitializer sends a request to the server (the “startup request”) and gets a set of parameters and configurations in response (such as the audio format or parameters of the active voice detection algorithm), which are then used during speech recognition.

Note. Users do not have to perform initialization explicitly. If it has not yet been done, SpeechKit initializes itself automatically when the first request for speech recognition or synthesis is received. So YSKInitializer is used mainly in order to speed up the execution of the first request.

YSKInitializer uses the YSKInitializerDelegate interface to notify you when it starts and finishes (with or without errors).
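As a sketch, explicit initialization might look like the following. The -init, delegate property, -start, and the delegate selectors shown are hypothetical illustrations of the "started" and "finished" notifications described above, not confirmed signatures.

    // A minimal sketch, assuming the host class adopts YSKInitializerDelegate.
    - (void)warmUpSpeechKit {
        _initializer = [[YSKInitializer alloc] init]; // -init is an assumption
        _initializer.delegate = self;                 // delegate property is an assumption
        [_initializer start];                         // runs the startup request in advance
    }

    // Hypothetical delegate callbacks for the start and finish notifications:
    - (void)initializerDidStart:(YSKInitializer *)initializer {
        NSLog(@"SpeechKit initialization started");
    }

    - (void)initializer:(YSKInitializer *)initializer didFinishWithError:(YSKError *)error {
        if (error != nil) {
            NSLog(@"SpeechKit initialization failed: %@", error);
        } else {
            NSLog(@"SpeechKit initialization finished");
        }
    }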

Speech recognition

The YSKSpeechRecognitionViewController class is an iOS view controller designed to simplify integrating SpeechKit speech recognition into an application. It returns the string uttered by the user and handles any problems that occur along the way, managing the entire recognition process: the speech recognition user interface, the underlying YSKRecognizer and YSKInitializer objects, and so on.
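A sketch of using the view controller might look like this; the plain -init, the delegate property, and the modal presentation are assumptions, since the class may expose a different creation API.

    // Hypothetical sketch: create and present the recognition view controller.
    YSKSpeechRecognitionViewController *recognitionController =
        [[YSKSpeechRecognitionViewController alloc] init]; // -init is an assumption
    recognitionController.delegate = self;                 // delegate property is an assumption
    [self presentViewController:recognitionController animated:YES completion:nil];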

YSKRecognizer — A class for more detailed control of the speech recognition process.

YSKRecognizer is the central speech recognition component in SpeechKit. It is intended for single sessions of speech recognition and manages the entire recognition process, including recording audio, detecting speech activity, communicating with the server, and so on. YSKRecognizer uses the YSKRecognizerDelegate interface to report important events in the recognition process, return recognition results, and report errors.

The recognition result is represented by the YSKRecognition class, which is the N-best list of recognition hypotheses, sorted by confidence in descending order. A recognition hypothesis, in turn, is represented by the YSKRecognitionHypothesis class.

Errors that occur during the recognition process are described by the standard YSKError mechanism.
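The following sketch shows how a single recognition session and its result might be handled. The initWithLanguage:model: initializer appears in the migration example at the end of this guide; the delegate property, -start, the result callback selector, and the property names on YSKRecognition are assumptions.

    // A sketch of one recognition session, assuming the host class adopts
    // YSKRecognizerDelegate.
    - (void)startRecognition {
        _recognizer = [[YSKRecognizer alloc] initWithLanguage:_recognizerLanguage
                                                        model:YSKRecognitionModelQueries];
        _recognizer.delegate = self; // delegate property is an assumption
        [_recognizer start];         // -start is an assumption
    }

    // Hypothetical callback delivering the N-best list of hypotheses:
    - (void)recognizer:(YSKRecognizer *)recognizer
            didCompleteWithResults:(YSKRecognition *)results {
        // Hypotheses are sorted by confidence in descending order,
        // so the first one is the best. The property names are assumptions.
        YSKRecognitionHypothesis *best = results.hypotheses.firstObject;
        NSLog(@"Best hypothesis: %@", best);
    }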

Speech synthesis (text-to-speech)

YSKVocalizer — A class for single sessions of speech synthesis.

YSKVocalizer is the main speech synthesis component in SpeechKit. It manages the entire text-to-speech process, including producing audio, communicating with the server, and so on.

YSKVocalizer uses the YSKVocalizerDelegate interface to report the main events in the speech synthesis process, return synthesis results, and report errors.
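A single synthesis session might look like the sketch below; the initializer, delegate property, and -start are assumptions about the API shape, not confirmed signatures.

    // A hypothetical sketch of one text-to-speech session, assuming the host
    // class adopts YSKVocalizerDelegate.
    - (void)speakGreeting {
        _vocalizer = [[YSKVocalizer alloc] initWithText:@"Hello"]; // initializer is an assumption
        _vocalizer.delegate = self;
        [_vocalizer start]; // requests synthesis from the server and plays the audio
    }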

Voice activation

YSKPhraseSpotter — A class for using voice activation.

YSKPhraseSpotter continuously analyzes an audio stream to detect specific phrases in it. It does not require an internet connection: all computations are performed on the device. To search for phrases in an audio stream, you need a model that contains the pronunciations of these phrases.

To start working with YSKPhraseSpotter, you must specify the model and the object that will receive notifications. After this, you can stop and start phrase detection without re-initialization.
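A sketch of this flow follows; every selector here is an assumption, and only the overall sequence (set the model and the notification target once, then start and stop freely) comes from this guide.

    // A hypothetical sketch of the voice-activation flow. The class-method
    // form and the selector names are assumptions.
    - (void)startSpottingWithModel:(id)model {
        [YSKPhraseSpotter setModel:model];   // model containing phrase pronunciations
        [YSKPhraseSpotter setDelegate:self]; // object that receives notifications
        [YSKPhraseSpotter start];            // begin analyzing the audio stream
    }

    - (void)stopSpotting {
        // Detection can later be restarted without re-initialization.
        [YSKPhraseSpotter stop];
    }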

Migrating from version 2.2 to version 2.5

  1. Replace the constant names for the models:

    YSKRecognitionModelFreeform → YSKRecognitionModelNotes

    YSKRecognitionModelGeneral → YSKRecognitionModelQueries

    Example:

    // Before:
    _recognizer = [[YSKRecognizer alloc] initWithLanguage:_recognizerLanguage
                                                    model:YSKRecognitionModelGeneral];
    // After:
    _recognizer = [[YSKRecognizer alloc] initWithLanguage:_recognizerLanguage
                                                    model:YSKRecognitionModelQueries];
  2. Add the -recognizerDidDetectSpeechEnd: method to the class implementing the YSKRecognizerDelegate protocol.
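
    For reference, a minimal implementation of the new delegate method might look like the following sketch; the parameter and the empty body are placeholders.

    // Minimal sketch of the newly required YSKRecognizerDelegate method.
    - (void)recognizerDidDetectSpeechEnd:(YSKRecognizer *)recognizer {
        // Called when the recognizer detects the end of speech;
        // update the UI or application state here as needed.
    }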