How do Interprefy Captions work?

Interprefy provides live captioning of spoken content for your meeting or event in 30+ languages.

As a visual aid for following the speech, the real-time interpretation delivered by professional conference interpreters is transcribed into text through AI-powered Automated Speech Recognition (ASR) technology.

Interprefy Captions are generated from the live audio of each speaker and interpreter, using Automated Speech Recognition (ASR) algorithms powered by Artificial Intelligence (AI). This speech-to-text processing produces captions directly from the words being spoken. Just like interpretation, the captions appear as a live transcription shortly after the speaker has delivered their words.

Interprefy Captions are currently available to selected clients.

Frequently Asked Questions

Are Interprefy Captions powered by machine translation?

No. Interprefy Captions do not use machine translation tools. Instead, they transcribe the translated speech of conference interpreters into text using Automated Speech Recognition (ASR) technology, powered by Artificial Intelligence.

What are the benefits of Interprefy Captions in comparison to machine translation?

Interprefy Captions are used in conferences involving simultaneous interpretation and are in sync with the audio interpretation. Because Interprefy Captions are based on professional live translation by vetted, subject-savvy conference interpreters, the speech is translated with cultural aspects, context, and tone of voice taken into consideration.

Are live captions available for events or meetings without simultaneous interpretation?

No, Interprefy Captions are currently only available for meetings and events with simultaneous interpretation.

Which languages are available?

The 31 languages that can currently be captioned are:

Arabic, Bulgarian, Catalan, Croatian, Czech, Danish, Dutch, English, Finnish, French, German, Greek, Hindi, Hungarian, Italian, Japanese, Korean, Latvian, Lithuanian, Malay, Mandarin, Norwegian, Polish, Portuguese, Romanian, Russian, Slovak, Slovenian, Spanish, Swedish, Turkish

Can a user select to listen to the floor audio language and read captions in a different language?

No, the live captions a user sees are determined by their selected audio language.

What is the delay for captions to appear on screen?

Interprefy Captions can be enabled in two different modes. By default, the text appears within 4 seconds of the speaker completing a sentence. If 'instant mode' is activated, the text appears in real time with instant auto-correction.

Why do I need live captions?

Captions are especially useful for delegates and attendees who are unable to hear what is being said, who prefer to read rather than listen, or who need visual reinforcement. Example users include:

  • Individuals with hearing impairment who can follow the dialogue in written form.
  • People who wish to follow the discussion but are in a location where another conversation is taking place.
  • Individuals in a noisy environment, such as a café, who wish to follow the event even when listening conditions are poor.
  • Those who want a readable feed to back up their understanding of what is being said. For instance, at a chemistry conference where complex formulas are read aloud, a text feed alongside the spoken words can be helpful.
  • Those attending (but not contributing) from areas with poor network connectivity, where audio feeds may be unreliable.

How do I make sure the captions are accurately reflecting the words of the speaker?

The words and terms spoken by the speaker or interpreter are automatically recognized by AI technology. Good source audio quality is essential for the system to recognize the speech: unclear speech, background noise, or built-in computer microphones can jeopardize captioning quality. As with any multilingual meeting, we recommend educating speakers about the importance of high audio quality and clear, precise, well-paced speech. Populating the glossary before the event further supports the accuracy of the transcription. For example, "Shawn" may be transcribed by the AI as "Shaun"; this can be prevented by adding the exact spelling and phonetic pronunciation of the name to the glossary.

What is the pricing model for adding Interprefy Captions to my meeting or event?

Interprefy Captions are available as a cost option for selected clients. Pricing depends on two factors: the number of languages required and the event duration.

Are Interprefy Captions available in an event using the Interprefy Select widget on a third-party platform?

Captions are available on Interprefy Connect, Interprefy Connect Pro, and selected third-party platforms. Please contact an Interprefy representative to discuss availability on your preferred platform.

Are Interprefy Captions available in the Interprefy mobile app?

Interprefy Captions are not yet available in the Interprefy mobile app but are expected to become available later this year.