Real-time Translation Feature
Boundless Flow includes a powerful built-in translation proxy that can call Large Language Model services (such as OpenAI, Ollama, etc.) for high-quality cross-lingual translation. This makes it not only a voice recording tool but also a powerful assistant for cross-lingual communication.
Translation Feature Introduction
After enabling the translation feature, Boundless Flow will translate the recognized speech text into your specified target language in real-time. The translation results will be displayed directly on the interface or injected into your document via the "Cursor-following Injection" feature.
Tip: The translation feature only affects the text that is "displayed/copied/finally injected" and does not change the language parameters of the speech recognition (STT).
Configuring Translation Services
To use the translation feature, you need to configure the following in the settings:
Enable Translation
Check "Enable Translation Output" in the settings.
Select Target Language
Select the language you want to translate into (e.g., English, Japanese, etc.).
Configure API
Enter the base URL of the translation service. For example, OpenAI-compatible APIs usually use https://api.openai.com/v1.
Set Model & Key
Enter the model identifier (e.g., gpt-4o-mini) and your API key. This key is only saved locally.
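The four settings above map onto a standard OpenAI-compatible chat-completions request. The sketch below shows that mapping; the system prompt, function name, and placeholder key are illustrative assumptions, not Boundless Flow's actual internals:

```python
import json

# Hypothetical values mirroring the settings above.
BASE_URL = "https://api.openai.com/v1"   # Translation API base URL
MODEL = "gpt-4o-mini"                    # Model identifier
API_KEY = "sk-placeholder"               # Your key; stored locally only

def build_translation_request(text: str, target_lang: str):
    """Build the URL, headers, and JSON body for an OpenAI-compatible
    chat-completions call that translates `text` into `target_lang`."""
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = {
        "model": MODEL,
        "messages": [
            {"role": "system",
             "content": f"Translate the user's text into {target_lang}. "
                        "Output only the translation."},
            {"role": "user", "content": text},
        ],
    }
    return url, headers, body

url, headers, body = build_translation_request("你好，世界", "English")
print(url)   # https://api.openai.com/v1/chat/completions
print(json.dumps(body, ensure_ascii=False))
```

Any service exposing this request shape (OpenAI, a self-hosted gateway, Ollama's `/v1` endpoint) can therefore be plugged into the same settings.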
Recommended Translation Model (Ollama)
If you want to run translation locally (offline/LAN), a recommended setup is to pull a translation model via Ollama (see Ollama docs; beginner steps: Appendix B):
ollama pull ZimaBlueAI/HY-MT1.5-1.8
Then configure in Boundless Flow settings:
- Translation API Base URL: http://localhost:11434/v1
- Translation Model: ZimaBlueAI/HY-MT1.5-1.8
- Translation API Key: optional for local Ollama
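Because Ollama exposes an OpenAI-compatible `/v1` endpoint, the local setup is the same request shape with only the base URL, model, and key changed. A minimal sketch, assuming the settings listed above:

```python
# Same OpenAI-compatible request, pointed at a local Ollama server.
BASE_URL = "http://localhost:11434/v1"   # Ollama's OpenAI-compatible endpoint
MODEL = "ZimaBlueAI/HY-MT1.5-1.8"        # model pulled via `ollama pull`

url = f"{BASE_URL}/chat/completions"
# No Authorization header is required for a local Ollama instance.
headers = {"Content-Type": "application/json"}
body = {
    "model": MODEL,
    "messages": [
        {"role": "user", "content": "Translate to English: 你好"},
    ],
}
print(url)   # http://localhost:11434/v1/chat/completions
```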
Translation Strategy Selection
Boundless Flow provides two translation strategies to suit different usage scenarios:
- Real-time Translation of Temporary Results: Translates the intermediate (non-final) speech recognition results as they arrive. You see translations sooner, but the frequent requests can add load and consume more API quota.
- Stream Output (Recommended): Uses a streaming API to output translation results. This method performs better when calling local models (like Ollama) and provides a smoother reading experience.
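The two strategies above differ mainly in how many requests they issue and whether the `stream` flag is set. A hedged sketch, assuming an OpenAI-compatible API (the model name and helper are illustrative):

```python
def request_body(text: str, stream: bool) -> dict:
    """Build a chat-completions body; stream=True asks the server to
    send the translation token by token instead of all at once."""
    return {
        "model": "gpt-4o-mini",   # assumed model name
        "stream": stream,
        "messages": [
            {"role": "user", "content": f"Translate to English: {text}"},
        ],
    }

# Strategy 1: one request per intermediate STT hypothesis (more API calls).
hypotheses = ["你", "你好", "你好世界"]
realtime_requests = [request_body(h, stream=False) for h in hypotheses]

# Strategy 2: a single streaming request for the final text.
stream_request = request_body("你好世界", stream=True)

print(len(realtime_requests))   # 3 requests vs. 1 streaming request
```

Streaming keeps the request count low while still letting the UI render partial output, which is why it tends to feel smoother with local models.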
Note: After enabling the translation feature, the "Real-time Output (continuous append, no rollback)" mode will be automatically disabled to avoid input confusion caused by translation delays.
Copyright (c) ZimaBlueAI
齐码蓝智能(大理市)有限责任公司 (Qimalan Intelligence (Dali City) Co., Ltd.)