Problem with cordova plugin TTS



  • I’m trying to build an application with Quasar. When I compile it for the web it works like a piece of cake, but when I compile it with Cordova it looks like the plugin isn’t installed …

    I installed the Cordova TTS plugin by running cordova plugin add cordova-plugin-tts in the terminal, as the documentation says: https://www.npmjs.com/package/cordova-plugin-tts

    but when I try to use it, it keeps showing me “TTS is undefined” or “cannot read property ‘speak’ of undefined” when I call window.TTS.

    Here’s some code:

    
    <script>
    export default {
      methods:{
        talkTo: function(){
          try{
            window.plugin.TTS
            .speak('hello, world!').then(function () {
                alert('success');
            }, function (reason) {
                alert(reason);
            });
          }catch(e){
            alert(e)
          }
        },
      recording: function () {
        setTimeout(function () {
          let recognition = new window.webkitSpeechRecognition()
          recognition.lang = 'en-US'
          recognition.start()
          recognition.onresult = function (event) {
            if (event.results.length > 0) {
              let transcription = event.results[0][0].transcript
              vm.audioFunctions(transcription)
            }
          }
        }, 3000)
      }
      }  
    }
    </script>
    

    There are two functions here. The recording() function works fine (using only WebKit’s API, per the docs: https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition), but I don’t know why talkTo() doesn’t find the TTS plugin. Can someone help me, please? This is the last step to get my app to read aloud.



  • Hello, I’ve worked with Cordova’s TTS plugin. To use it I created a boot file called speech.js:

    import { Loading, QSpinnerAudio, QSpinnerBars } from 'quasar'
    export default async ({ Vue }) => {
      Vue.prototype.$speechTalk = (text) => {
        console.log('Status mic', cordova.plugins.diagnostic.permissionStatus.GRANTED)
        return new Promise((resolve, reject) => {
          cordova.plugins.diagnostic.requestMicrophoneAuthorization(function (status) {
            if (status === cordova.plugins.diagnostic.permissionStatus.GRANTED) {
            // console.log('Microphone use is authorized')
              Loading.show({
                delay: 0,
            spinner: QSpinnerAudio,
                backgroundColor: 'amber-8'
              })
              window.TTS.speak({
                text: text,
                locale: 'pt-BR',
                rate: 1
              }, () => {
                Loading.hide()
                setTimeout(() => {
                  resolve(true)
                }, 400)
              }, (reason) => {
                reject(reason)
                // alert(reason)
              })
            }
          }, function (error) {
            reject(error)
            console.error(error)
          })
        })
      }
      Vue.prototype.$speechToText = () => {
        return new Promise((resolve, reject) => {
          let SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition
          let recognition = SpeechRecognition ? new SpeechRecognition() : false
          let text = ''
    
          setTimeout(() => {
        Loading.show({
          // delay: 400,
          spinner: QSpinnerBars,
          backgroundColor: 'amber-9',
          message: 'Waiting for audio',
          messageColor: 'white'
        })
            recognition.lang = 'pt-BR' // this.voiceSelect
            recognition.start()
          }, 400)
    
          recognition.onresult = (event) => {
            let current = event.resultIndex
            // Get a transcript of what was said.
            let transcript = event.results[current][0].transcript
            // Add the current transcript to the contents of our Note.
            // var noteContent += transcript
            text += transcript
            resolve(text)
          }
          recognition.onend = () => {
            text = ''
            Loading.hide()
          }
        })
      }
    }
    
    

    In this boot file I combine the Cordova TTS plugin and cordova-diagnostic-plugin (for permissions).

    In the Vue component I call it this way:

    this.$speechTalk('Hello, Im talking to you!')
            .then(() => {
              console.log('finished')
            })
    

    Don’t forget to declare speech.js in quasar.conf.js.
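
    A sketch of what that registration looks like in quasar.conf.js (Quasar v1 keys; in v0.17 the equivalent entry is under plugins instead of boot):

```javascript
// quasar.conf.js (fragment) - register the speech boot file so Quasar
// loads it before the app starts; the name must match src/boot/speech.js.
module.exports = function (ctx) {
  return {
    boot: [
      'speech' // loads src/boot/speech.js
    ]
    // ...rest of the configuration
  }
}
```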



  • The form above is more optimized and reusable, but basically you can just access window.TTS.speak({}).
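
    A minimal sketch of that direct call (a hypothetical helper, not from the boot file above). Note that window.TTS is only injected by the plugin on the device, after the deviceready event fires; calling it earlier or in the browser produces exactly the “TTS is not defined” error:

```javascript
// Minimal direct use of cordova-plugin-tts. Guard for window.TTS so a
// missing plugin fails loudly instead of throwing on undefined.
function speakNative (text) {
  return new Promise((resolve, reject) => {
    if (!window.TTS) {
      reject(new Error('TTS plugin not available (browser build, or called before deviceready)'))
      return
    }
    window.TTS.speak({ text: text, locale: 'en-US', rate: 1 }, resolve, reject)
  })
}
```

    Call it only after deviceready, e.g. document.addEventListener('deviceready', () => speakNative('hello, world!')).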



  • @patryckx Incredible! Awesome app, dude!
    But I couldn’t make it run in my context. Now it shows me “ReferenceError: cordova is not defined”. Of course I have Cordova in my project, so do you know why this error happens?



  • @luckmenez Are you building a browser (SPA) application or a Cordova (Android and iOS) application?



  • @luckmenez If you are using my boot file to make a Cordova application, you need to install cordova-diagnostic-plugin:

    https://github.com/dpa99c/cordova-diagnostic-plugin



  • @patryckx I’m compiling it as a Cordova application. The recording function works in both the SPA and the Cordova application using WebKit, but I couldn’t get the text-to-speech part working.
    I tried it as you said, but I still hit the same problem …
    It looks like the plugin isn’t being imported into the application (it keeps showing me “TTS is not defined”). I followed the docs: https://v0-17.quasar-framework.org/guide/cordova-plugins.html and also referenced it in config.xml. Am I making these imports wrong?

    PS: I found you on YouTube, nice work, dude 🙂



  • @luckmenez OK, understand that for text-to-speech you will have to toggle: in the SPA you will use the Web API, and in the Cordova app you will have to use the Cordova plugin, since it only works on Android. Does that make sense?
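
    The toggle described above can be sketched roughly like this (an assumed helper, not code from the repository): prefer the native Cordova plugin when window.TTS exists, otherwise fall back to the browser’s speechSynthesis:

```javascript
// Sketch of the SPA/Cordova toggle: use the native Cordova TTS plugin when
// present (Cordova build), otherwise fall back to the Web Speech API
// (speechSynthesis), which is what the SPA build can use.
function speak (text, locale) {
  locale = locale || 'en-US'
  return new Promise((resolve, reject) => {
    if (window.TTS) {
      // Cordova build: plugin object injected after 'deviceready'
      window.TTS.speak({ text: text, locale: locale, rate: 1 }, resolve, reject)
    } else if (window.speechSynthesis) {
      // SPA build: standard browser speech synthesis
      const utterance = new window.SpeechSynthesisUtterance(text)
      utterance.lang = locale
      utterance.onend = resolve
      utterance.onerror = reject
      window.speechSynthesis.speak(utterance)
    } else {
      reject(new Error('No text-to-speech backend available'))
    }
  })
}
```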



  • @patryckx Understood. I’ve often used plugins by following the documentation and they work fine, but when I import and call TTS it keeps showing me “TTS is not defined”. Am I missing something else?



  • @luckmenez To help you, I will make a change to a repository I have on GitHub. As soon as it is available I will message you here.



  • @patryckx Nice, dude, anxious for that! It’s really the last thing I need to finish my project board! Thanks so much for the attention! 🙂



  • @luckmenez Sorry for the delay, I’m moving to another city.
    So, as I mentioned above, I made an example that works in both SPA mode and Cordova mode.

    https://github.com/patrickmonteiro/quasar-speech-to-text-cordova

    To replicate it in your project, pay attention to the boot file I created, called speechCordova.
    Also pay attention to the Cordova plugins installed.
    I also changed some permissions in config.xml inside the src-cordova directory.
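
    For reference, the microphone permission change usually looks something like this in src-cordova/config.xml (a sketch; the exact edit in the repository may differ):

```xml
<!-- src-cordova/config.xml (fragment): grant microphone access on Android.
     A sketch; the repository may declare it differently. -->
<platform name="android">
  <config-file parent="/manifest" target="AndroidManifest.xml">
    <uses-permission android:name="android.permission.RECORD_AUDIO" />
  </config-file>
</platform>
```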

    I am using Quasar version 1.0, so if you use version 0.17, what I call a boot file there is a plugin.





  • I’m making these changes, dude; I’ll give you feedback!
    I really appreciate your help so much, dude 🙂
    PS: I noticed you have a plugin (i18n) but can’t find much about it. Is it important?



  • @luckmenez “Internationalization is a design process that ensures a product (a website or application) can be adapted to various languages and regions without requiring engineering changes to the source code. Think of internationalization as readiness for localization.”

    Docs: https://quasar.dev/options/app-internationalization



  • Same problem here. I’ve looked at the repository above. It helped me a lot and the recognition worked fine. But somehow, when I run “quasar build” and then “cordova build” for the APK and install it on my phone, the phone’s voice isn’t working (only the recognition works as it should).

    This is my first Quasar project; am I missing something?
    @luckmenez did it work in your project?



  • Still haven’t found the answer, so I made a repo on GitHub (a brand-new one, to verify where I went wrong, in a clean project).

    Link: https://github.com/Luckmenez/speech-api

    @patryckx could you give it a check, please?

