Speech Recognition and NodeWebkit

Get help using Construct 2

Post » Mon Jun 10, 2013 10:44 pm

Hello,

Is it possible to use the UserMedia speech recognition plugin with the Node-Webkit export so that FinalTranscript is written directly to a .txt file? I modified the speech recognition example to do this, but the file does not show the results.
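For reference, this is roughly the idea in plain JavaScript (just a sketch; as far as I understand the UserMedia plugin wraps Chromium's webkitSpeechRecognition, and the output path and fs call here are my own assumptions, not anything the plugin exposes):

// Sketch: append each final recognition result to a text file.
// webkitSpeechRecognition is Chromium's prefixed Web Speech API;
// fs is Node's file system module, available in a Node-Webkit export.
var fs = require('fs');
var outPath = 'transcript.txt';          // assumed output file

var rec = new webkitSpeechRecognition();
rec.continuous = true;
rec.interimResults = false;              // only final results matter here

rec.onresult = function (event) {
  for (var i = event.resultIndex; i < event.results.length; i++) {
    if (event.results[i].isFinal) {
      // roughly what the plugin exposes as FinalTranscript
      fs.appendFileSync(outPath, event.results[i][0].transcript + '\n');
    }
  }
};

rec.onerror = function (event) {
  console.log('recognition error: ' + event.error);
};

rec.start();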

Any insight is greatly appreciated!

Thanks!

Post » Tue Jun 11, 2013 5:15 pm

I guess, more to the point: is it possible to access the default microphone input, using UserMedia, through the Node-Webkit exporter? It does not appear to be possible, but I might be doing it wrong.
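A quick way to test this outside of Construct 2 events is something along these lines, run in the exported app (a sketch; webkitGetUserMedia is the prefixed Chromium API, and whether Node-Webkit actually grants the permission is exactly what I'm unsure about):

// Sketch: request the default microphone and log whether it was granted.
navigator.webkitGetUserMedia(
  { audio: true },
  function (stream) {
    console.log('microphone access granted');
    // the stream could now be handed to the Web Speech / Web Audio APIs
  },
  function (err) {
    console.log('microphone access failed:', err);
  }
);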

Post » Sat Sep 14, 2013 4:59 am

Did you ever get anything working? I am playing with annyang.js, which uses webkitSpeechRecognition; it returns true in Node-Webkit, but somewhere along the way the speech is not actually getting through to annyang.
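For anyone comparing notes, this is roughly how I have it wired up (a sketch; annyang.debug, addCommands and start are what the annyang docs describe, but the library is young, so check them against your version):

// Sketch: minimal annyang setup with debug logging on,
// to see whether recognition results ever reach the library.
annyang.debug(true);                 // prints recognition events to the console

annyang.addCommands({
  'hello': function () {
    console.log('annyang heard "hello"');
  }
});

annyang.start();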

I also can't find much besides this post referring to speech recognition.

Post » Tue Sep 17, 2013 7:14 pm

I've had no luck with this. On a related topic, I was hoping to create a very simple "DJ" application that lets the user load .wav files into triggers (or buttons) and then play those sounds back. However, it seems this is not possible, as all sounds must be added before the build process, meaning there is no way to pass audio between Node-Webkit and the Audio plugin in Construct 2.
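If dynamic loading ever becomes essential, the only workaround I can think of is to bypass the Audio plugin entirely and play the file with Chromium's own Audio element (a rough sketch; the sounds folder and the idea of firing it from the Browser plugin's Execute JavaScript action are just my assumptions):

// Sketch: play a .wav at runtime without going through the C2 Audio plugin.
// Assumption: the user drops .wav files into a "sounds" folder next to the
// exported index.html, so nothing needs to be imported at build time.
var player = new Audio('sounds/kick.wav');   // Chromium's HTML5 Audio element
player.play();                               // e.g. fired when a trigger button is pressed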

I put a few posts up about it and got no response, so I assume there is no answer to the problem unless Ashley et al. decide to support dynamic loading of sounds.