Does anyone have sample code for this, or know if it's possible? I love the service, but recording the audio first and then sending it for processing takes a minimum of 5 seconds, which is too long for real-world use.
Using the service found here https://www.ibm.com/smarterplanet/us/en/ibmwatson/developercloud/doc/speech-to-text/websockets.shtml
The latest release of the Watson Developer Cloud ios-sdk supports continuous listening and will stream audio from your microphone to the service over WebSockets.
Take a look at this example.
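If you are not on iOS, the same WebSocket interface can also be driven directly. Below is a rough TypeScript sketch of that flow; the endpoint URL and the way the access token is passed are assumptions (they have changed over time), so check the current Speech to Text docs for the exact values.

```typescript
// Rough sketch of streaming audio to the Watson Speech to Text WebSocket
// interface. URL format and token handling below are placeholders /
// assumptions; verify them against the current service documentation.
const token = "<access token>"; // assumption: obtained from IAM elsewhere
const url =
  `wss://api.us-south.speech-to-text.watson.cloud.ibm.com/instances/<id>` +
  `/v1/recognize?access_token=${token}`;

const ws = new WebSocket(url);
ws.binaryType = "arraybuffer";

ws.onopen = () => {
  // Describe the audio we are about to send and ask for interim results
  // so transcripts arrive while the user is still speaking.
  ws.send(JSON.stringify({
    action: "start",
    "content-type": "audio/l16;rate=16000",
    interim_results: true,
  }));
};

ws.onmessage = (event) => {
  const msg = JSON.parse(event.data as string);
  if (msg.results && msg.results.length > 0) {
    console.log(msg.results[0].alternatives[0].transcript);
  }
};

// Call this with raw PCM chunks as they come off the microphone.
function sendChunk(chunk: ArrayBuffer): void {
  ws.send(chunk);
}

// When done, tell the service to finish the current utterance.
function stop(): void {
  ws.send(JSON.stringify({ action: "stop" }));
}
```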
I would like to build an app with Flutter that can read data from a Polar H9 or H10 heart rate monitor (Bluetooth).
I must say that I am a bit lost and don't see how to go about it.
I would appreciate some clues and a little guidance for it.
Thank you very much!
Best Regards
The best way to start with any new BLE device is to look for any kind of documentation describing the process of connecting and accessing the data. I found the official Polar SDK on GitHub. It's written for either iOS (Swift) or Android (Java) but might help you anyway.
In case it is not possible to use native libraries with Flutter, you should start by using a generic BLE scanner such as nRF Connect to find out more about the device and the available BLE services. It could be that nRF Connect already recognizes the Heart Rate Service, if Polar uses the official service structure. If that's the case, you need to read the specification of the Heart Rate Service (found here).
If they decided to use a custom service or protocol, you could use nRF Connect to try to find out something about the device by sending some messages. This is a tedious process of trial and error. A BLE sniffer might also help to collect data about existing communication between the sensor and a device.
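Whatever you end up writing it in (Dart for Flutter, or something native), the standard Heart Rate Measurement characteristic (UUID 0x2A37) is decoded the same way. Here is a rough sketch of that decoding logic, written in TypeScript purely for illustration:

```typescript
interface HeartRateMeasurement {
  heartRate: number;     // beats per minute
  rrIntervals: number[]; // seconds between beats, useful for HRV
}

// Decode the standard Heart Rate Measurement characteristic (UUID 0x2A37).
// Layout: a flags byte, then the heart rate as uint8 or uint16 depending on
// bit 0 of the flags, optionally a uint16 energy-expended field, then zero
// or more RR-intervals in units of 1/1024 s (all little-endian).
function parseHeartRateMeasurement(data: DataView): HeartRateMeasurement {
  const flags = data.getUint8(0);
  const hr16Bit = (flags & 0x01) !== 0;
  const energyExpendedPresent = (flags & 0x08) !== 0;
  const rrPresent = (flags & 0x10) !== 0;

  let offset = 1;
  const heartRate = hr16Bit
    ? data.getUint16(offset, true)
    : data.getUint8(offset);
  offset += hr16Bit ? 2 : 1;

  if (energyExpendedPresent) {
    offset += 2; // skip the energy expended field
  }

  const rrIntervals: number[] = [];
  if (rrPresent) {
    while (offset + 1 < data.byteLength) {
      rrIntervals.push(data.getUint16(offset, true) / 1024); // to seconds
      offset += 2;
    }
  }
  return { heartRate, rrIntervals };
}
```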
I'm building a progressive web application (target is smart phones for now). The app needs to be able to access heart rate and heart rate variability, ideally in real-time. While it seems totally asinine, I'm open to using REST calls to some remote server if that is the only way. I'm also fine with restricting the app to only work with certain hardware if necessary. In this case, the ideal hardware would be some sort of earbud that uses optics to scan for heart rate, but at this point, I'm open...
The best that I have thought up is to find a heart rate monitor that converts the direct signal into audio and use the microphone web API. That seems like a lot more work than ideal, so I'm hoping someone has a better idea. Any ideas are welcome. Please, no one downvote anyone if it doesn't solve all my constraints. I've been working on this for a bit and I'm not sure that there is a clean and perfect solution yet. Thanks in advance!
If the sensor can speak Bluetooth, the Web Bluetooth API can perhaps help: https://developer.mozilla.org/en-US/docs/Web/API/Web_Bluetooth_API
https://developers.google.com/web/updates/2015/07/interact-with-ble-devices-on-the-web
How about using the Web Bluetooth API, which lets you control any Bluetooth Low Energy device such as a heart rate monitor? It can read the Body Sensor Location characteristic (which tells you where on the body the sensor is worn) and subscribe to notifications from the Heart Rate Measurement characteristic, meaning you will get an event whenever the device performs a new measurement. Then use a service worker to define the behavior of the app and mimic native app capabilities like offline support and notifications.
It ties in with the Physical Web: you can broadcast a link to your website from a Bluetooth beacon to a user's device, and with a PWA that link can point to a web app that looks, feels and functions like a native app. Then, with Web Bluetooth, you can speak to the device. Visit this blog post for more details.
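For reference, a minimal sketch of that Web Bluetooth flow in TypeScript (assuming the sensor exposes the standard GATT Heart Rate service; the types come from @types/web-bluetooth, and error handling is omitted):

```typescript
// Connect to a BLE heart rate monitor from the browser and subscribe to
// heart rate notifications via the Web Bluetooth API. Must be triggered by
// a user gesture (e.g. a button click) and served over HTTPS.
async function connectHeartRateMonitor(): Promise<void> {
  const device = await navigator.bluetooth.requestDevice({
    filters: [{ services: ["heart_rate"] }],
  });
  const server = await device.gatt!.connect();
  const service = await server.getPrimaryService("heart_rate");

  // Where the sensor is worn (chest, wrist, ear, ...).
  const location = await service.getCharacteristic("body_sensor_location");
  const locationValue = await location.readValue();
  console.log("sensor location code:", locationValue.getUint8(0));

  // Subscribe to heart rate measurements.
  const hrm = await service.getCharacteristic("heart_rate_measurement");
  hrm.addEventListener("characteristicvaluechanged", (event) => {
    const value = (event.target as BluetoothRemoteGATTCharacteristic).value!;
    const flags = value.getUint8(0);
    const hr = flags & 0x01 ? value.getUint16(1, true) : value.getUint8(1);
    console.log("heart rate:", hr, "bpm");
  });
  await hrm.startNotifications();
}
```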
I want to have my application receive notifications without forcing the user to be logged in or authenticated. Thank you for your time!
Well, I'm using OneSignal to send notifications to the users of my Android app and it's working fine. Moreover, it uses FCM as its base, so that shouldn't be a problem either. You can segment users the way you want and send notifications to one or many - up to you. It's all free and very easy to set up the basics. It's here: OneSignal
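Pushes are then triggered from your own backend through OneSignal's REST API, addressed to segments rather than to logged-in users. A rough TypeScript sketch; the app ID, API key and segment name are placeholders, so check OneSignal's current REST API docs for the exact endpoint and auth scheme:

```typescript
// Rough sketch of sending a push to a segment of users through OneSignal's
// REST API from your own backend. No user login is required on the device
// side; OneSignal identifies subscribed devices itself.
async function notifySegment(message: string): Promise<void> {
  const response = await fetch("https://onesignal.com/api/v1/notifications", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Basic <REST_API_KEY>", // placeholder
    },
    body: JSON.stringify({
      app_id: "<ONESIGNAL_APP_ID>",            // placeholder
      included_segments: ["Subscribed Users"], // or a segment you defined
      contents: { en: message },
    }),
  });
  console.log(await response.json());
}
```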
You can probably use Socket.IO for your case to send messages from one device to another.
There are several good Socket.IO-style realtime frameworks you can consider for this requirement:
SignalR in Android
SignalR is a library used to add real-time web functionality to your applications. SignalR uses transports such as:
WebSockets
Server-Sent Events (EventSource)
forever frames
long polling
SignalR is capable of selecting the best of these four transports depending on your connection and on what the client and server support.
SignalR is used in applications such as:
chat applications
stock market applications
real-time gaming
Native Socket.IO
Socket.IO provides an event-oriented API that works across all networks, devices and browsers. It's incredibly robust (it works even behind corporate proxies!) and highly performant, which makes it very suitable for multiplayer games or realtime communication.
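If you go the Socket.IO route, the basic pattern of pushing a message from a server to every connected device, with no user login involved, looks roughly like the sketch below (TypeScript, using the socket.io and socket.io-client packages). On Android you would use the Socket.IO Java client instead, but the idea is the same.

```typescript
// server.ts -- push events to every connected client, no login required
import { Server } from "socket.io";

const io = new Server(3000, { cors: { origin: "*" } });

io.on("connection", (socket) => {
  console.log("client connected:", socket.id);
});

// Broadcast a notification to all connected clients every 10 seconds.
setInterval(() => {
  io.emit("notification", { text: "hello from the server", at: Date.now() });
}, 10_000);
```

```typescript
// client.ts -- receive the pushed events
import { io } from "socket.io-client";

const socket = io("http://localhost:3000");
socket.on("notification", (payload: { text: string; at: number }) => {
  console.log("got notification:", payload.text);
});
```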
I'm trying to help the people behind this open radio station: radioqk.org. However, I'm quite new to the topic of streaming and radio servers. I'm quite surprised that everything I have found is desktop software clients (e.g. Sam broadcaster, Butt, Radittcast, DarkSnow...). However, they are confusing to configure, so we are trying to embed the broadcasting into their website to make it easier to stream from any part of the world to any streaming server (e.g. giss.tv, caster.fm, listen2myradio.com...).
I have read that it's not possible at the moment, because there is no way to make a streaming HTTP PUT request.
However, if I have understood correctly, it is possible with liquidsoap.fm because its server supports the webcast.js protocol, using the following code: https://github.com/webcast/webcaster
On the other hand, I have searched for PHP code able to record from the microphone and store the audio on the server. Or maybe it's about HTML5 and its new getUserMedia() function? It seems it was difficult a few months ago, but now it is possible, so:
Is there any live-streaming service with the client integrated so it can record from the user's computer microphone / sound card? I mean, is there a similar service like giss.tv able to record from the user's computer microphone / sound card?
If I'm right, Icecast is the most common open-source implementation of radio streaming. Is there any implementation of Icecast able to record from the user's computer microphone / sound card?
By the way, the idea is to integrate it into a WordPress site. That's why I have based the search on PHP (I have not found a WordPress plugin able to solve this problem). However, it could be done in another language / on another server and embedded into WordPress afterwards.
Finally, a workaround could be the approach in the following article, which talks about including on the website a hyperlink to a Java-coded VNC viewer to take a desktop application to the web in 15 minutes. On the VNC server side would run any of the desktop clients we have talked about above.
Any light on this topic? I'm quite confused about which path I should take...
I have read that it's not possible at the moment, because there is no way to make a streaming HTTP PUT request.
That's correct. In the very near future we'll have Streams support in the Fetch API, which gets around this issue. In the meantime, it isn't possible directly.
As I mentioned in the post you linked to, you can use a binary WebSocket connection. That's what Liquidsoap's webcast.js uses... a binary WebSocket, and a server that supports it. Liquidsoap speaks its own protocol, so you can use this to then stream to a server like Icecast.
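To give an idea of the browser side, here is a rough TypeScript sketch of capturing the microphone and pushing encoded chunks over a binary WebSocket. It only shows the general shape: the real webcast.js library adds its own handshake and metadata messages on top, so in practice you would use it rather than reinventing the protocol. The WebSocket URL is a placeholder.

```typescript
// Capture the microphone, encode with MediaRecorder, and push binary chunks
// over a WebSocket to a server that accepts them (e.g. Liquidsoap with
// webcast support). URL below is a placeholder.
async function startWebcast(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ws = new WebSocket("wss://example.com/webcast"); // placeholder
  ws.binaryType = "arraybuffer";

  const recorder = new MediaRecorder(stream, {
    mimeType: "audio/webm;codecs=opus",
  });
  recorder.ondataavailable = async (event) => {
    if (event.data.size > 0 && ws.readyState === WebSocket.OPEN) {
      ws.send(await event.data.arrayBuffer()); // one encoded chunk per message
    }
  };

  ws.onopen = () => recorder.start(250); // emit a chunk roughly every 250 ms
}
```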
Is there any live-streaming service with the client integrated so it can record from the user's computer microphone / sound card?
I run the AudioPump Web Encoder, which acts as a go-between for web-based clients and your servers. The web-based client can be configured via the URL, so the users don't need to do anything. This might meet your needs.
If I'm right, Icecast is the most common open-source implementation of radio streaming. Is there any implementation of Icecast able to record from the user's computer microphone / sound card?
Yes, Icecast is a popular open-source server. But the server itself cannot, and should not, be what records the audio. You wouldn't want to run the server in the same place as you're doing the encoding... otherwise you'd need all your bandwidth there. The encoder is separate from the server so that you can encode in one place, upload one stream, and then have the server distribute it to thousands of listeners.
I am currently writing an iPhone application that sends and receives JSON data from a remote server, essentially to display realtime information. My partner and I started the project using Google App Engine (Python) for the server-side implementation, mostly because it was easy to pick up and seemed suitable for our needs at the time. However, we're only just now starting to see the downsides of the framework for realtime iPhone apps - APNS is not supported at all, and neither is the GAE Channels API. So our only option for displaying the realtime server data in our app is to continuously poll the server, which certainly seems like horrible design.
We'll have to port our server-side code to a new framework. My question is, which one do we use? From numerous searches, I have yet to find a satisfactory answer.
I should mention that I don't necessarily want the server to send push notifications. I just want to be able to push data to clients in real-time, and then manipulate that data on the iPhone client-side code. We're fine with setting up the framework on a local server if we have to.
Since you don't want to pull data in the background (let alone whether that is even possible), you will have to use APNS.
But why switch away from Google App Engine? You could use an APNS provider like Urban Airship, which provides its own API to connect with.
You're not even the first to run into this problem: Apple Push Notifications on Google Appengine
Probably the easiest realtime framework you can use for sending data to iOS clients in real-time is PubNub (http://www.pubnub.com). It's reasonably priced, and it scales to anything you can throw at it. In my experience, it has no problem delivering a message to an end client in under .25 milliseconds (regardless of the number of clients it's being sent to).
Their latest version also supports APNS functionality for when your app isn't in the foreground.
https://github.com/pubnub/objective-c/blob/master/iOS/README_FOR_APNS.md
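For reference, the publish/subscribe pattern with PubNub's JavaScript SDK looks roughly like this (a TypeScript sketch; the keys and channel name are placeholders, the exact constructor options depend on the SDK version, and the iOS client does the same thing through the Objective-C SDK):

```typescript
import PubNub from "pubnub";

// Keys below are placeholders from your PubNub dashboard.
const pubnub = new PubNub({
  publishKey: "<pub-key>",
  subscribeKey: "<sub-key>",
  userId: "server-1",
});

// Server side: push a realtime update to everyone listening on the channel.
async function pushUpdate(data: object): Promise<void> {
  await pubnub.publish({ channel: "realtime-updates", message: data });
}

// Client side: receive updates as they are published.
pubnub.addListener({
  message: (event) => console.log("update:", event.message),
});
pubnub.subscribe({ channels: ["realtime-updates"] });
```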
If you want to create your own APNS server (since you are running on App Engine anyway), there are examples of how to do that using App Engine's new Socket API. I've written a demo python AppEngine application that people might find helpful in this regard.
https://github.com/GarettRogers/appengine-apns-gcm