Movesense with Unity BLE plugin - unity3d

I am trying to get the Movesense to work with a Unity BLE asset, as I originally thought the Movesense would be simple enough to integrate. I have managed to connect to it and subscribed to the service starting with "61353090-" and the characteristic starting with "34802252-". I think I even got some notifications. The problem is that I am not receiving, or at least not able to decode, any data from there.
I also ended up reading the example code and discovered the complex system the Movesense uses, the "whiteboard", which I am unfamiliar with. I cannot find anything sensible by googling, as a whiteboard is a whiteboard :)
Now my questions are:
What should I do to progress? Do I need to write something to the "17816557"?
What is the "whiteboard" actually?
Would it actually be smarter to just make a Unity plugin for the Movesense?
Thank you

You are quite right that the answer is in the "Whiteboard" component. Whiteboard is the embedded REST framework (note: it is not over HTTP!) that Movesense uses to implement REST services both within a device and between devices (e.g. over UART or BLE). As you can imagine, it is not a simple component, so decoding the traffic without Amersports'/Suunto's help is quite a big challenge. The actual BLE layer is simple: one characteristic for each direction (write & notify); the complexity lies in what goes inside that data pipe.
However, if you are trying to use Unity to make a mobile app, the situation is not so bad. There has been a prototype of a Movesense mobile library integration for Unity (Android) that uses the existing Movesense mobile library. If you ask the Movesense team (info (at) movesense.com), they might be able to help you further. For Windows (Unity or plain) nothing has been done (at least not yet), mainly because until Windows 10 there was no official BLE API for Windows.
Full disclosure: I work for the Movesense team

Related

How do I stream a webcam from a Raspberry Pi over WebRTC without opening a browser?

In a client to client situation I would use Chrome's navigator APIs to get a MediaStream and add its tracks to my RTCPeerConnection.
But I would prefer not to open a Chrome window on my Raspberry Pi to do this.
Are there any easy-to-use bindings in Python or Node.js to get a MediaStream and send it to a WebRTC peer?
If you only want to use WebRTC without your own customization, you can try uv4l or rpi-webrtc-streamer. They provide a WebRTC solution with built-in signalling over WebSockets.
If you want to use WebRTC with your own signalling, you can proxy over the built-in signalling. Other solutions like aiortc, Node PeerConnection, or node-webrtc may not be as easy to use or configure.
My approach would rather be ChromeDriver, which can run headless (mainly used for automated UI testing). You can start the browser from the command line and give it arguments like --headless and more. Pretty sure there are some nice Python libraries to do so. But maybe that's the wrong approach here.
There is a similar question already asked, where different approaches were suggested.
If you want a Python implementation of WebRTC, give aiortc a try. It features support for audio, video and data channels and builds upon Python's asyncio framework.
But do check out the answers there; hope it helps!
Sounds like Alohacam will do what you're looking for. It uses WebRTC to provide a real-time stream from your Raspberry Pi camera to any web browser (Chrome, Firefox, Safari, Opera, Edge, iOS, and Android). It also includes TURN support (no need to bring your own TURN relay). If you don't mind a closed-source solution that "just works", it may help. (Full disclosure: I'm one of the authors -- would love to hear how it works out for you.)

Can we read heart beat data from Wear OS using a Raspberry Pi?

I have purchased a Ticwatch, which runs Android Wear OS. I want to read the heart beat data from the device over Bluetooth using a Raspberry Pi. I found no resources on how to do so, but I did find a tutorial that does it with a Polar H7. Link below:
https://github.com/danielfppps/hbpimon
But the same approach does nothing with the Ticwatch's Wear OS.
Can anyone tell me if this is even possible?
I haven't done this myself - it's quite likely that nobody has, it's a real corner case - but I have no doubt that it's doable.
Getting the heart rate data on Wear is pretty easy; there's an API to do just that. Here's a SO Q&A with some basic code to do so: How to read Heart rate from Android Wear
Transferring that data to your RasPi is going to be more work, but it's still eminently possible. Both devices support a full Bluetooth stack, but there's no simple API for this, so you'll have to build this piece more-or-less from scratch. On the Android side, a good starting point is Google's Bluetooth Chat sample: https://github.com/googlesamples/android-BluetoothChat
In summary: Anything's possible. Many things are difficult.
I ended up creating my own app on Wear OS. Thanks for all help.

WebChat via WebRTC

We are currently in the middle of a large infrastructure rebuild. We are replacing everything from the CRM to the ERP to the CTI.
We have decided to use WebRTC for the CTI. After working with WebRTC for a bit I really see the promise in this technology and started to think that maybe this is the way we want to go for our Webchat as well.
The premise behind this is to be able to add Voice / Video and Screensharing to our chat feature at some point in time.
Since WebRTC is not supported in Safari, IE, Edge, etc., I am thinking we may be just slightly ahead of ourselves in using WebRTC for text chat.
One thought would be to build it all on WebRTC, detect whether the browser supports it, and fall back to XMPP etc. where it doesn't.
I have been researching this on my own and have found some options out there like talky.io, but in this rebuild we are focusing on not having any third parties involved in our applications (we have had a couple go bye-bye with no warning).
Is there a framework / library / open source project out there that tackles part or all of this task?
Is this task as daunting as I think it is going to be or am I overreacting?
Am I crazy, should be locked in a padded room and use an existing chat service?
Talky is built on top of https://github.com/legastero/stanza.io, which includes a Jingle/WebRTC module.
Take a look at the Jitsi project (specifically Jitsi Meet). A public version is running at meet.jit.si that you can try out; it uses WebRTC for the voice/video and Jingle/XMPP for the signaling. It is all open source, so you can be sure you won't lose access if the company goes under or something else bad happens. The Jitsi team runs it using the Prosody XMPP server; they make a good combination.

Control Camera desktop application using Gyroscope of Android smartphone

For a project at my university I need to create a Unity3D application on my laptop, in which the camera is stationary and can be controlled to rotate in any direction using the gyroscope of my Android smartphone (Nexus 5), wirelessly or through a USB cable.
I've looked at the possibility of OSC or the Unity Remote 5 App, but up till now haven't found a way that works in order to obtain this result.
Any help or advice would be hugely appreciated - I don't have much experience yet with all this.
Thanks!
If I was going to do this, I would use UNET (Unity's built-in multiplayer networking API) and have the rotation sync over LAN.
On the camera I would have a Network Transform and a script to control its rotation based on accelerometer input.
The version on the phone would be the authority and sync its rotation over the network to the client on the laptop.
Pros: Wireless, fast (over wifi), very little code required to make it work, lots of documentation available online.
Cons: Relies totally on your network situation, and you will have to do a lot of trial and error to get a smooth (not jerky) experience, I think.
As for getting the tilt input on the device, Unity have a great tutorial here: https://unity3d.com/learn/tutorials/topics/mobile-touch/accelerometer-input
It's pretty straight forward.
Sounds like a fun project, good luck!
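For reference, the tilt math behind that accelerometer-driven rotation script boils down to a couple of atan2 calls. This is a sketch of the formula in Python rather than Unity C#, assuming the phone is mostly measuring gravity (held still or low-pass filtered):

```python
# Derive camera pitch/roll angles (degrees) from an accelerometer reading.
# Axis conventions here follow the usual Android layout (x right, y up the
# screen, z out of the screen); adjust signs for your own rig.
import math


def tilt_angles(ax: float, ay: float, az: float) -> tuple:
    """Return (pitch, roll) in degrees for a gravity vector (ax, ay, az)."""
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll
```

In Unity you would feed `Input.acceleration` through the same formula (or let the network-synced rotation do it on the phone side, as suggested above).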
It's possible to do this via cable, WiFi, Bluetooth 2.0 or Bluetooth 4.0 (BLE). I have implemented what you need in WiFi and Bluetooth 2.0 for my current work.
It's pretty easy to get rotation values and stream them from Android, but I don't think you will need to write anything yourself, because you can just use this:
https://play.google.com/store/apps/details?id=de.lorenz_fenster.sensorstreamgps&hl=en
So the question is how do you receive the data this is sending on Unity's side. The answer is the UdpClient class.
If you need more reliability because every second student in your uni library is torrenting Mr. Robot and you're getting huge lag, then I can tell you how to implement the same thing over Bluetooth. That is not entirely trivial, as .NET 2.0 (which Unity uses) doesn't support Bluetooth libraries, but there are solutions...
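To show the receiving side of that UDP approach, here is a standalone sketch in Python rather than Unity's UdpClient, so it can be tried without Unity. The comma-separated packet layout assumed below is illustrative; check the exact format the streaming app actually sends.

```python
# Receive and parse UDP sensor packets. The "v1, v2, v3, ..." CSV layout is
# an assumption for illustration -- verify it against the real stream.
import socket


def parse_packet(data: bytes) -> list:
    """Parse a comma-separated packet of numbers into a list of floats."""
    return [float(field) for field in data.decode().split(",")]


def listen(port: int = 5555, count: int = 10) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))  # same port the phone app streams to
    for _ in range(count):
        data, addr = sock.recvfrom(1024)
        print(f"from {addr}: {parse_packet(data)}")
    sock.close()
```

The Unity version is the same idea with `UdpClient.Receive` in a background thread, applying the parsed values to the camera's rotation on the main thread.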

Enabling Kinect In a Browser using NaCl

While working on a project with the Kinect, I had the idea of integrating it into a web browser directly from the device. I was wondering if someone has done this before, or if there exists some information that can shed some light on it.
In More Detail:
I've been dissecting the Kinect Fusion application that is provided with the Kinect, and I was wondering what it would take to have a browser do direct-to-device 3D scanning. I've discovered NaCl, which claims that it can run native code, but I don't know how well it would run Microsoft native code (from the Kinect SDK version 2, which is what I'm using). Also, looking at NaCl with no prior experience with it, I currently cannot imagine what steps to take to actually activate the Kinect and have it start feeding the rendered image to the browser.
I know there exist some libraries that allow the Kinect to work on other operating systems, and I was wondering whether those libraries would let me produce a general bitmap to send to NaCl's pp::Graphics2D (for the image display). I would then need to figure out how to actually present that in the browser itself, then have it run the native code in the background to create the 3D image and save it to the local computer.
I figured "let me tap the power of the stack." I'm afraid of an overflow, but you can't make omelettes without breaking a few eggs. Any information would be appreciated! If more information is needed, ask and I shall try my best to answer.
This is unlikely to work, as Native Client doesn't allow you to access OS-specific libraries.
Here's a library which uses NPAPI to allow a web page to communicate with the native kinect library: https://github.com/doug/depthjs. NPAPI will be deprecated soon, so this is not a long-term solution.
It looks like there is an open-source library for communicating with the Kinect: https://github.com/OpenKinect/libfreenect. It would be a decent amount of work, but it looks like it should be possible to reverse-engineer the protocol from this library and perform the communication in JavaScript via the chrome.usb APIs.
Try EuphoriaNI. The library and some samples are available at http://kinectoncloud.com/. Currently, only the version for AS3 is posted on the site, though. The version for the Web, of course, requires you to install a service on your computer (it's either that or a browser plug-in... and nobody likes those :)