While working on a project with the Kinect, I had the idea of streaming from the device directly into a web browser. I was wondering if someone has done this before, or if there is any information that can shed some light on it.
In More Detail:
I've been dissecting the Kinect Fusion application that ships with the Kinect, and I was wondering what it would take to have a browser do device-to-browser 3D scanning. I've discovered NaCl (Native Client), which claims it can run native code, but I don't know how well it would handle Microsoft-specific code (I'm using the Kinect SDK version 2). Also, having no prior experience with NaCl, I currently cannot imagine what steps to take to actually activate the Kinect and have it feed the rendered image to the browser.
I know there are libraries that let the Kinect work on other operating systems, and I was wondering whether those libraries would give me a general bitmap I could hand to NaCl's pp::Graphics2D for display. I would then need to figure out how to present that in the browser itself, run the native code in the background to create the 3D image, and save it to the local computer.
I figured "let me tap the power of the stack." I'm afraid of an overflow, but you can't break eggs without making a few omelettes. Any information would be appreciated! If more information is needed, ask and I shall try my best to answer.
This is unlikely to work, as Native Client doesn't allow you to access OS-specific libraries such as the Kinect SDK.
Here's a library which uses NPAPI to allow a web page to communicate with the native kinect library: https://github.com/doug/depthjs. NPAPI will be deprecated soon, so this is not a long-term solution.
It looks like there is an open-source library for communicating with the kinect: https://github.com/OpenKinect/libfreenect. It would be a decent amount of work, but it looks like it should be possible to reverse-engineer the protocol from this library and perform the communication in JavaScript, via the chrome.usb apis.
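Before attempting the chrome.usb port, it helps to confirm you can see the device's descriptors at all. Here is a minimal Python sketch using pyusb for that first step; the VID/PID values are commonly reported ones for the Kinect v2 and should be treated as assumptions to verify against `lsusb` on your machine.

```python
# Sketch: find a Kinect-class device and print its USB descriptor info.
# Requires pyusb; the VID/PID below are commonly reported values for the
# Kinect v2 sensor -- verify them on your own hardware.

KINECT_VID = 0x045E  # Microsoft
KINECT_PID = 0x02C4  # reported Kinect v2 PID; treat as an assumption

def usb_id(vid: int, pid: int) -> str:
    """Format a vendor/product pair the way lsusb prints it, e.g. '045e:02c4'."""
    return f"{vid:04x}:{pid:04x}"

if __name__ == "__main__":
    import usb.core  # pyusb; not stdlib, hence the guarded import

    dev = usb.core.find(idVendor=KINECT_VID, idProduct=KINECT_PID)
    if dev is None:
        print(f"No device {usb_id(KINECT_VID, KINECT_PID)} found")
    else:
        print(f"Found {usb_id(dev.idVendor, dev.idProduct)}, "
              f"bcdDevice={dev.bcdDevice:#06x}")
```

The same descriptor reads would then be ported to chrome.usb calls in JavaScript; the protocol itself (as implemented in libfreenect) is the hard part.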
Try EuphoriaNI. The library and some samples are available at http://kinectoncloud.com/. Currently, only the version for AS3 is posted on the site though. The version for the Web, of course, requires you to install a service on your computer (it's either that or a browser plug-in... and nobody likes those :)
Related
In the docs regarding custom accessories, there is a link that claims to be the firmware source code, but it only points back to the top page for Android Peripherals and Accessories (no source code). All the pages under "Custom Accessories" give vague instructions on how to connect, but no API, libraries, or examples. For example, under the "Determine accessory mode support" section, it claims:
During the initial connection, the accessory should check the version, vendor ID, and product ID of the connected device's USB device descriptor.
How do I initialize a connection and what methods or what libraries would I call to get the version and other info?
No amount of googling has enabled me to find the source code, libraries or examples to anything related to this "ADK" other than a few outdated Arduino pages that also point to bad links. The closest SO question I've found is here and answers also contain broken or piped links.
Is this project dead or something? What is the standard way of communicating with IO via Android these days?
Just following up here as I found what I was after, though not terribly pleased with the result.
The demo code linked in the docs points to an "adk", which appears to be a demo of the Android Open Accessory protocol developed for the Arduino ADK board, which was intended to interact with Android. The source code can be found here:
https://android.googlesource.com/device/google/accessory/
though it is badly out of date; you'll have a hard time getting it to compile with a modern Gradle build.
There are a couple of more active communities working with USB and Android:
This one is great, but only for host mode (not accessory mode):
https://github.com/mik3y/usb-serial-for-android
There is another, slightly less outdated example of how to implement AOA between two Android phones, which I refactored and got working with modern Gradle build tools:
https://github.com/topherbuckley/USB-accessory-sample
After seeing how long the project has been abandoned, I instead focused my efforts on using Android in host mode only, while implementing USB Power Delivery on the external hardware. That way I can use the Android phone in host mode and swap the power role via USB-PD after initializing the connection, avoiding AOA entirely while getting the same end result with modern software/hardware/firmware.
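For reference, the descriptor check the docs describe boils down to a few documented constants plus one vendor control transfer. A minimal sketch in Python with pyusb (illustrative, not production code; a phone not yet in accessory mode will show its manufacturer's VID rather than Google's):

```python
# Sketch of the AOA "determine accessory mode support" step, using constants
# from the Android Open Accessory protocol documentation.

GOOGLE_VID = 0x18D1
AOA_PIDS = range(0x2D00, 0x2D06)   # accessory-mode product IDs
ACCESSORY_GET_PROTOCOL = 51        # vendor control request: returns protocol version

def is_accessory_mode(vid: int, pid: int) -> bool:
    """True if the descriptor says the device is already in accessory mode."""
    return vid == GOOGLE_VID and pid in AOA_PIDS

if __name__ == "__main__":
    import usb.core  # pyusb; guarded because it needs real hardware

    dev = usb.core.find(idVendor=GOOGLE_VID)
    if dev is not None:
        # Device-to-host vendor request; a 2-byte little-endian version comes back.
        data = dev.ctrl_transfer(0xC0, ACCESSORY_GET_PROTOCOL, 0, 0, 2)
        version = data[0] | (data[1] << 8)
        print("AOA protocol version:", version)
        print("in accessory mode:", is_accessory_mode(dev.idVendor, dev.idProduct))
```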
In a client-to-client situation I would use Chrome's navigator APIs to get a MediaStream and add its tracks to my RTCPeerConnection.
But I would prefer not to open a Chrome window on my Raspberry Pi to do this.
Are there any easy-to-use bindings in Python or Node.js to get a MediaStream and send it to a WebRTC peer?
If you only want to use WebRTC without your own customization, you can try uv4l or rpi-webrtc-streamer. They provide a WebRTC solution with built-in signalling over WebSockets.
If you want to use WebRTC with your own signalling, you can proxy over the built-in signalling. Other solutions such as aiortc, Node PeerConnection, or node-webrtc may not be as easy to use or configure.
My approach would rather be ChromeDriver, which can be run headless (it is mainly used for automated UI testing). You can start the browser from the command line and give it arguments such as --headless and more. There are some nice Python libraries to do so. But maybe that's the wrong approach here.
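If all you need is to host the WebRTC page, you don't even need Selenium; launching the browser via subprocess is enough. A sketch, assuming Chromium is installed as `chromium-browser` (the binary name and useful flags vary by build, so adjust for your Pi):

```python
# Sketch: launch Chromium headless from Python via subprocess.

import subprocess

def build_chromium_cmd(url: str, binary: str = "chromium-browser") -> list[str]:
    """Assemble a headless Chromium invocation for the given URL."""
    return [
        binary,
        "--headless",
        "--disable-gpu",                   # commonly needed with --headless
        "--use-fake-ui-for-media-stream",  # auto-accept the getUserMedia prompt
        url,
    ]

if __name__ == "__main__":
    subprocess.run(build_chromium_cmd("https://example.com/webrtc-sender"))
```

The URL here is a placeholder for whatever page holds your sender-side WebRTC code.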
There is a similar question already asked, where different approaches were suggested.
If you want a Python implementation of WebRTC, give aiortc a try. It features support for audio, video and data channels and builds upon Python's asyncio framework.
But maybe check out the answers there, hope it helps!
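To give a flavour of aiortc, here is a minimal sketch of a sender. It assumes aiortc is installed and a camera at /dev/video0, and the "signalling" is just printing the offer, which you would replace with WebSockets or HTTP in practice:

```python
# Minimal sketch of sending the Pi camera over WebRTC with aiortc.

import asyncio
import json

def describe(sdp: str, type_: str) -> str:
    """Serialize a session description as the JSON blob most signalling
    channels pass around."""
    return json.dumps({"sdp": sdp, "type": type_})

async def run() -> None:
    from aiortc import RTCPeerConnection  # not stdlib, hence the local import
    from aiortc.contrib.media import MediaPlayer

    pc = RTCPeerConnection()
    player = MediaPlayer("/dev/video0", format="v4l2")  # camera as media source
    pc.addTrack(player.video)

    offer = await pc.createOffer()
    await pc.setLocalDescription(offer)
    # Hand this to the remote peer through your signalling channel of choice.
    print(describe(pc.localDescription.sdp, pc.localDescription.type))

if __name__ == "__main__":
    asyncio.run(run())
```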
Sounds like Alohacam will do what you're looking for. Uses WebRTC to provide a real-time stream from your Raspberry Pi camera to any web browser (Chrome, Firefox, Safari, Opera, Edge, iOS, and Android). Also includes TURN support (no need to bring your own TURN relay). If you don't mind a closed-source solution that "just works", it may help. (full disclosure: I'm one of the authors -- would love to hear how it works out for you).
I am trying to get the Movesense to work with a Unity BLE asset, as originally I thought the Movesense would be simple enough. I have managed to connect to it and subscribed to the service starting with "61353090-" and the characteristic starting with "34802252-". I think I even got some notifications. The problem now is that I am not receiving, or at least not able to decode, any data from there.
I also ended up reading the example codes and found out the complex system the Movesense uses and the "whiteboard", which I am unfamiliar with. I cannot find anything sensible by googling, as whiteboard is a whiteboard :)
Now my questions are:
What should I do to progress? Do I need to write something to the characteristic starting with "17816557"?
What is the "whiteboard" actually?
Would it actually be smarter to just make a Unity plugin for the Movesense?
Thank you
You are quite right that the answer is in the "Whiteboard" component. Whiteboard is the embedded REST framework (note: it is not over HTTP!) that Movesense uses to implement REST services within the device as well as between devices (e.g. over UART or BLE). As you can imagine it is not a simple component, so decoding the traffic without Amersports'/Suunto's help is quite a big challenge. The actual BLE layer is simple: one characteristic for each direction (write & notify); the complexity lies in what goes inside that data pipe.
However, if you are trying to use Unity to make a mobile app, the situation is not so bad. There has been a prototype of Movesense mobile library integration for Unity (Android) that uses the existing Movesense mobile library. If you ask the Movesense team (info (at) movesense.com) they might be able to help you further. For Windows (Unity or plain) there is nothing done (at least not yet), mainly because until Windows 10 there was no official BLE API for Windows.
Full disclosure: I work for the Movesense team
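The write-and-notify pipe described above can at least be observed without any Whiteboard knowledge. A sketch with the Python bleak library that subscribes to the notify characteristic and hex-dumps whatever arrives; the UUID and MAC address are placeholders, since the question only gives truncated UUIDs, so fill in the full values from your own service discovery:

```python
# Sketch: subscribe to a notify characteristic with bleak and hex-dump
# incoming data -- about as far as you can get without the Whiteboard
# protocol details.

import asyncio

NOTIFY_CHAR_UUID = "REPLACE-WITH-FULL-NOTIFY-UUID"  # placeholder
DEVICE_ADDRESS = "AA:BB:CC:DD:EE:FF"                # placeholder MAC

def hexdump(data: bytes) -> str:
    """Render raw notification bytes for eyeballing, e.g. '01 02 ff'."""
    return " ".join(f"{b:02x}" for b in data)

async def main() -> None:
    from bleak import BleakClient  # not stdlib, hence the local import

    async with BleakClient(DEVICE_ADDRESS) as client:
        def on_notify(_sender, data: bytearray) -> None:
            print(hexdump(bytes(data)))

        await client.start_notify(NOTIFY_CHAR_UUID, on_notify)
        await asyncio.sleep(30)  # collect notifications for a while

if __name__ == "__main__":
    asyncio.run(main())
```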
I'm currently investigating the scope of my project and have come across an issue regarding the platforms on which it can operate. The initial goal is to create a cross-platform game across HTML, Android, and iOS.
Is this type of application possible? It is important to note that it would require real-time (low-latency and consistent) interaction between the three platforms.
If so, what are some tools I should take advantage of while developing?
We are doing this exact sort of thing using this 3rd-party asset from the Unity Asset Store:
https://www.assetstore.unity3d.com/en/#/content/10872
and a custom Socket.IO (http://socket.io/) server implementation. Works like a champ and is totally agnostic as to whether the client is Unity3D or just a browser.
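To illustrate the server side, here is a minimal relay sketch using the python-socketio library. The event name and payload shape are made up for illustration; any Socket.IO client, Unity asset or browser, speaks to it the same way:

```python
# Sketch of a minimal Socket.IO relay server (python-socketio + eventlet).

import json

def make_state_event(player_id: str, x: float, y: float) -> str:
    """Encode a position update the way the clients might expect it
    (illustrative schema, not from the asset)."""
    return json.dumps({"player": player_id, "x": x, "y": y})

if __name__ == "__main__":
    import eventlet
    import socketio  # python-socketio; not stdlib

    sio = socketio.Server(cors_allowed_origins="*")
    app = socketio.WSGIApp(sio)

    @sio.event
    def move(sid, data):
        # Relay each client's move to everyone else -- the server stays
        # agnostic about whether the sender is Unity3D or a browser.
        sio.emit("move", data, skip_sid=sid)

    eventlet.wsgi.server(eventlet.listen(("", 5000)), app)
```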
I am trying to allow people (from a URL) to connect to a calendar/contacts from their iPhone, BlackBerry or Android phone. What is the best way to do this?
I've had a bit of a read and it seems that CalDAV and CardDAV are the best way to integrate calendars/contacts, but how exactly can I do this? The internet seems to lack a standard way to integrate this across a number of devices.
Which mobile devices support them? And is it possible to just provide a URL and have the calendar/contacts automatically sync!?
All of this assumes you have some sort of Groupware server setup somewhere which acts as the repository for this information.
For open source you might want to look at a product called SOGo. Apple also has a CalDAV/CardDAV server written in Python; they expect you to buy a Mac server, but you can download the code and run it on a PC or Linux box. There's also a heap of paid-for groupware.
You might want to check out the "open source" client software written by the same people who develop SOGo, called Funambol. This claims to be cross-mobile (all the ones you've mentioned, anyway).
The idea behind all the *DAV protocols is that yes, everything is done by URI (this was actually specced by Tim Berners-Lee in his draft for the web).
I've just been through this very same process and found only emerging standards, of which *DAV are the de facto ones IMO. HTC uses Microsoft ActiveSync on my HD2 to sync my Gmail. Go figure!
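Since everything in *DAV is done by URI, the first handshake is easy to show with nothing but the standard library: a PROPFIND asking the server for the current user's principal URL, which is the first step of CalDAV service discovery (RFC 6764). The server URL and credentials below are placeholders:

```python
# Sketch: CalDAV discovery step 1 -- PROPFIND for DAV:current-user-principal.

import base64
import urllib.request

def propfind_body() -> bytes:
    """The DAV:current-user-principal PROPFIND body from the CalDAV specs."""
    return (
        b'<?xml version="1.0" encoding="utf-8"?>'
        b'<d:propfind xmlns:d="DAV:">'
        b"<d:prop><d:current-user-principal/></d:prop>"
        b"</d:propfind>"
    )

def discover(url: str, user: str, password: str) -> str:
    auth = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(url, data=propfind_body(), method="PROPFIND")
    req.add_header("Depth", "0")
    req.add_header("Content-Type", "application/xml; charset=utf-8")
    req.add_header("Authorization", f"Basic {auth}")
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    # Placeholder server; real clients start at /.well-known/caldav.
    print(discover("https://caldav.example.com/.well-known/caldav",
                   "alice", "secret"))
```

Phones that support CalDAV/CardDAV natively perform this same discovery when you hand them the account URL.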
Bedework is a CalDAV/CardDAV server that allows you to hook up your iPhone/iCal calendar and events.
I have used it, and it gives you a URL to sign in with in your phone's calendar. Bedework is a server you can install on your own machine (it comes with documentation; that is a good place to start).
Android does not natively support Bedework. In order for Android to support CalDAV you have to install an application that supports CalDAV, but I do not know whether those work with Bedework or not.
In the case of Android, you could try using the CalendarProvider and the ContactsProvider. You can refer to this: http://developer.android.com/guide/topics/providers/calendar-provider.html