I am currently working on a project where I need to access a built-in camera (the software will run on a tablet), stream what the camera is showing, and allow the user to take a picture from the stream. I have a version of what I am trying to accomplish working on my laptop with its built-in camera. The major difference is that the laptop is running Windows XP while the tablet is running Windows 7.
Running the software on the tablet I get an exception (with some research it appears that the exception is caused by no WIA device being found). Is it possible that the built-in camera is not WIA compatible? The device does show up in Device Manager as a USB Camera Device, but unlike the camera on my laptop I can't access it directly. I have to use third-party software installed by the tablet maker to get the camera to work.
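For reference, the check that throws boils down to enumerating WIA devices, something like this (a minimal sketch using the WIA Automation COM library, not my exact code):

```csharp
// Lists every WIA device Windows knows about; on the tablet this
// collection appears to come back empty.
using System;
using WIA; // COM reference: Microsoft Windows Image Acquisition Library v2.0

class WiaDeviceCheck
{
    static void Main()
    {
        var manager = new DeviceManager();
        Console.WriteLine("WIA devices found: " + manager.DeviceInfos.Count);

        foreach (DeviceInfo info in manager.DeviceInfos)
        {
            // Type distinguishes scanners from camera/video devices.
            Console.WriteLine(info.DeviceID + " (" + info.Type + ")");
        }
    }
}
```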
Has anyone experienced similar problems? I have to believe that if the tablet maker can do what I need, I should be able to do something similar.
There is also the Windows Portable Devices (WPD) API that can access cameras, but that appears to be written in C++, without a .NET wrapper. Does anyone know of a simple tutorial on how I could get .NET to play nice with it? EDIT: Just tried WPD; it didn't list any devices either. I am beginning to think this camera doesn't exist.
Any knowledge or pointers to resources would be appreciated. (So far Google has turned up the same few articles, no matter which way I approach the problem.)
Turns out my camera was not WIA compatible. I was able to get the tablet to do what I needed using DirectShow (actually DirectShow.NET); a minimal sketch follows the links below.
Good links if others are trying to do something similar and hitting the same problems:
http://msdn.microsoft.com/en-us/library/dd375454%28VS.85%29.aspx
http://directshownet.sourceforge.net/faq.html
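For anyone taking the same route, here is roughly the shape of the DirectShow.NET code: a minimal sketch (assuming the DirectShowLib assembly) that enumerates capture devices and runs a preview graph. Window hosting and still capture via ISampleGrabber are left out.

```csharp
// Enumerates DirectShow video capture devices and previews the first one.
// DirectShow sees the camera through FilterCategory.VideoInputDevice even
// when no WIA device is registered.
using System;
using DirectShowLib;

class CameraPreview
{
    static void Main()
    {
        DsDevice[] cams = DsDevice.GetDevicesOfCat(FilterCategory.VideoInputDevice);
        if (cams.Length == 0)
        {
            Console.WriteLine("No DirectShow capture devices found.");
            return;
        }

        var graph = (IFilterGraph2)new FilterGraph();
        var builder = (ICaptureGraphBuilder2)new CaptureGraphBuilder2();
        builder.SetFiltergraph(graph);

        // Add the first camera to the graph and render its preview pin.
        IBaseFilter camFilter;
        graph.AddSourceFilterForMoniker(cams[0].Mon, null, cams[0].Name, out camFilter);
        builder.RenderStream(PinCategory.Preview, MediaType.Video, camFilter, null, null);

        var control = (IMediaControl)graph;
        control.Run();
        Console.WriteLine("Previewing " + cams[0].Name + "; press Enter to stop.");
        Console.ReadLine();
        control.Stop();
    }
}
```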
Related
I need to share my desktop on HoloLens 2 in real time.
All the methods I have found deal with synchronizing the HoloLens 2 screen to the PC side, instead of the PC side to the HoloLens 2. I want to try the Unity plug-in FMETP STREAM, but it doesn't seem to meet my requirements. What should I do to synchronize the PC screen to the HoloLens 2 in real time?
Please excuse my poor English. I would be grateful if you could answer my question.
FMETP STREAM should satisfy this use case. I found this post where the developer replied that the package supports streaming content from a desktop Unity app to a VR headset, and I believe it should also work for the HoloLens.
Besides, the MixedReality-WebRTC release from Microsoft can help you enable real-time audio/video/data communication with a remote peer, which also meets your requirements. You can get started with this doc: Unity library overview.
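If you go the MixedReality-WebRTC route, here is a minimal sketch of the sending peer (assuming the 2.0 C# API; the STUN server and track name are placeholders, and the offer/answer signaling exchange is app-specific and omitted):

```csharp
// Creates a peer connection and attaches a local video track to send.
using System;
using System.Threading.Tasks;
using Microsoft.MixedReality.WebRTC;

class StreamingPeer
{
    static async Task Main()
    {
        var config = new PeerConnectionConfiguration
        {
            IceServers = { new IceServer { Urls = { "stun:stun.l.google.com:19302" } } }
        };

        using (var pc = new PeerConnection())
        {
            await pc.InitializeAsync(config);

            // Captures the default camera; for mirroring the desktop you would
            // instead feed captured screen frames via ExternalVideoTrackSource.
            using (var source = await DeviceVideoTrackSource.CreateAsync())
            {
                var track = LocalVideoTrack.CreateFromSource(source,
                    new LocalVideoTrackInitConfig { trackName = "desktop_video" });

                var transceiver = pc.AddTransceiver(MediaKind.Video);
                transceiver.LocalVideoTrack = track;

                // Offer/answer exchange over your own signaling channel goes here.
                Console.WriteLine("Peer connection ready.");
                Console.ReadLine();
            }
        }
    }
}
```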
You can modify the Holographic Remoting Player; it streams a desktop app to the HoloLens in real time.
https://learn.microsoft.com/en-us/windows/mixed-reality/develop/platform-capabilities-and-apis/holographic-remoting-player
How can I check the firmware version of a PrimeSense Xtion camera?
I have a couple of these cameras which I suspect have different firmware versions. One works with NiViewer, the other doesn't, although both are detected as connected to a USB port (I repeated the test on the same USB port). I don't want to flash firmware upgrades blindly without knowing the current versions (I recently screwed up another camera by just trying different firmwares). Ideally, I'm looking for some app I could run from Ubuntu that can show the firmware version of the camera.
Looking at APIs, I found a getFirmwareRevision() call in the Structure SDK, but I think that's for the Occipital camera only. I've checked the OpenNI2 API and the most similar-sounding function I've found is GetFirmwareParams(), but I can't see any example that refers to the firmware version, so I suspect that's for a different use.
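For what it's worth, OniProperties.h in OpenNI2 also defines ONI_DEVICE_PROPERTY_FIRMWARE_VERSION (property id 0, payload "by implementation", i.e. driver-defined). Below is an untested sketch querying it through the C API via P/Invoke; whether the PrimeSense driver answers this property, and in what format, is an assumption on my part.

```csharp
// Queries ONI_DEVICE_PROPERTY_FIRMWARE_VERSION from the first OpenNI2 device.
// Assumes the payload comes back as an ASCII string (driver-defined).
using System;
using System.Runtime.InteropServices;
using System.Text;

class XtionFirmware
{
    const int ONI_DEVICE_PROPERTY_FIRMWARE_VERSION = 0;
    const int ONI_API_VERSION = 2 * 1000 + 2; // major*1000 + minor, OpenNI 2.2

    [DllImport("OpenNI2", CallingConvention = CallingConvention.Cdecl)]
    static extern int oniInitialize(int apiVersion);
    [DllImport("OpenNI2", CallingConvention = CallingConvention.Cdecl)]
    static extern void oniShutdown();
    [DllImport("OpenNI2", CallingConvention = CallingConvention.Cdecl)]
    static extern int oniDeviceOpen(string uri, out IntPtr device);
    [DllImport("OpenNI2", CallingConvention = CallingConvention.Cdecl)]
    static extern int oniDeviceClose(IntPtr device);
    [DllImport("OpenNI2", CallingConvention = CallingConvention.Cdecl)]
    static extern int oniDeviceGetProperty(IntPtr device, int propertyId,
                                           byte[] data, ref int dataSize);

    static void Main()
    {
        if (oniInitialize(ONI_API_VERSION) != 0) return; // 0 == ONI_STATUS_OK

        IntPtr device;
        if (oniDeviceOpen(null, out device) == 0) // null uri == ANY_DEVICE
        {
            var buffer = new byte[256];
            int size = buffer.Length;
            if (oniDeviceGetProperty(device, ONI_DEVICE_PROPERTY_FIRMWARE_VERSION,
                                     buffer, ref size) == 0)
            {
                Console.WriteLine("Firmware: " +
                    Encoding.ASCII.GetString(buffer, 0, size).TrimEnd('\0'));
            }
            oniDeviceClose(device);
        }
        oniShutdown();
    }
}
```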
The idea is to create a HoloLens application that displays a hologram which can then be manipulated through a UWP application running on the desktop. The desktop application would contain various UI elements that manipulate the hologram (e.g., a rotation button to turn it 45 degrees) and would, of course, show the same object as the HoloLens. Naturally, I arrived at the 240 academy tutorial, but that seems a bit outdated compared to the current version of the HoloToolkit. It also doesn't really fit my scenario, since I am not sharing between two HoloLens devices, but between a desktop and a HoloLens. I figured that shouldn't really matter since you are still targeting UWP, but I wasn't sure.
What I have tried so far is editing the example scene "SharingSpawnTest" and targeting it for the PC to see what would happen, but I don't think this is the way to do it, since the project settings are set for Mixed Reality and not for a regular desktop UWP application.
My question is basically whether this is even possible and, if so, how I can achieve it. Do I have to create two separate projects, one for the desktop and one specifically for the HoloLens, and communicate that way?
Add a NetworkManagerHUD component to your NetworkManager object. Once you add it, type in the HoloLens IP address and connect to it. Currently, this only works with the HoloLens as the host and the desktop as the client.
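A minimal sketch of the desktop-side script (legacy UNet HLAPI, UnityEngine.Networking; the IP address is a placeholder for your HoloLens's address):

```csharp
// Attach next to the NetworkManager on the desktop build; the HoloLens
// build calls StartHost() on its NetworkManager instead.
using UnityEngine;
using UnityEngine.Networking;

public class DesktopClientStarter : MonoBehaviour
{
    // Set to the HoloLens IP shown in its network settings.
    public string hololensAddress = "192.168.1.42";

    void Start()
    {
        NetworkManager manager = GetComponent<NetworkManager>();
        manager.networkAddress = hololensAddress;
        manager.StartClient();
    }
}
```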
While working on a project with the Kinect, I had the idea of integrating it into a web browser, talking directly to the device. I was wondering if someone has done this before, or if there exists some information that can shed some light on it.
In More Detail:
I've been dissecting the Kinect Fusion application that is provided with the Kinect, and I was wondering what it would take to have a browser do direct-to-device 3D scanning. I've discovered NaCl, which claims that it can run native code, but I don't know how well it would run Microsoft native code (from the Kinect SDK version 2, which is what I'm using). Also, just looking at NaCl with no prior experience with it, I currently cannot imagine what steps to take to actually activate the Kinect and have it start feeding the image render to the browser.
I know there exist some libraries that allow the Kinect to work on other operating systems, and I was wondering whether those libraries would let me get a general bitmap to feed to NaCl's pp::Graphics2D (for the image display). I would then need to figure out how to actually present that in the browser itself, then have it run the native code in the background to create the 3D image and save it to the local computer.
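For context, whatever bridge ends up talking to the browser, something like this has to run natively first. A minimal sketch with the Kinect SDK v2 in C#: open the sensor and copy each color frame into a plain BGRA byte array (the hand-off to the browser side is only a comment here).

```csharp
// Opens the default Kinect v2 sensor and reads BGRA color frames.
using System;
using Microsoft.Kinect;

class KinectFeed
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.GetDefault();
        sensor.Open();

        FrameDescription desc =
            sensor.ColorFrameSource.CreateFrameDescription(ColorImageFormat.Bgra);
        byte[] pixels = new byte[desc.Width * desc.Height * 4];

        ColorFrameReader reader = sensor.ColorFrameSource.OpenReader();
        reader.FrameArrived += (s, e) =>
        {
            using (ColorFrame frame = e.FrameReference.AcquireFrame())
            {
                if (frame == null) return;
                frame.CopyConvertedFrameDataToArray(pixels, ColorImageFormat.Bgra);
                // "pixels" is now a plain bitmap, ready to hand to whatever
                // browser bridge you settle on (e.g. a local WebSocket).
            }
        };

        Console.WriteLine("Streaming; press Enter to quit.");
        Console.ReadLine();
        reader.Dispose();
        sensor.Close();
    }
}
```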
I figured "let me tap the power of the stack." I'm afraid of an overflow, but you can't break eggs without making a few omelettes. Any information would be appreciated! If more information is needed, ask and I shall try my best to answer.
This is unlikely to work, as Native Client doesn't allow you to access OS-specific libraries.
Here's a library which uses NPAPI to allow a web page to communicate with the native kinect library: https://github.com/doug/depthjs. NPAPI will be deprecated soon, so this is not a long-term solution.
It looks like there is an open-source library for communicating with the Kinect: https://github.com/OpenKinect/libfreenect. It would be a decent amount of work, but it looks like it should be possible to reverse-engineer the protocol from this library and perform the communication in JavaScript, via the chrome.usb APIs.
Try EuphoriaNI. The library and some samples are available at http://kinectoncloud.com/. Currently, only the version for AS3 is posted on the site, though. The version for the Web, of course, requires you to install a service on your computer (it's either that or a browser plug-in... and nobody likes those :)
I've received a project from someone that includes an Arduino (Uno) board with some sensors and lights, a USB cable, and a documented protocol for communicating with the board through a COM port. It works fine with some existing code, but I need to port the whole project to a Windows RT environment using an ARM processor, including a Metro interface for the application. And it's going to be completely rewritten...
First of all, my Windows RT device does have a USB port, so it can connect to the board. The challenge is communicating with the board to read out the sensors and manipulate the lights, and I am having trouble finding useful libraries, tutorials, or other information about how to make these work together.
This project works fine with other Windows versions, though. I just need something specific for Windows RT/ARM/Metro.
Currently it is not possible to do this on Windows RT, and here is an explanation why. As a workaround I am using a standard full-screen WPF application in combination with the Surface SDK for touch-enabled UI components. The obvious disadvantage here is that you cannot publish the app to the store.
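For the serial side of that workaround, a minimal sketch with System.IO.Ports (the port name, baud rate, and "READ" command are placeholders for the board's documented protocol):

```csharp
// Talks to the Uno over its virtual COM port from the desktop WPF app.
using System;
using System.IO.Ports;

class BoardLink
{
    static void Main()
    {
        using (var port = new SerialPort("COM3", 9600, Parity.None, 8, StopBits.One))
        {
            port.NewLine = "\n";
            port.ReadTimeout = 2000;
            port.Open();

            port.WriteLine("READ");         // hypothetical sensor-query command
            string reply = port.ReadLine(); // board's documented response
            Console.WriteLine("Sensor data: " + reply);
        }
    }
}
```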
I think you should actually try it on a real machine instead of the RT. The Surface RT is basically for documents and the internet.
You'd be better off trying all of this with a Toshiba 2032, a PDA from about 2003.