How can I connect DroidCam to Processing?

I'm studying Processing in my class.
In class, I used my laptop's webcam to capture video in Processing, which was just:
void setup() {
  video = new Capture(this, width, height);
But now I want to use my smartphone as the camera, so I installed DroidCam.
I have DroidCam on both my PC and my smartphone, and I can see that it's connected and working, but I don't know how to switch from the laptop webcam to DroidCam.
I think I have to change this line to something else,
video = new Capture(this,width,height);
but I don't know how to check the names of the cameras when several cameras are connected to my PC.
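For reference, here is a minimal sketch of how this is usually handled with Processing's standard video library: Capture.list() returns the name of every camera the library can see, and passing one of those names to the Capture constructor selects that device. The exact label of the DroidCam entry depends on the driver, so the index used below is only a placeholder.

import processing.video.*;

Capture video;

void setup() {
  size(640, 480);
  // Print every camera name the video library can see,
  // including virtual devices such as the DroidCam client.
  String[] cameras = Capture.list();
  printArray(cameras);
  // Replace the index with the DroidCam entry from the printed list;
  // cameras[0] is only a placeholder.
  video = new Capture(this, width, height, cameras[0]);
  video.start();
}

void draw() {
  if (video.available()) {
    video.read();
  }
  image(video, 0, 0);
}

If DroidCam shows up in that list (typically under a name containing "DroidCam"), passing its entry to the Capture constructor should make Processing read from the phone instead of the built-in webcam.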

Related

Hololens 2 audio stream from Desktop

I'm currently developing an app for the HoloLens 2 that needs to stream audio from a desktop PC.
The idea is to send control information (position, orientation, etc.) to a Cycling '74 Max/MSP application running on a Windows 10 computer, which processes the audio for 3D playback. I now need to somehow stream the resulting sound to the Unity app running on the HoloLens. Both devices are on the same network.
At the moment I've achieved something using MRTK WebRTC for Unity in combination with a virtual cable as input. My issue is that this seems to be optimized for microphone use, as it applies options like noise reduction and reduced bandwidth. I can't find a way to set the WebRTC options to stream what I need (music) at better quality.
Does anyone know how to change that in MRTK WebRTC, or have a better solution for streaming the audio to the HoloLens?
The WebRTC project for Mixed Reality is deprecated, and it is designed for real-time communication. If your requirement is media consumption, you need a different workaround.
For dedicated media streaming, you can set up a DLNA server on your PC for media access.
You may also set up Samba or NFS on your PC if you need to access files in other formats.

Stream video from Raspberry Pi 4 while another program uses the pi camera as well

I have a raspberry pi 4 with a camera module on it and a pan-tilt hat.
I've made a project which, when started, uses the feed from the RPi camera, detects a face, and centers on it. If the person moves, the camera tracks them.
When I run the .py file through the terminal, it works.
Now I want to use it with my PC. Therefore, I need to run my project in the background and simultaneously stream the feed to my PC somehow.
From the methods I found online, I saw that it's possible to use Flask and get a URL to use as an IP camera.
My question is: is it possible to stream the camera feed while my project runs and tracks the face?
Thank you.

Streaming video of wifi access point camera to a remote computer

After spending weeks of searching the forums and trying different approaches, I didn't find a solution for my quite specific problem. I'm thankful for every hint you might provide.
I purchased a Kodak Pixpro 360 camera which offers a view-finder function over wifi (i.e. a live video stream). Now I'm trying to use this camera as a surveillance cam that can be accessed from anywhere (not just the local network). An ODROID will be connected to the camera via wifi and use a second wifi dongle to connect to the LAN. The incoming video stream should be forwarded in real-time to the client (there will be only one at a time). The received 360 degree content is then viewed in an application written in Unity3d.
So far I have managed to grab the cam's MJPEG stream and serve it as JPEGs using a NodeJS server. The JPEGs are then rendered via the WWW.LoadImageIntoTexture method. As you might imagine, GET requests for each frame are horribly slow and result in about 0.5 frames per second.
A colleague of mine pointed me to WebRTC and the Janus gateway as a more elegant solution. This simple peer chat uses Socket.IO and works just fine with my webcam, but I cannot figure out how to change the code to use the video stream coming from the PIXPRO instead of my local device. Rendering the content should be fun, too, as you need a browser for WebRTC, and I am not sure how much of that can be embedded in Unity3d.
Unfortunately, the camera cannot connect to a LAN by itself but rather acts as a wifi access point. This makes all the solutions I found for IP cams obsolete.
I found a similar project that managed to forward their video stream via Janus and WebRTC but I am not sure if and how I can apply their methods.
https://babyis60.wordpress.com/2015/02/04/the-jumping-janus/
UPDATE
Ok guys, I managed to narrow down my problem by myself. The PIXPRO has no RTP support, so I am stuck with the JPEG stream. Now I am trying to speed up the paparazzo.js implementation, which reads the camera's TCP responses and extracts JPEGs by searching for the boundary between the frames. These JPEGs are then served via an HTTP response. I would like to speed up this process by using Socket.IO to push the frames to the client and render them there.
The strange thing is that the data seems to be just fine on the server side (I get a valid JPEG image when I export it via fs.writeFileSync('testimage.jpg', buffer, 'binary');), but I can't get it to work on the client side after I send the image via io.sockets.emit("stream",{image: image});. When I try to display this image in a browser via $("#video").attr("src", "data:image/jpeg;," + data.image);, the image is not parsed correctly. The inspector shows that the video source is updated, but there is only a binary string.
I finally managed to get it done. The binary had to be loaded into a Buffer and sent as a base64 string.
paparazzo.on("update", (function(_this) {
  return function(image) {
    updatedImage = image;
    // wrap the raw binary frame in a Buffer so it can be re-encoded
    var vals = new Buffer(image, 'binary');
    //fs.writeFileSync('testimage.jpg', vals, 'binary');
    // push the frame to all connected clients as a base64 string
    io.sockets.emit("image", vals.toString('base64'));
    return console.log("Downloaded " + image.length + " bytes");
  };
})(this));
On client side I had to use an image tag because canvas solutions didn't seem to work for me.
var image = document.getElementById('image');
socket.on("image", function(info) {
  // "info" is the base64-encoded JPEG pushed by the server
  image.src = 'data:image/jpeg;base64,' + info;
});
The browser output was just a test before the actual Unity3D implementation. I tried many Websocket libraries for Unity3D, but the only one that worked on an Android device was the UnitySocketIO-WebsocketSharp project.
Now I could simply convert my base64 image to a byte array and load it into a Texture2D.
byte[] bytes;                         // latest JPEG frame received over Socket.IO
Texture2D tex = new Texture2D(2, 2);  // texture the frame is decoded into
socket.On("image", (data) => {
  bytes = Convert.FromBase64String(data.Json.args.GetValue(0).ToString());
});
void Update () {
  if (bytes != null) {
    tex.LoadImage(bytes);             // decode the JPEG; runs on the main thread
  }
}
LoadImage seems to block the UI thread, though, which slows down my camera control script, so I will have to look into Unity plugins that can rewrite texture pixels at a lower level. Using the Cardboard SDK for Unity worked as a work-around to get reasonably smooth camera control again.

VLC just shows a single picture of the webcam

I would like to show the live video of a Microsoft Studio webcam with a Raspberry Pi. It should be a reading tool for my grandma.
So I tried vlc v4l2:///dev/video0 and I always get just a single picture. After that the system is frozen; the only way out is to pull the power supply.
I don't know what I'm doing wrong. I also tried a smaller resolution.

How do media browser plugins function?

If I want to use Google Video Chat in my browser, I have to download and install a plugin for it to work. I would like to make a piece of software that creates some interactions with a video displayed in the browser. I assume it might be problematic to do this with one solution for all browsers, so if I need to focus on only one browser, let's talk about Firefox, although I think the Firefox Add-on SDK would not let me do something as complex as video interaction.
But how does the Google Video Chat plugin work in the browser? It's only an example of one of those plugins that lets you do things (media, in this case) with your browser which are normally impossible.
As I understand it, Google Video Chat uses Flash.
I'm looking for something official-looking to back that up now...
Edit: I think this explains it pretty well.
Flash Player exposes certain audio/video functions to the (SWF) application, but it does not give the application access to the raw real-time audio/video data. There are some ActionScript API classes and methods: the Camera class allows you to capture video from your camera, the Microphone class allows you to capture audio from your microphone, the NetConnection/NetStream classes allow you to stream video from Flash Player to a remote server and vice versa, and the Video class allows you to render video either captured by a Camera or received on a NetStream. Given these, to display video in Flash Player it must either be captured by a Camera object or received from a remote server on a NetStream. Luckily, ActionScript allows you to choose which Camera to use for capture.
When the Google plugin is installed, it exposes itself as two Camera devices; actually virtual device drivers. These devices are called 'Google Camera Adaptor 0' and 'Google Camera Adaptor 1', which you can see in the Flash Player settings when you right-click on the video. One of the devices is used to display the local video and the other to display the remote participant's video. The Google plugin also implements the full networking protocol and stack, which I think is based on the GTalk protocol. In particular, it implements XMPP with the (P2P) Jingle extension, and UDP-based media transport for real-time audio/video. The audio path is completely independent of the Flash Player. In the video path, the plugin captures video from the actual camera device installed on your PC and sends it to the Flash Player via one of the virtual camera device drivers. It also encodes the video and sends it to the remote user. In the reverse direction, it receives video (over UDP) from the remote user and gives it to the Flash Player via the second virtual camera device driver. The SWF application running in the browser creates two Video objects and attaches them to two Camera objects, one for each of the two virtual video devices, instead of attaching them to your real camera device. This way, the SWF application can display both the local and the remote video in the Flash application.