I have been searching for a way to access the iPhone camera from MATLAB, and I found it can be done with an app called IP Cam over a local network. However, the IP Cam app from the App Store doesn't work well for my application: I'm trying to build a real-time image-capture program using the iPhone's camera and MATLAB Mobile, with processing afterwards, and that approach keeps MATLAB busy displaying the scene, whereas I want MATLAB, not IP Cam, to run in the foreground.
So far I've downloaded MATLAB Mobile and the MATLAB Connector and connected the iPhone to MATLAB on my laptop. Does anyone know how to access the iPhone's camera from MATLAB Mobile and capture an image so it can be stored in the MATLAB workspace for later processing? Or can anyone suggest a tutorial or any other material that would help me through this? I'd appreciate your answer very much, and thank you in advance.
P.S.: a solution for Android devices would also work for me.
I am not aware of an iPhone-based solution, but here is an option for an Android-based system. I suggest you use Sensor EX. I have used it for some time to acquire accelerometer and gyroscope data along with live images. This tool has bindings available for MATLAB besides other programming environments. Feel free to ask questions if you cannot figure out how this system works.
I'm a bit lost looking through all the various Agora.io modules (and I'm not sure what it means that only some of them have Unity-specific downloads).
I want to make a Unity app where two remote phones exchange data as follows:
Streaming voice in both directions
Streaming video in one direction (recorded from device camera)
Streaming a small amount of continuously-changing custom data in the other direction (specifically, a position + orientation in a virtual world; probably encoded as 7 floats)
The custom data needs to have low latency but does not need reliability (it's fine if some updates get lost; app only cares about the most recent update). Updates basically every frame.
Ideally I want to support both Android and iOS.
I started looking at Agora video (successfully built a test project) and it seems like it will cover the voice and video, but I'm struggling to find a good way to send the custom data (position + orientation). It's probably theoretically possible to encode it as a custom video feed but that sounds complex and inefficient. Is there some out-of-band signalling mechanism I could use to send some extra data alongside/instead of a video?
Agora real-time messaging sounds like it would probably work for this, but I can't seem to find any info about integrating it with Unity (either on Agora's web site or in a general web search). Can I roll this in somehow?
Agora interactive gaming could maybe also be relevant? The overview doesn't seem real clear about how it's different from regular Agora video. I suspect it's overkill but that might be fine if there isn't a large performance cost.
Could anyone point me in the right direction?
I would also consider alternatives to Agora if there's a better plugin for implementing this feature set in Unity.
Agora's Video SDK for Unity supports exporting projects to Android, iOS, MacOS, and Windows (non-UWP).
Regarding your data streaming needs, Agora's RTM SDK is in the process of being ported to Unity. At the moment, the best way to send data with the Agora SDK is CreateDataStream, which leverages Agora's ability to open a data stream that is sent along with the frames. Data stream messages are limited to 1 KB per frame and 30 KB/s, so I would be cautious about sending one every frame if you are running at a frame rate above 30 fps.
I am trying to understand the expected difference in speed and RAM/GPU consumption between LoadRawTextureData and LoadImage in Unity. Or is the difference simply that LoadRawTextureData works with compressed textures whilst LoadImage can't?
Since I was asked: I'm developing for iOS, iPadOS, Android, and WebGL, but I'm looking for a general answer that will help me research each platform further later.
The use case is a user uploading multiple high-res images to the server, and then a mobile client downloading these back again.
We're using the HoloLens' locatable camera (in Unity) to perform a number of image recognition tasks. We'd like to utilize the mixed reality capture feature (MRC) available in the HoloLens developer portal so that we can demo our app, but MRC crashes because we're hogging the camera in Photo Mode.
Does anyone have a good workaround for this? We've had some ideas, but none of them are without large downsides.
Solution: Put your locatable camera in Video Mode so that you can share the video camera with MRC.
Downside: Video Mode only allows us to save the video to disk, but we need realtime access to the buffer in memory (the way photo mode gives us access) so that we can do our detection in realtime.
Solution: Capture the video in a C++ plugin, and pass the frame bytes to Unity. This allows MRC to work as expected.
Downside: We lose the 'locatable' part of the 'locatable camera' as we no longer get access to the cameraSpaceToWorldSpace transformation matrix, which we are utilizing in our UI to locate our recognized objects in world space.
Sub-solution: recreate the locatable camera view's transformation matrix yourself.
Sub-downside: I don't have any insight into how Microsoft creates this transformation matrix. I imagine it involves some hardware complexities, such as accounting for lens distortions. If someone can guide me to how this matrix is created, that might be one solution.
Solution: Turn off object recognition while you create the MRC, then turn it back on when you're done recording.
Downside: Our recognition system runs in real time, n times per second. There would be no way to capture the recognitions on video.
We ended up creating a plugin for Unity that uses Microsoft's Media Foundation to get access to the video camera frames. We open sourced it in case anyone else runs into this problem.
The plugin mimics Unity's VideoCapture class so that developers will be able to easily understand how to implement it.
Hopefully this is helpful to a few.
I tried to track motion in MATLAB by following this tutorial (http://www.mathworks.com/help/vision/examples/motion-based-multiple-object-tracking.html) and it works fine, but it uses a video file as its source.
I want to know whether it's possible to do the same motion tracking in real time, using a camera as the source.
Everything is possible; just please try to find some material by yourself before asking here.
I think you may find the information you need in this link:
http://www.matlabtips.com/realtime-processing/
Alternatively, you could of course just store the camera output as a (very short) video and continuously analyse that instead.
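If it helps, here is a rough sketch of that record-then-analyse approach, assuming MATLAB's USB webcam support (R2014a or later) is available; the file name and clip length are arbitrary choices, and the tracking step is a placeholder for the code from the tutorial:

cam = webcam();                        % connect to the first detected webcam
writer = VideoWriter('clip.avi');      % 'clip.avi' is just an example file name
open(writer);
for k = 1:60                           % grab roughly two seconds of frames
    writeVideo(writer, snapshot(cam));
end
close(writer);
clear cam;                             % release the camera

reader = VideoReader('clip.avi');      % now analyse the clip as a normal video
while hasFrame(reader)
    frame = readFrame(reader);
    % ... run the tutorial's detection and tracking steps on 'frame' ...
end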
As of release R2014a, MATLAB includes support for USB webcams. If you have an older version, or if you want to use a high-end camera, you would need the Image Acquisition Toolbox.
Once you are able to get frames from the camera, you can reuse almost all of the code in the multiple object tracking example. You would only need to rewrite the readFrame function with code to get a frame from the camera.
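For example, here is a minimal sketch of that swap, assuming the USB webcam support in R2014a or later; the loop body is a placeholder for the detection and tracking functions defined in the example:

cam = webcam();                 % pass a name from webcamlist if you have several cameras

while true
    frame = snapshot(cam);      % stands in for reading a frame from the video file
    % ... run the example's detection, prediction, and track-update steps on 'frame' ...
    imshow(frame);              % or reuse the example's display code
    drawnow;
end

clear cam;                      % release the camera once you stop the loop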
I am working on a project that requires a webcam and MATLAB. I have a Logitech webcam, and I don't know whether I can talk to it through MATLAB. I'm trying to work with images and image processing, so I just want to know whether there is a way to find out if the webcam is compatible with MATLAB, or whether I need another type of webcam to get the job done. If I do need another one, it would be helpful if you could suggest one that is cheap and widely available.
Thank you.
I've used several Logitech webcams with the Image Acquisition Toolbox on Windows. You'll find a list of supported hardware here.
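If you want to check compatibility directly from MATLAB (assuming the Image Acquisition Toolbox is installed), something like this should tell you whether the toolbox can see your Logitech camera; the 'winvideo' adaptor and device ID 1 are the usual defaults on Windows, not something specific to your setup:

info = imaqhwinfo('winvideo')       % lists the devices the toolbox detects

vid = videoinput('winvideo', 1);    % device ID 1 is an assumption; check info.DeviceIDs
img = getsnapshot(vid);             % grab a single frame into the workspace
imshow(img);
delete(vid);                        % release the device when you're done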