Is it possible to run ARCore on Raspberry Pi 3?

Is it possible to run ARCore on a Raspberry Pi 3? It seems it is possible to run Android on a Raspberry Pi, but it's not clear whether I would be able to run ARCore on it.

No. The Raspberry Pi has no camera and no IMU, and even if you added them yourself, it is highly unlikely that Google would provide a calibration for that combination.
This calibration is needed to interpret the IMU measurements and camera features and translate them into translation and rotation.
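To give a sense of what one piece of that calibration is, here is a toy sketch (all values made up) of how the camera intrinsic matrix, which calibration determines, maps a 3D point to a pixel; with the wrong K, every tracked feature is misinterpreted:

    import numpy as np

    # Hypothetical intrinsics from a per-device calibration:
    # fx, fy are focal lengths in pixels, (cx, cy) the principal point.
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    # A 3D point in the camera frame, in metres.
    point_cam = np.array([0.1, -0.05, 2.0])

    # Pinhole projection: multiply by K, then divide by depth.
    uv = K @ point_cam
    u, v = uv[0] / uv[2], uv[1] / uv[2]
    print(f"Feature should appear at pixel ({u:.1f}, {v:.1f})")

ARCore ships values like these (plus IMU noise models and the camera-to-IMU transform) only for certified devices, which is part of why an uncertified board cannot simply run it.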

Related

Stream video from Raspberry Pi 4 while another program uses the Pi camera as well

I have a Raspberry Pi 4 with a camera module and a pan-tilt HAT.
I've made a project which, when started, uses the feed from the RPi camera, detects a face, and centers on it. If the person moves, the camera tracks them.
When I run the .py file through the terminal, it works.
Now I want to use it with my PC, so I need to run my project in the background AND simultaneously stream the feed to my PC somehow.
From the methods I searched online, I saw that it's possible to use Flask and get a URL to use as an IP camera.
My question is: is it possible to stream the camera feed while my project runs and tracks the face?
Thank you.
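One common pattern for this is to keep the tracking loop as the sole owner of the camera and have it publish each processed frame to a small Flask app that serves an MJPEG stream. A minimal sketch, assuming OpenCV frames; tracking_loop stands in for your existing face-tracking code:

    import threading
    import time

    import cv2
    from flask import Flask, Response

    app = Flask(__name__)
    latest_jpeg = None           # most recently encoded frame
    lock = threading.Lock()

    def tracking_loop():
        """Stand-in for your existing face-tracking code."""
        global latest_jpeg
        cap = cv2.VideoCapture(0)
        while True:
            ok, frame = cap.read()
            if not ok:
                continue
            # ... detect the face and drive the pan-tilt HAT here ...
            ok, jpeg = cv2.imencode(".jpg", frame)
            if ok:
                with lock:
                    latest_jpeg = jpeg.tobytes()

    def mjpeg():
        # Yield the newest frame forever, in multipart/MJPEG format.
        while True:
            with lock:
                data = latest_jpeg
            if data is not None:
                yield (b"--frame\r\n"
                       b"Content-Type: image/jpeg\r\n\r\n" + data + b"\r\n")
            time.sleep(0.03)     # ~30 fps cap; avoids a busy loop

    @app.route("/stream")
    def stream():
        return Response(mjpeg(),
                        mimetype="multipart/x-mixed-replace; boundary=frame")

    if __name__ == "__main__":
        threading.Thread(target=tracking_loop, daemon=True).start()
        app.run(host="0.0.0.0", port=5000)

Point the PC's browser (or VLC) at http://<pi-address>:5000/stream. Because a single process owns the camera, the tracker and the stream never compete for the device.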

Raspberry Pi camera freezing

I have a Raspberry Pi Compute Module 4 connected to a Raspberry Pi V2.1 camera. When I try to get camera output, the kernel freezes.
my camera output
Also, when trying to read from the camera using OpenCV, I get this error:
VIDIOC_STREAMON: Invalid argument
Error: Could not open video
Has anyone had this experience? What is the solution?
Thank you
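If the kernel itself is not locking up, it can help to open the device explicitly through OpenCV's V4L2 backend and check each step, since VIDIOC_STREAMON: Invalid argument usually means the driver rejected the negotiated format rather than an OpenCV bug. A minimal sketch, assuming the camera appears as /dev/video0 (on the legacy camera stack this needs sudo modprobe bcm2835-v4l2 first):

    import cv2

    # Assumes the camera is exposed as /dev/video0, e.g. via the
    # legacy-stack module: sudo modprobe bcm2835-v4l2
    cap = cv2.VideoCapture(0, cv2.CAP_V4L2)
    if not cap.isOpened():
        raise SystemExit("Could not open /dev/video0 - driver/stack problem")

    # Ask for a modest, widely supported mode before streaming.
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

    ok, frame = cap.read()
    if ok:
        print("Captured frame with shape", frame.shape)
    else:
        print("cap.read() failed - the driver refused this format/mode")
    cap.release()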

Unity. Move player when mobile moves (Android VR)

I'm developing VR using the Google Cardboard SDK.
I want to move in the virtual environment when I walk in the real world, like this: https://www.youtube.com/watch?v=sZG5__Z9pzs&feature=youtu.be&t=48
Is it possible to make a VR application like that for Android, maybe using the accelerometer sensor? How can I implement this using Unity?
I tried recording the accelerometer sensor while walking with the smartphone; here is the result: https://www.youtube.com/watch?v=ltPwS7-3nOI [I think the accelerometer values are quite random -___- ]
Actually it is not possible with only a mobile device:
You're up against a fundamental limitation of the humble IMU (the primary motion sensor in a smartphone).
I won't go into detail, but basically you need an external reference frame when trying to extract positional data from acceleration data. This is the topic of a lot of research right now, and it's why VR headsets that track position, like the Oculus Rift, have external tracking cameras.
Unfortunately, what you're trying to do is impossible without using the camera on your phone to track visual features in the scene and use those as the external reference point, which is a hell of a task better suited to a lab full of computer vision experts.
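To see the limitation concretely, here is a toy sketch (illustrative numbers, not real sensor data) of naive dead reckoning: double-integrating accelerometer samples that carry even a tiny constant bias makes the position estimate diverge quadratically, which is exactly the drift an external reference frame is needed to correct:

    import numpy as np

    dt = 0.01                      # 100 Hz IMU samples
    t = np.arange(0, 60, dt)       # one minute of data
    true_accel = np.zeros_like(t)  # the phone is actually standing still

    # Tiny constant bias plus measurement noise, both optimistic values.
    measured = true_accel + 0.02 + np.random.normal(0, 0.05, t.size)

    # Naive dead reckoning: integrate acceleration twice.
    velocity = np.cumsum(measured) * dt
    position = np.cumsum(velocity) * dt

    print(f"Apparent displacement after 60 s: {position[-1]:.1f} m")

Even a 0.02 m/s^2 bias, far smaller than what a phone IMU shows while you walk, puts the estimate tens of metres off within a minute.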
Another possible but difficult approach:
It may be possible if you connect the device to the internet and track its position via satellite (GPS, Google Maps, or something like that), but that would be a very hard thing to do.

Alexa Raspberry Pi

I have a Raspberry Pi Model B+ and I was thinking of integrating it with the Alexa Voice Service. I managed to get my Raspberry Pi and the Alexa Voice Service working up to the point where Alexa says hello. To achieve this I also used a PC108 media USB external sound card, so I'm getting both input and output through my plug-in microphone and my mini-jack audio output to a speaker. The thing is, something is still missing to make it work. What do I have to do to make Alexa listen?
Thank you in advance.
At re:Invent 2016 they had a workshop on doing this. Take a look at the slides from the session and the workshop instructions. We used a simple USB microphone, and sound output is built into the Pi. The sample app is still being updated, so it should be good to go.
This was with a Pi 3, but the basics should still be the same.
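Before digging into the AVS sample itself, it is worth confirming the Pi is actually capturing audio from the sound card. A minimal sketch, assuming the sounddevice package (pip install sounddevice) and that the microphone shows up as an input device:

    import numpy as np
    import sounddevice as sd

    print(sd.query_devices())  # find the USB sound card's input index

    fs = 16000                 # AVS expects 16 kHz, 16-bit mono audio
    recording = sd.rec(int(2 * fs), samplerate=fs, channels=1, dtype="int16")
    sd.wait()                  # block until the 2-second capture finishes

    # Speech during the capture should give a clearly non-zero peak.
    print("Peak level:", np.abs(recording).max())

If the device list shows the card but the peak stays near zero, the problem is ALSA input configuration rather than the Alexa sample app.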
You can also use PiCroft, which is an image of Mycroft, an open-source assistant: just burn it to an SD card and use it.
https://mycroft.ai/mycroft-now-available-raspberry-pi-image/
If you want to create skills: https://docs.mycroft.ai/skill.creation

Is it possible to use the depth camera of the Kinect with Windows IoT Core and Raspberry Pi 2?

I'm trying to create a new application for facial recognition using the Kinect v1, the Raspberry Pi 2, and Windows IoT Core. I did some research and found that I can use the media capture of the new Windows 10 update, but what I need to know is: is it possible to use the Kinect v1 with media capture, and if not, are there other solutions?
Yes, the Kinect v1 will work with Windows IoT Core. It also works with Raspbian or other Linux images if the RPi is your main focus.
Raspbian can load the Kinect driver as a kernel module (via modprobe), which gives you access to the video stream (color, NO depth) so you can use the Kinect like a webcam.
For depth, check out librekinect and look at the gspca_kinect module; the depth_mode flag should give you access to depth data from the Kinect.
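For the color-only webcam route on Raspbian, a minimal sketch, assuming the module mentioned above has been loaded (sudo modprobe gspca_kinect; librekinect's depth_mode=1 route is the one for depth frames):

    import cv2

    # Assumes `sudo modprobe gspca_kinect` has exposed the Kinect v1
    # as a standard V4L2 webcam (color only; depth needs librekinect's
    # depth_mode flag, as noted above).
    cap = cv2.VideoCapture(0)
    if not cap.isOpened():
        raise SystemExit("Kinect not visible as /dev/video0 - module loaded?")

    ok, frame = cap.read()
    if ok:
        cv2.imwrite("kinect_color.png", frame)
        print("Saved a", frame.shape, "frame from the Kinect")
    cap.release()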