I was wondering if there is a way on a Raspberry Pi to track people and then drive servos through the GPIO pins so that something turns to face their direction. I know it is possible to do the tracking with Simulink, but I am not sure how to act on the results. Thanks for any responses!
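For the "act on results" part, here is a rough Python sketch of nudging a pan servo from the GPIO with RPi.GPIO. It assumes you already have some detector (camera plus OpenCV, a Simulink model output, etc.) that reports how far the person is from the centre of the frame; `get_person_offset()` below is a hypothetical placeholder for that step, and the pin number and duty-cycle mapping are assumptions to adjust for your servo.

```python
# Sketch: turn a pan servo toward a detected person on a Raspberry Pi.
import time
import RPi.GPIO as GPIO

SERVO_PIN = 18          # BCM pin wired to the servo signal line (assumption)

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)
pwm = GPIO.PWM(SERVO_PIN, 50)   # standard hobby servos expect ~50 Hz
pwm.start(7.5)                  # ~7.5% duty cycle is roughly centre for many servos

def angle_to_duty(angle_deg):
    """Map 0..180 degrees to roughly 2.5..12.5 % duty cycle (servo dependent)."""
    return 2.5 + (angle_deg / 180.0) * 10.0

def get_person_offset():
    """Hypothetical: return the tracked person's horizontal offset, -1.0 .. 1.0."""
    return 0.0

try:
    angle = 90.0
    while True:
        offset = get_person_offset()
        angle = max(0.0, min(180.0, angle + offset * 5.0))  # simple proportional step
        pwm.ChangeDutyCycle(angle_to_duty(angle))
        time.sleep(0.05)
finally:
    pwm.stop()
    GPIO.cleanup()
```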
I need to be able to control 2 motors simultaneously using VESC for an RC car. I'm currently using an STM32F407 and VESC firmware version 6.00.58 to do this. Right now, I can get one motor at a time to spin, but I can't figure out how to get both to spin simultaneously at different speeds or directions.
The firmware has a flag for enabling dual-motor support, and we can control two motors, but not simultaneously and independently (i.e., both running at the same time, each with its own speed and direction). Based on that, I designed around the STM32F407 using the VESC6_Plus schematic:
VESC 6 Plus
I had to remap some of the pins to line up with the timers; there was only one pin assignment that allowed connecting two motors. The design works beautifully, and I have no problem driving the motors independently; I am only having problems getting the software to run them simultaneously while controlling speed and direction independently.
Has anyone done this successfully, or does anyone have an idea of how to do it?
Basically, I am working on a mixed reality experience using the HoloLens 2 and Unity, where the player has several physical objects they need to interact with, as well as virtual objects. One of the physical objects is a gun controller that has an IMU to detect acceleration and orientation. My main challenge is this: how do I get the physical object's position in Unity, in order to accurately fire virtual projectiles at a virtual enemy?
My current idea is to have the player position the physical weapon inside a virtual bounding box at the start of the game. I can then track the position of the virtual box through collision with the player's hands when they pick up the physical controller. Does OnCollisionEnter, or a similar method, work with the player's hands? (see attached image)
I am also looking into the use of spatial awareness / image recognition / pose estimation to accomplish this task, as well as researching the use of a tracking base station to determine object position (similar to HTC Vive / Oculus Rift).
Any suggestions, resources, or assistance would be greatly appreciated. Thank you!
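For the projectile part specifically, once you have a position for the controller (e.g. from the bounding-box calibration described above) and an orientation from its IMU, spawning the shot is just geometry. Here is a quick Python sketch of that step only, not Unity or HoloLens API; the position, quaternion, and forward axis below are placeholders.

```python
# Sketch of the geometry: given the controller's calibrated position and its IMU
# orientation as a unit quaternion, compute the ray along which to spawn a projectile.

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z): q * v * q_conjugate, expanded."""
    w, x, y, z = q
    vx, vy, vz = v
    tx = 2.0 * (y * vz - z * vy)
    ty = 2.0 * (z * vx - x * vz)
    tz = 2.0 * (x * vy - y * vx)
    return (
        vx + w * tx + (y * tz - z * ty),
        vy + w * ty + (z * tx - x * tz),
        vz + w * tz + (x * ty - y * tx),
    )

# Position obtained once at calibration time (e.g. from the bounding-box step),
# orientation streamed continuously from the gun controller's IMU.
gun_position = (0.1, 1.2, 0.4)              # metres, world space (placeholder)
gun_orientation = (0.966, 0.0, 0.259, 0.0)  # unit quaternion (w, x, y, z), ~30 deg about Y

muzzle_forward = quat_rotate(gun_orientation, (0.0, 0.0, 1.0))  # assuming local +Z is forward
print("spawn projectile at", gun_position, "along", muzzle_forward)
```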
EDIT UPDATE 11/30/2020:
Hernando commented below suggesting QR codes; assume for this project that we are not allowed to use QR codes, and that we want orientation data that is as precise as possible. Thanks Hernando!
For locating the object, a QR code would definitely be my recommendation; it is quick for the HL2 device to find. I have also seen the QR approach used in multiple venues for location-based VR experiences like the one being described here, with the QR code simply sitting on top of the device.
Otherwise, if the controller in question supports Bluetooth, you could possibly pair it with the device, and if the controller has location information it could transmit its position. Based on everything above, if QR codes are out of the equation this would be a custom solution and highly dependent on the controller's capabilities. I have seen some controller solutions start the user experience with something like touching the floor to get an initial reference point, or by always picking up the gun from a specific location in the real world, as some location-based experiences do before starting.
Good luck with the project; this is just my advice from working with VR systems.
Are you allowed to attach multiple QR codes to the controller? If so, we recommend you use QR code tracking to assist in locating your controller. If you prefer to use image recognition, object detection, or other technologies, you will need an Azure service or a third-party library; for more information, please see the Computer Vision documentation.
I've been trying to use a Lego EV3 with a gyro sensor to make it drive straight. I've followed many videos, but they don't seem to work. Here's my code
To start off, your multiplier seems a little off; I usually use something like 2.
You might want to refer to the image in the link below.
Gyro programming
I usually use this as my base gyro programming. If you understand My Blocks, I am basically using that in my program. All I have to do is add in the values for direction, speed, and distance.
Feel free to ask me if you need further help!
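For reference, the "multiplier" mentioned above is just the proportional gain on the gyro error. If you ever script the EV3 in Python with Pybricks instead of the block environment, a minimal version of the same idea looks like the sketch below; the ports, wheel dimensions, gain, and sign convention are assumptions you will need to tune for your robot.

```python
#!/usr/bin/env pybricks-micropython
# Proportional gyro correction: steer against the gyro error with a fixed gain.
from pybricks.ev3devices import Motor, GyroSensor
from pybricks.parameters import Port
from pybricks.robotics import DriveBase
from pybricks.tools import wait

left = Motor(Port.B)
right = Motor(Port.C)
gyro = GyroSensor(Port.S2)
robot = DriveBase(left, right, wheel_diameter=55.5, axle_track=104)

GAIN = 2          # degrees/second of turn per degree of gyro error (the "multiplier")
SPEED = 150       # forward speed in mm/s
TARGET_MM = 1000  # how far to drive

gyro.reset_angle(0)
robot.reset()
while robot.distance() < TARGET_MM:
    error = 0 - gyro.angle()          # how far we have drifted from straight
    robot.drive(SPEED, GAIN * error)  # turn back toward zero heading (flip sign if it diverges)
    wait(10)
robot.stop()
```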
I want to figure out whether a user is standing still, walking, or running, using the iPhone. I'm not trying to implement a pedometer; I just want to know roughly whether someone is moving briskly, slowly, or not at all. I don't need mph or anything like that.
I think the accelerometer may be able to do this for me, but I was wondering if anyone knows of any tutorials or example code that might point me in the right direction?
Thanks to all who reply.
The accelerometer won't do you any good here - it will only capture changes in velocity.
Just track the current location periodically and calculate the speed.
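A rough sketch of that idea in Python is below; the fix interval, coordinates, and speed thresholds are made up, so tune them for your app. (On iOS, Core Location's CLLocation also exposes a speed property you can read directly.)

```python
# Take two timestamped latitude/longitude fixes and classify roughly by ground speed.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def classify(speed_mps):
    if speed_mps < 0.3:
        return "still"
    if speed_mps < 2.5:      # rough walking range (placeholder threshold)
        return "walking"
    return "running"

# Two hypothetical fixes taken 5 seconds apart:
d = haversine_m(37.33182, -122.03118, 37.33190, -122.03110)
speed = d / 5.0
print(classify(speed), f"({speed:.2f} m/s)")
```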
There are no hard thresholds for walking vs. running motion, so you will have to experiment a bit. The AccelerometerGraph sample code should get you started on how to get and interpret accelerometer data.
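One way to run that experiment, sketched in Python with placeholder cut-offs: classify a short window of samples by how much the acceleration magnitude varies, then adjust the thresholds against data you record yourself (e.g. with the AccelerometerGraph sample).

```python
# Classify a ~1 second window of accelerometer samples by magnitude variance.
import math

def magnitude(sample):
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def classify_window(samples):
    """samples: list of (x, y, z) accelerometer readings in g."""
    mags = [magnitude(s) for s in samples]
    mean = sum(mags) / len(mags)
    variance = sum((m - mean) ** 2 for m in mags) / len(mags)
    if variance < 0.01:      # placeholder cut-offs; tune on real recordings
        return "still"
    if variance < 0.2:
        return "walking"
    return "running"

# Hypothetical window of readings while the phone sits on a table:
window = [(0.01, -0.02, 1.00), (0.00, -0.01, 0.99), (0.02, -0.02, 1.01)] * 10
print(classify_window(window))
```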
The accelerometer is good, but if the user has an iPhone 4 or iPad 2 you should use the gyroscope.
CMMotionManager and Event Handling Guide - Motion Events
Apple's documentation is the best example you can get!
People have a different bounce in their step between walking and running, which can be measured with the accelerometer, but this differs between individuals (what shoes they are wearing, what surface they are on, what part of the body the iPhone is attached to, etc.), and the motion can probably be imitated by shaking the iPhone just right while standing still.
Experiment by recording the two types of acceleration profiles, and then use some sort of pattern matching to pick the most likely profile candidate from the current recorded acceleration data.
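A toy version of that matching step in Python, assuming you have already recorded one magnitude profile per activity; the feature choice and all numbers are placeholders.

```python
# Summarise each recorded acceleration profile by a couple of features
# and pick the nearest one to the current recording.
import math

def features(mags):
    """mags: list of acceleration magnitudes for one recording."""
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return (mean, var)

def nearest_profile(current, profiles):
    cur = features(current)
    best, best_d = None, float("inf")
    for label, recording in profiles.items():
        d = math.dist(cur, features(recording))   # Euclidean distance in feature space
        if d < best_d:
            best, best_d = label, d
    return best

# Hypothetical recordings made while walking and running:
profiles = {
    "walking": [1.0, 1.1, 0.9, 1.2, 1.0, 0.8],
    "running": [1.0, 1.6, 0.5, 1.8, 0.4, 1.5],
}
print(nearest_profile([1.0, 1.2, 0.9, 1.1, 1.0, 0.9], profiles))
```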
I'm creating an iPhone application.
My application needs to detect movements using the accelerometer and report which movement the user performed.
In practice, I have to constantly check whether the readings coming from the accelerometer match the readings saved for my recorded movement.
My problem is that implementing this is not simple, because I know there are many factors that make it difficult.
Does anyone know of a tutorial or guide they could suggest?
I also welcome suggestions...
Thank you very much
You can use this paper as a starting point.
It shows detection of a user walking, running, walking up or down stairs, or standing still. Even though it's based on Android, the principle will be the same.
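On the "check that the readings are equal" point from the question: live readings will never be exactly equal to the saved ones, so compare them with a tolerance instead. Dynamic time warping is one common option, since it also absorbs small differences in how fast the movement is performed; here is a toy Python sketch with made-up recordings and a placeholder threshold.

```python
# Compare a live accelerometer window with a saved movement using DTW.
import math

def dtw(a, b):
    """DTW distance between two sequences of (x, y, z) accelerometer samples."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(a[i - 1], b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[n][m]

saved_movement = [(0.0, 0.0, 1.0), (0.3, 0.0, 1.1), (0.6, 0.1, 1.3), (0.2, 0.0, 1.0)]
live_window = [(0.0, 0.0, 1.0), (0.2, 0.0, 1.1), (0.5, 0.1, 1.2), (0.6, 0.1, 1.3), (0.1, 0.0, 1.0)]

THRESHOLD = 0.5   # placeholder; tune against your own recordings
print("movement recognised" if dtw(saved_movement, live_window) < THRESHOLD else "no match")
```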