Control two motors simultaneously with VESC - STM32

I need to be able to control 2 motors simultaneously using VESC for an RC car. I'm currently using an STM32F407 and VESC firmware version 6.00.58 to do this. Right now, I can get one motor at a time to spin, but I can't figure out how to get both to spin simultaneously at different speeds or directions.
The firmware has a flag for enabling dual-motor support, and with it I can control two motors, but not simultaneously and independently (both running at once, each with its own speed and direction). Based on that I designed around the STM32F407 using the VESC6_Plus schematic:
VESC 6 Plus
I had to remap some of the pins to line up with the timers; there was only one pin arrangement that allowed connecting two motors. The design works beautifully and I have no problem driving the motors individually; I'm just having problems getting the software to run them simultaneously with independent speed and direction control.
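For reference, this is roughly the control path I'm attempting. It's a minimal sketch assuming the stock firmware's dual-motor selector mc_interface_select_motor_thread() and the usual speed setpoint call; the exact names may differ in your firmware tree, so treat them as assumptions.

/* Minimal sketch: command both drives from one control loop.
 * Assumes HW_HAS_DUAL_MOTORS is set and that the firmware exposes
 * mc_interface_select_motor_thread(); verify against your 6.00.x tree. */
#include "mc_interface.h"

void set_dual_speeds(float erpm_left, float erpm_right) {
    mc_interface_select_motor_thread(1);     // route calls from this thread to motor 1
    mc_interface_set_pid_speed(erpm_left);   // signed ERPM: sign sets direction

    mc_interface_select_motor_thread(2);     // switch the same thread to motor 2
    mc_interface_set_pid_speed(erpm_right);  // independent speed and direction
}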
Has anyone done this successfully or has anyone got an idea on how to do it?

Related

How to have multiple mouse cursors, or simulate mouse input without moving my cursor, on the same computer?

I have some tasks on my computer that need to drive the mouse cursor automatically, like automated scripts. However, I only have one computer with two monitors, so I would like to work on one screen while my automated tasks run on the other, using one or more additional mouse cursors.
So my question is: is it possible to have multiple mouse cursors on a single computer, controlled from a programming language like C, C++, or Python? Alternatively, is it possible to simulate mouse events without moving my only mouse cursor? My operating system can be Windows 10 or Ubuntu 18.04 desktop.
I found this utility: https://www.mousemux.com/
It looks like a perfect fit for you. It pairs a mouse and keyboard to a user, and each user can work independently. However, it is still a beta version and doesn't work with all programs.
A mouse cursor is nothing more than a specific area on screen where you can perform actions; usually a pointer shows you where that active area is at any given moment.
The exception is a touch screen, where no cursor is visible because your finger touching the screen is the cursor position.
If you connect more than one device to control the cursor, only one pointer will be visible, but you might still want more than one user operating the computer. More details
In short: yes, it is possible to control more than one cursor at a time, and yes, you can run scripts or RPA tools to do that for you.
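To illustrate the second option on Windows: a common trick is to post mouse messages directly to a target window, which simulates a click there without touching the visible cursor. A minimal sketch, assuming a Notepad window as the hypothetical target; apps that read raw input or screen coordinates will ignore this.

/* Inject a click into one window without moving the visible cursor.
 * Only works for programs that respond to posted WM_* mouse messages. */
#include <windows.h>

int main(void) {
    HWND hwnd = FindWindowA(NULL, "Untitled - Notepad"); // example target window
    if (hwnd == NULL) return 1;

    LPARAM pos = MAKELPARAM(100, 100); // client coordinates (x=100, y=100)
    PostMessageA(hwnd, WM_LBUTTONDOWN, MK_LBUTTON, pos); // press
    PostMessageA(hwnd, WM_LBUTTONUP, 0, pos);            // release
    return 0;
}

On Ubuntu, the xdotool utility can target a specific window in a similar way.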

Ocean simulation in Blender vs. Unity3D

I want to make an ocean simulation that is physically accurate.
The height and speed of the waves should be controlled by the keyboard at runtime.
In the ocean, there needs to be a boat that either moves along a path or is controlled by the keyboard.
So far I have made this simulation in Blender:
https://youtu.be/LJ6ncxv-k7w
The problems are as follows:
1. There is no collision with the ocean
2. There are no controllers for the boat's movement
3. I am able to control the waves, but not at runtime
I thought about switching to Unity because, being a game engine, it obviously handles runtime interaction better. I do not want to use Blender's game engine, as its future is uncertain at this point.
After reviewing the various Unity water simulation plugins, I came to these conclusions:
1. The buoyancy is great in most of them, such as in Aquas and SUIMONO.
2. None of them seems to offer physically realistic collision with the boat.
3. They do offer wave height control, but not much else as far as wave properties go.
4. Some of the plugins can be combined to get closer to satisfactory results.
My question is:
Should I go with Unity completely?
It seems perfect for my user control needs, but the plugins are lacking in the collision aspect. I came across this video, but no tutorial: https://www.youtube.com/watch?v=T0D_vrYm4FQ
Even if there was one, how could I combine it with the plugins?
Is there a way to build the scene in Blender and then import it into Unity?
Would I be able to control the waves and boat after importing them?
Thank you very much for your time and knowledge.
If you really mean an ocean, I suggest you check out NVIDIA WaveWorks. It's a C library and doesn't have an official integration with Unity3D, but since you've come this far, I'd guess you have enough courage to try turning it into a usable plugin yourself.
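The plugin work itself is mostly a thin C boundary that Unity can P/Invoke. A minimal sketch of that pattern, with a hypothetical get_wave_height() standing in for the real WaveWorks calls (the actual API differs):

/* oceanplugin.c: native plugin boundary for Unity.
 * Unity side would declare:
 *   [DllImport("oceanplugin")] static extern float get_wave_height(float x, float z);
 * The sine ripple below is a placeholder; a real build would sample the
 * library's simulated displacement field here. */
#include <math.h>

#ifdef _WIN32
#define EXPORT __declspec(dllexport)
#else
#define EXPORT
#endif

EXPORT float get_wave_height(float x, float z) {
    return 0.5f * sinf(0.4f * x) * cosf(0.4f * z); // placeholder wave field
}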

Unity shows different physics behaviour when built for different devices

I am working on a roulette (casino-style) game project in Unity3D.
I am rotating a ball around a wheel, and the wheel is also rotating on its own axis, both in FixedUpdate.
I am using the transform.RotateAround function to rotate the ball around the wheel, and I am also decreasing the ball's speed in FixedUpdate. I assign the ball a random initial speed within a range so that it stops at a different position each time.
For testing purposes I kept the initial ball speed constant and checked in the Unity editor that every run stops on the same number.
I built the project for both Android and PC. The ball stops on the same number every time within each build, but the result differs between the two.
For example: the ball always stops on number 8 on Android and on number 20 on PC.
Can somebody please suggest ways to obtain the same result on different devices?
Why is this happening? Is Unity's physics behaviour different on different processors?
And please explain how to fix it.
Unity has a fixed time step, so that isn't the cause of the differences as one might expect. Physics simulations are incredibly complex things, so I'm not going to pretend that I know exactly why you're seeing differences. However, I would imagine it is to do with floating point precision differences between your computer and a much smaller phone processor.
One way to test this would be to run the simulation on another computer and compare the results with both of your current devices.
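To see how small rounding differences compound over a long run, here is a small illustration (not Unity code, just the underlying float behaviour): a speed value damped once per fixed step drifts measurably from a double-precision reference after many steps, and two devices that round even slightly differently diverge the same way.

/* Repeated damping accumulates rounding error; after enough steps the
 * single-precision value visibly departs from a double-precision reference. */
#include <stdio.h>

int main(void) {
    float speed_f = 1000.0f;  // single precision, as in a typical physics step
    double speed_d = 1000.0;  // double precision reference

    for (int step = 0; step < 100000; step++) {
        speed_f *= 0.9999f;   // per-FixedUpdate damping
        speed_d *= 0.9999;
    }
    printf("float: %.9f  double: %.9f  gap: %g\n",
           speed_f, speed_d, (double)speed_f - speed_d);
    return 0;
}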

How to prevent a large number of transitions in a Unity state machine?

I have a unity state machine with four states: idle left, idle up, idle down and idle right.
To transition between these states I had to create 12 transitions (the white arrows). This already seems unwieldy, but now I need to add 4 more states: running up, running down, running left and running right.
Does that mean I end up with 8 states and 24 transitions running between all of them? That seems very unwieldy to me. What if you need to change something later?
I know I can transition by code, but that doesn't seem to be the recommended way of working.
animator.Play("runningright");
What would be the recommended way to work with lots of states?
As @Uri Popov said, you should consider using "Blend Trees". They exist for exactly this purpose: they help blend between multiple similar animations. For example, walk and run animations are similar in that both depend on the character's movement speed.
Look at the following links to learn more about blend trees. They only cover the basics, but they will surely help you with your problem.
Unity - Manual: Blend Trees
Blend Trees - Unity Official Tutorials
When to use a blend tree vs state machine for animations (just another question on gamedev.stackexchange)

How to detect height of iPhone (for use in augmented reality game)?

I'm working on locating an iPhone device in 3D space.
I can use lat/long to detect physical location, I can use the magnetometer to figure out the direction they're facing, and I might be able to use the accelerometer to figure out how their device is oriented, but I can't figure out a way to get height of the device off the floor.
Specifically, I need to know if the user is squatting down or raising their hand toward the ceiling (a difference of about 2 meters / 6 feet).
I posted a more detailed description of what I'm trying to do on my blog: http://pushplay.net/blog_detail.php?id=36
I would love any suggestions as to how to even fake this sort of info. I really want the sort of interactivity and movement that would require ducking and bobbing, versus just letting someone sit back and angle the phone -- kind of the way people can "cheat" playing with a Wii...
The closest I could see you getting to what you're looking for is using the accelerometer/magnetometer as an inertial tracker. You'd have to calibrate the user's initial position on startup to a "base" position, then continuously sample the sensors on a background thread to build a movement model. This post talks about boosting the default sample rate of the accelerometer functions so that you can get a pretty fine-grained picture of the user's movements.
I'm not sure this will solve your concern about people simply angling the device to produce the desired action, but you will have to strike a balance between being too strict in interpreting movements and allowing for differences in movement.
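The core of such an inertial tracker is just double integration of the gravity-removed vertical acceleration. A platform-agnostic sketch (the sample feed and units are assumptions; on the iPhone you would supply user acceleration at a fixed rate):

/* Dead-reckon height from vertical acceleration samples relative to a
 * calibrated base position. Drift grows quickly, which is why frequent
 * re-zeroing against the base position is essential. */
typedef struct {
    float velocity; /* vertical velocity, m/s */
    float height;   /* meters above the calibrated base */
} HeightEstimate;

void height_step(HeightEstimate *est, float accel_z, float dt) {
    est->velocity += accel_z * dt;       /* integrate acceleration -> velocity */
    est->height   += est->velocity * dt; /* integrate velocity -> height */
    est->velocity *= 0.999f;             /* small leak to slow drift (tuning choice) */
}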
The CoreLocation stuff gives you elevation as well as lat/long, so you could potentially use that, although there are some significant problems with it:
Won't work well indoors (not a problem for Sat Nav, is a problem for games)
Your users would have to "calibrate" (probably by placing the phone on the floor) each location they use!
In fact, you'd need to start keeping a list of "previously calibrated locations"... which could vary hugely just in one house (eg multiple rooms and floors). Could get in the way of the game.
Can't be used on moving transport (trains, planes, automobiles... even walking) because the elevation changes so frequently.
Therefore I'd have thought that using the accelerometer as a proxy for height is a route substantially preferable to determining absolute elevation.
I am not intimately familiar with the iPhone, but this might require a hardware add-on (which you probably don't want). After thinking about it, the only way I know of is through light, or more specifically a laser: you shoot a laser at the floor and record the time it takes to come back. It's actually not a lot of work to put this hardware together, and I am sure the iPhone has connections for peripherals. Unless someone can trump me, I say there is no way to do this with an image alone.