Changing parameters during robot learning using simulators (Webots, Gazebo, etc.) - simulation

I am searching for a simulator for my robot learning research.
In the learning process, I need to change parameters of both the environment (friction coefficients, terrain height in the world) and the robot itself (mass, inertia).
How do simulators like Gazebo and Webots support this?
(Another question: besides the physics engine, I also need realistic visuals for computer-vision-aided algorithms.
Is there any simulator that provides both?)

Webots lets you easily change any parameter of a simulation (including friction coefficients) from a supervisor program while it is running. Moreover, it has a VR interface. I don't know about Gazebo.
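As a minimal sketch of that approach (assuming a Python supervisor controller, a robot whose DEF name is MY_ROBOT with a Physics node, and a ContactProperties node DEF'd FLOOR_CONTACT; these names are placeholders, not part of the original answer), changing mass and friction through the Webots Supervisor API could look like this:

    from controller import Supervisor

    supervisor = Supervisor()
    timestep = int(supervisor.getBasicTimeStep())

    # Reach into the robot's Physics node via its DEF name and change its mass.
    robot_node = supervisor.getFromDef("MY_ROBOT")
    physics_node = robot_node.getField("physics").getSFNode()
    physics_node.getField("mass").setSFFloat(2.5)  # kg

    # Friction is defined in a ContactProperties node (here assumed to be DEF'd FLOOR_CONTACT).
    contact = supervisor.getFromDef("FLOOR_CONTACT")
    contact.getField("coulombFriction").setMFFloat(0, 0.8)

    # Step the simulation; fields can be changed again between learning episodes.
    while supervisor.step(timestep) != -1:
        pass

The same field-editing mechanism applies to other parameters (translation, inertia, terrain geometry, and so on), since the supervisor can read and write any field of any node in the scene tree.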

Related

SUMO simulator for urban traffic simulation

For urban traffic simulation, we can use the SUMO simulator coupled with other simulators such as OMNeT++, MATLAB, or NS-2/3.
I know SUMO models the mobility, and the other simulators coupled to it handle the communication protocols or networks.
At the same time, it is possible to simulate a VANET using only MATLAB.
What is the difference between these approaches (SUMO coupled with another tool versus MATLAB alone)?
How can we decide which is better?
Thank you
It really depends on how much influence the traffic situation has on your scenario. If you are just interested in checking whether your protocol works even if two vehicles drive at 200 km/h in opposite directions, but there is no interaction with other vehicles, you do not need SUMO. But if your scenarios involve jams or complex junctions and you want (more or less) realistic trajectories for interacting vehicles, you are better off with a traffic simulation like SUMO (especially if you want to run real-world scenarios, importing data from OpenStreetMap etc.).
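If you do go the SUMO route, the usual way to couple it with other tools is SUMO's TraCI interface. A minimal sketch using the Python traci bindings (the configuration file name is a placeholder, and the hand-off to a network simulator is only indicated in comments) might look like this:

    import traci

    # Start SUMO as a subprocess and attach to it via TraCI.
    # "scenario.sumocfg" is a placeholder for your own SUMO configuration file.
    traci.start(["sumo", "-c", "scenario.sumocfg"])

    while traci.simulation.getMinExpectedNumber() > 0:
        traci.simulationStep()  # advance the traffic simulation by one step
        for veh_id in traci.vehicle.getIDList():
            x, y = traci.vehicle.getPosition(veh_id)
            speed = traci.vehicle.getSpeed(veh_id)
            # A coupled network simulator (OMNeT++, NS-3, ...) or a MATLAB model
            # would be fed these positions/speeds as its mobility input here.

    traci.close()

Veins automates exactly this kind of coupling for OMNeT++, so in that setting you rarely have to write the loop yourself.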

Aligning the OMNeT++ clock with the system clock

I'm trying to integrate OMNeT++ with a 3D robot simulator, and this is roughly what I'm picturing.
There are a number of objects in the robot simulator, and they communicate with each other using 802.11, which will be simulated by OMNeT++. Each node in OMNeT++ corresponds to an object in the robot simulator, and an object's movement will be synchronized with its corresponding node in OMNeT++.
But since OMNeT++ is a discrete event simulator, I need to deal with the clock mismatch between OMNeT++ and the robot simulator.
I know OMNeT++ has the cRealTimeScheduler class for synchronizing the simulation clock to the wall clock, but I'm not sure if this will do what I want.
I'm a noob when it comes to network simulation, so I want to know if this is even possible. Does using the cRealTimeScheduler class take care of clock synchronization, or do I need to take a different tack (a different scheduler, or even a different simulator)?
Any help will be greatly appreciated. Thank you.
If the robot simulator itself runs in real time, then you are fine with the cRealTimeScheduler approach: cRealTimeScheduler synchronizes the simulation clock with wall-clock time, so the two simulators will be implicitly synchronized as well.
If the robot simulator has its own simulation time (i.e. it can run faster than real time), then you should create your own scheduler class that synchronizes the two simulations. This is called co-simulation, where two simulators run in tandem. Veins (SUMO + OMNeT++) does this as well: SUMO (a car traffic simulator) and OMNeT++ (a network simulator) work together.
What you are trying to achieve is possible. I'm not familiar with the robot simulator part, but as long as it communicates with messages at discrete time points and you can get the simulation time from it, you should be fine.
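As a rough, language-agnostic sketch of the co-simulation idea (not actual OMNeT++ or robot-simulator API; every function name below is a hypothetical placeholder), a lock-step synchronization loop could look like this:

    # Hypothetical lock-step co-simulation loop: both simulators advance in fixed
    # increments of simulated time and exchange state at each synchronization point.
    STEP = 0.01  # seconds of simulated time per round (an arbitrary assumption)

    def co_simulate(network_sim, robot_sim, end_time):
        t = 0.0
        while t < end_time:
            robot_sim.advance_to(t + STEP)               # placeholder robot-simulator call
            positions = robot_sim.get_node_positions()   # node positions for the mobility model
            network_sim.set_node_positions(positions)    # placeholder bridge into the network sim
            network_sim.advance_to(t + STEP)             # process all network events up to t + STEP
            for msg in network_sim.delivered_messages(): # packets that arrived in this interval
                robot_sim.deliver(msg)                   # hand them back to the robot simulator
            t += STEP

In OMNeT++ terms, a custom scheduler essentially plays the role of network_sim.advance_to above: it decides when the event loop may take the next event, blocking until the other simulator has caught up.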

Is it possible to develop a virtual world that is accessible via both a PC and a virtual reality headset?

For example, if you have a virtual reality headset, you can interact with this virtual world in VR (e.g. via WebVR); however, if you don't have a VR headset and/or WebVR compatibility, can you still access and explore this virtual world from a regular browser (like RuneScape) and interact with other characters, whether they are in VR or on the web, in the same virtual world?
The A-Frame framework handles that for you automatically, or you can roll it yourself, if you're using another framework. Either way, the different control schemes require a fair amount of thought.
You could also take a look at React 360 (https://facebook.github.io/react-360/), a WebVR-based framework from Facebook that can handle most 3D media out of the box. It performs quite well and has the advantage of progressive enhancement, i.e. if you view it from a desktop/tablet you get a 2D experience, and if you are on a 3D-capable device you get a full VR experience.
It's also cross-platform, so it will run on Android/iOS/Windows/macOS/Oculus/Vive. Samples are included with it, which should be enough to give you an idea of its capabilities.
Depending on the complexity of the game you are trying to develop and the graphical control required, A-Frame is another option to look at.

Newbie: Basic Communication using Simulink and USRP2 devices

I would like to start a semester project related to MATLAB Simulink and USRP devices. I am new to this field and studying it regularly...
The first step of setting up the devices is complete, and now I would like to check whether both devices can communicate properly. For this reason, can anyone suggest a simple communication module...
Anything would be OK to start with, e.g. sending text, images, voice, video, etc.
Regards
I suggest you take a look at the Communications System Toolbox in MATLAB:
USRP® Support Package from Communications System Toolbox
There seem to be some code snippets for Simulink available as well.
BR
Magnus

External device input

I am looking into the best method for getting data from an external device (custom-built hardware) and for intercepting and processing that data (programming language / tool): the cheapest, the easiest, and with the least learning curve.
Background:
I am a web dev.
The external device will involve switches, motion detection, and velocity detection.
Programming language: Delphi (which I don't know)? C# (which I know from web dev)? Or something else?
Anyone done anything like this before? Got any advice?
Any and all information is appreciated.
D
The easiest solution might be to use an Arduino.
It's:
cheap (~$30)
easy to program
easy to connect to your PC (it uses a USB cable which emulates a serial connection)
backed by a HUGE community with tons of tutorials for doing whatever you want
Here is an example of how to control an LED using C#.
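The original answer links to a C# example (not reproduced here). As a hedged sketch of the same PC-side idea in Python with pyserial (the port name, baud rate, and the assumption that the Arduino prints one reading per line are all placeholders), reading the switch/motion data the Arduino sends over the emulated serial port could look like this:

    import serial  # pyserial package

    # Port name and baud rate are assumptions; on Windows the port would be e.g. "COM3".
    ser = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

    while True:
        line = ser.readline().decode("ascii", errors="ignore").strip()
        if line:
            # Assuming the Arduino sketch prints one sensor reading per line,
            # e.g. "motion:1" or "velocity:0.42"; parse and process it here.
            print(line)

The same serial stream can just as easily be read from C# (System.IO.Ports.SerialPort), so you can stay in the language you already know.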