How can I use a .URDF sensor on a .SDF robot in ROS and Gazebo?

Maybe it's a silly question, but I am a beginner.
I have the firmware of a flight controller called PX4; here it is: https://github.com/PX4/PX4-Autopilot
That firmware contains SITL Gazebo tools, so I can simulate in Gazebo some models that are included in the firmware in .sdf format. They are in Firmware/Tools/sitl_gazebo/models.
I also have a laser sensor, a TeraRanger Tower Evo, which has a description for Gazebo in .URDF and .XACRO; here it is: https://github.com/Terabee/teraranger_description
So the idea is to attach the sensor to one of those robots, but I can't find out how to do it.
I am not even able to spawn the sensor in Gazebo as a standalone model or read data from it (I think it is because I am not spawning it correctly). I wrote a launcher to spawn the URDF model; it is added to Gazebo, but the mesh does not appear and it does not do anything.
What occurred to me is to add laser sensors that behave as similarly as possible to the one I want to use, but only as a fallback in case I can't add the original one.
Thank you very much!

Answering my own question:
The problem was that the .URDF file I was trying to add to my robot didn't have a <gazebo> tag, so Gazebo was just skipping that part.
I used the .URDF from the repo, so I assumed it was OK and that the problem was that I didn't know how to add it, but the problem was actually in the repo's .URDF file.
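A quick way to catch this before spawning is to scan the URDF for `<gazebo>` extension elements. Here is a minimal sketch using only the Python standard library; the URDF contents below are a hypothetical one-link example, not the actual TeraRanger description:

```python
import xml.etree.ElementTree as ET

def list_gazebo_tags(urdf_text):
    """Return the 'reference' attribute of every <gazebo> element in a URDF string."""
    root = ET.fromstring(urdf_text)
    return [g.get("reference") for g in root.iter("gazebo")]

# Hypothetical minimal URDF: one link plus one <gazebo> extension for it.
urdf = """
<robot name="teraranger_demo">
  <link name="base_link"/>
  <gazebo reference="base_link">
    <sensor name="laser" type="ray"/>
  </gazebo>
</robot>
"""

print(list_gazebo_tags(urdf))  # ['base_link']
```

If the list comes back empty for a URDF you expect Gazebo to simulate sensors from, the Gazebo-specific tags are missing, which matches the silent-skip behavior described above.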

Related

Visualizing an object in Rviz?

I am a beginner to ROS. My problem is adding an object to RViz's scene, specifically a chessboard. I don't know where to start. And for the future, if I want to add other objects, what should I do?
Distro: Noetic
OS: Ubuntu 20.04
Thanks
I already have a chessboard design in SolidWorks 2016 and want to add this object in RViz.
You need the URDF exporter plugin.
http://wiki.ros.org/sw_urdf_exporter
You can access it via this link. This exporter helps you convert your object to a URDF file, which you can embed in your ROS package to visualize in Gazebo or RViz.

OTA LTE main.py update for GPy

In my internet research I often found how to update a main.py or the firmware with LoRa, but never how I can update my GPy over LTE.
My thought is to do something like this: first, in boot.py, set up the LTE connection and subscribe via MQTT to the topic where the new version of the program is published. Then the GPy should start the update with the new file and end up with a new main.py.
Is this even possible, or can I only update the main.py with LoRa? Also, what could such code look like? I have to say I am quite new to Pycom and have never used one before. What I can do is simple MicroPython stuff for the ESP32.
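The apply step of such an update can be sketched with standard-library calls only (the LTE and MQTT parts are Pycom-specific and omitted here; `apply_update` and `main_demo.py` are hypothetical names). The idea is to write the downloaded script to a temporary file first and then rename it over the old one, so a failed or truncated transfer can't leave a half-written main.py:

```python
import os

def apply_update(payload, target="main.py"):
    """Write the downloaded script to a temp file, then rename it over the
    old file so an interrupted transfer can't corrupt the target."""
    tmp = target + ".new"
    with open(tmp, "wb") as f:
        f.write(payload)
    os.rename(tmp, target)  # atomic replace on most filesystems

# Hypothetical usage: the payload would arrive as an MQTT message over LTE.
apply_update(b"print('hello from the new main.py')\n", target="main_demo.py")
print(open("main_demo.py").read())
```

On the GPy itself, boot.py would first attach LTE with Pycom's network.LTE class, fetch the new script over MQTT, call something like apply_update(), and then reboot with machine.reset() so the new main.py runs.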
Another person asked the same question as me but didn't get any helpful answer. Link to the question: OTA on Pycom gPy
Thank you for helping.

Unity post processing layer does not appear

I am very new to Unity and am trying to add post-processing to a scene. I copied a tutorial; however, when I added the post-processing layer to my camera and went to change the layer to Post Processing, there was no Post Processing layer.
There should be a layer called Post Processing here, unless I did something wrong.
It doesn't seem like anyone else has had this problem, but that could be because it's an easy fix.
I'd recommend restarting the project once, assuming you've correctly imported Post Processing from the Package Manager (to make sure the layer names load correctly).
Also, the layer name can be created by you and assigned to the desired GameObjects, so the name "PostProcessing" as a layer doesn't really matter. You can just add the layer name yourself if it doesn't appear, and continue as in the tutorial.
You can assign it to Default if it doesn't appear (Unity will warn you, though).
The PostProcessing layer is usually generated when you import the package. If it's not there, try using a project template that includes PostProcessing when creating a new project.

Wwise, Resonance Audio and Unity Integration. Configure Wwise Resonance Audio Plugin

I tried to get a response on GitHub, but with no activity about this issue there, I will ask here.
I have been following the documentation and am stuck at the point where I have imported the WwiseResonanceAudioRoom mixer effect on the bus in Wwise and do not see anything in the properties. I am not sure if I am supposed to. Right after that part, the documentation says: "Note: If room properties are not configured, the room effects bus outputs silence." I was wondering if this was the case, and yes, it outputs silence. I even switched the effect to see if it would just pass audio, and it does, just not with the Room effect, so at least I know my routing is correct.
So now this leads to my actual question: how do you configure the plugin? I know there is some documentation, but there is not one tutorial or step-by-step guide for us non-code-savvy audio folk. I have spent the better half of my week trying to figure this out because, frankly, for the time being this is the only audio spatialization plugin that features audio occlusion, obstruction, and propagation within Wwise.
Any help is appreciated,
Thank you.
I had Room Effects with Resonance Audio working in another project last year, under its former name, GVR. There are no properties on the Room Effect itself. These effect settings and properties reside in the Unity Resonance prefabs.
I presume you've followed the latter tutorial on Room Effects here:
https://developers.google.com/resonance-audio/develop/wwise/getting-started
Then what you need to do is add the Room Effect assets to your Unity project. The assets are found in the Resonance Audio zip package, next to the authoring and SDK files. Unzip the Unity files into your project, add a Room Effect to your scene, and you should be able to see the properties in the inspector of the room object.
Figured it out, thanks to Egil Sandfeld, here: https://github.com/resonance-audio/resonance-audio-wwise-sdk/issues/2#issuecomment-367225550
To elaborate: I had the SDKs implemented, but I went ahead and replaced them anyway, and it worked!

Can both image targets and object targets be added to one single database in Unity Vuforia?

I am developing an Android app where I have to train my app to recognize two images and four objects. I created one single database on the Vuforia developer site, where I added all the image and object targets, and created the Unity package. Now neither the images nor the objects are getting recognized.
Probably the problem is the same for objects and images.
I think you should share some more info about what you're doing, as well as some meaningful code implementing it.
Without that, I would suggest:
verify that the database and trackables are loaded and active at runtime
if so, see in the console whether the trackables are tracked by Vuforia
if so, verify the code enabling your augmentations
Please confirm whether you have run through these steps already and what results you got. I can share some code and further tips once the issue is a little bit more specific.
Regards